Ecosemiotics

Ecosemiotics is a branch of semiotics in its intersection with human ecology, ecological anthropology and ecocriticism. It studies sign processes in culture, which relate to other living beings, communities, and landscapes. Ecosemiotics also deals with sign-mediated aspects of ecosystems.
As stressed in ecosemiotic studies, the environment has semiotic qualities at different levels and in different ways. The material environment has affordances and the potential to participate in sign relations. Animal species attribute meanings to the environment based on their needs and umwelts. In human culture, the environment can become meaningful in literary and artistic representations or through the symbolization of animals or landscapes. Cultural representations of the environment in turn influence the natural environment through human actions.
Ecosemiotics analyzes the processes, transmissions, and problems that occur in and between the different semiotic layers of the environment. Its central focus is the role of concepts (the sign-based models people hold) in designing and changing the environment. Concepts used in ecosemiotic analysis include, for instance, semiocide, affordance, ecofield, consortium, and dissent.
The field was initiated by Winfried Nöth and Kalevi Kull, and later elaborated by Almo Farina and Timo Maran. Ecosemiotics includes (or largely overlaps with) the semiotics of landscape.
See also
Biosemiotics
Ecolinguistics
Environmental hermeneutics
Environmental history
Jakob von Uexküll Centre
Literature
Nöth, Winfried 1998. Ecosemiotics. Sign Systems Studies 26: 332–343.
Kull, Kalevi 1998. Semiotic ecology: Different natures in the semiosphere. Sign Systems Studies 26: 344–371.
Maran, Timo; Kull, Kalevi 2014. Ecosemiotics: main principles and current developments. Geografiska Annaler: Series B, Human Geography 96(1): 41–50.
Siewers, Alfred Kentigern 2013. Re-Imagining Nature: Environmental Humanities and Ecosemiotics. Bucknell University Press.
References
External links
Evolutionary ecopsychology and ecosemiotics
Semiotics
Ecology
Environmental humanities
Soft media

Soft media comprises media organizations that primarily deal with commentary, entertainment, arts and lifestyle. Soft media can take the form of television programs, magazines or print articles. The communication from soft media sources has been referred to as soft news as a way of distinguishing it from serious journalism, called hard news.
Soft news is defined as information that is primarily entertaining or personally useful. Soft news is often contrasted with hard news, which Harvard political scientist Thomas Patterson defines as the "coverage of breaking events involving top leaders, major issues, or significant disruptions in the routines of daily life". While the purposes of both hard and soft news include informing the public, the two differ from one another in both the information contained within them and the methods that are used to present that information. Communicated through forms of soft media, soft news is usually contained in outlets that primarily serve as sources of entertainment, such as television programs, magazines, or print articles.
Background
Many terms are associated with soft media, among them soft news and infotainment; these are, in large part, its byproducts. A fundamental role of the media, whether hard or soft, is to inform the public. While that role is not in flux, the form is. For many Americans the lines between hard and soft media are blurring as news organizations blend news shows with entertainment in their broadcasts. Many of these organizations create a narrative that begins with the first broadcast in the morning and ends with the evening entertainment venue.
During the 2004 U.S. presidential campaign, magazines that would otherwise be considered lifestyle or entertainment publications (Vogue, Ladies' Home Journal and O: The Oprah Magazine) became sources of political information. This illustrates that media organizations across the spectrum are emerging as suppliers of information on policy and politics.
Effects
The average American consumes more than five hours of television per day. During this time, they are exposed to a variety of news and information that either directly (hard news) or indirectly (soft news) focuses on politics, foreign affairs, and policy. These topics are aired on prime-time television in a variety of programs, especially in times of national crisis. Studies have shown that news presented in this context attracts the attention of otherwise politically uninvolved people. Overall, more people tend to watch hard news than soft news shows. However, those who are less politically involved can gain more from watching soft news than those who are highly politically involved.
Studies have also shown that exposure to soft news can affect consumers' attitudes. Soft media has been shown to increase a candidate’s likability, which has a greater appeal to those people who are generally not politically involved than a candidate's political policies do. For example, in the 2000 presidential election, researchers found politically uninvolved people who viewed candidates on daytime talk shows were likely to find those candidates more likable than their opponents. Studies show that instead of using a candidate's political policy to determine whether or not the candidate represents their interests, non-politically involved people also use a candidate's likability.
A study conducted in Australia concluded that hard news is more widely followed than soft news, except in the realm of sports. The study found that regardless of age and sex, soft news is less likely to be followed than hard news. Given this smaller following, the study rejected the claim that soft news increases public engagement.
In another study examining the effects of soft news consumption on voting behaviors, the phenomenon known as the Oprah Effect was established, which indicated that the intake of soft news has a positive effect on patterns of voting consistency in people who are otherwise politically unaware. In addition to this, Baum and Jamison's study found that watching interviews of candidates on soft media helps improve voting consistency in otherwise politically uninformed people.
See also
Junk food news
Least objectionable program
References
Mass media
Social effects of rock music

The popularity and worldwide scope of rock music resulted in a powerful impact on society in the 20th century, particularly among the baby boomer generation. Rock and roll influenced daily life, fashion, social attitudes, and language in a way few other social developments have matched. As the original generation of rock and roll fans matured, the music became an accepted and deeply interwoven thread in popular culture. Beginning in the early 1950s, rock songs began to be used in a few television commercials; within a decade, this practice became widespread, and rock music also featured in film and television program soundtracks. By the 1980s, rock music culture had become the dominant form of popular music culture in the United States and other Western countries, before seeing a decline in subsequent years.
Race
In the crossover of African American "race music" to a growing white youth audience, the popularization of rock and roll involved both black performers reaching a white audience and white performers appropriating African-American music.
Rock and roll appeared at a time when racial tensions in the United States were entering a new phase, with the beginnings of the civil rights movement for desegregation leading to the 1954 Supreme Court ruling that abolished the policy of "separate but equal", though the ruling proved extremely difficult to enforce in parts of the United States.
The coming together of white youth audiences and black music in rock and roll inevitably provoked strong white racist reactions within the US, with many whites condemning its breaking down of barriers based on color. Many observers saw rock and roll as heralding the way for desegregation by creating a new form of music that encouraged racial cooperation and shared experience. Many authors have argued that early rock and roll was instrumental in the way both white and black teenagers identified themselves.
Sex and drugs
The rock and roll lifestyle was popularly associated with sex and drugs. Many of rock and roll's early stars (as well as their jazz and blues counterparts) were known as hard-drinking, hard-living characters. During the 1960s the lifestyles of many stars became more publicly known, aided by the growth of the underground rock press. Musicians had always attracted the attention of "groupies" (girls who followed musicians), who spent time with and often performed sexual favors for band members.
As the stars' lifestyles became more public, the popularity and promotion of recreational drug use by musicians may have influenced use of drugs and the perception of acceptability of drug use among the youth of the period. For example, when in the late 1960s the Beatles, who had previously been marketed as clean-cut youths, started publicly acknowledging using LSD, many fans followed. Journalist Al Aronowitz wrote "...whatever the Beatles did was acceptable, especially for young people." Jerry Garcia, of the rock band Grateful Dead said, "For some people, taking LSD and going to Grateful Dead show functions like a rite of passage ... we don't have a product to sell; but we do have a mechanism that works."
In the late 1960s and early 1970s, much of the rock and roll cachet associated with drug use dissipated as rock music suffered a series of drug-related deaths, including the 27 Club-member deaths of Jimi Hendrix, Janis Joplin and Jim Morrison. Although some amount of drug use remained common among rock musicians, a greater respect for the dangers of drug consumption was observed, and many anti-drug songs became part of the rock lexicon, notably "The Needle and the Damage Done" by Neil Young (1972).
Many rock musicians, including John Lennon, Paul McCartney, Mick Jagger, Bob Dylan, Jerry Garcia, Stevie Nicks, Jimmy Page, Keith Richards, Bon Scott, Eric Clapton, Pete Townshend, Brian Wilson, Carl Wilson, Dennis Wilson, Steven Tyler, Scott Weiland, Sly Stone, Ozzy Osbourne, Mötley Crüe, Layne Staley, Kurt Cobain, Lemmy, Bobby Brown, Buffy Sainte Marie, Dave Matthews, David Crosby, Anthony Kiedis, Dave Mustaine, David Bowie, Richard Wright, Phil Rudd, Phil Anselmo, James Hetfield, Kirk Hammett, Joe Walsh, Julian Casablancas and others, have acknowledged battling addictions to many substances including alcohol, cocaine and heroin; many of these have successfully undergone drug rehabilitation programs, but others have died.
In the early 1980s, along with the rise of the band Minor Threat, the straight edge lifestyle became popular. The straight edge philosophy of abstinence from recreational drugs, alcohol, tobacco and sex became associated with some hardcore punks through the years, and it remains popular with youth today.
Fashion
Rock music and fashion have been inextricably linked. In the mid-1960s in the UK, a rivalry arose between "Mods" (who favoured 'modern' Italian-led fashion) and "Rockers" (who wore motorcycle leathers), and each style had its own favoured musical acts. (The rivalry would later form the backdrop for The Who's rock opera Quadrophenia.) In the 1960s, The Beatles brought mop-top haircuts, collarless blazers, and Beatle boots into fashion.
Rock musicians were also early adopters of hippie fashion and popularised such styles as long hair and the Nehru jacket. As rock music genres became more segmented, what an artist wore became as important as the music itself in defining the artist's intent and relationship to the audience. In the early 1970s, glam rock became widely influential, featuring glittery fashions, high heels and camp. In the late 1970s, disco acts helped bring flashy urban styles to the mainstream, while punk groups began wearing mock-conservative attire (including suit jackets and skinny ties) in an attempt to be as unlike as possible the mainstream rock musicians who still favored blue jeans and hippie-influenced clothes.
Heavy Metal bands in the 1980s often favoured a strong visual image. For some bands, this consisted of leather or denim jackets and pants, spike/studs and long hair. Visual image was a strong component of the glam metal movement.
In 1981, MTV launched, marking a large shift in the music world. MTV became a major cultural force among the young, and fashion was one of the areas on which it had the greatest effect. With debuts like Madonna's iconic underwear-as-outerwear look, and with its heavy rotation of heavy metal, new wave, and other genres, the channel promoted each artist's brand of fashion into the wider culture through the sheer visibility that its music videos and other content gave these artists.
In the early 1990s, the popularity of grunge brought in a punk-influenced fashion of its own, including torn jeans, old shoes, flannel shirts, backward baseball caps, and long hair grown out against the clean-cut image that prevailed at the time in heavily commercialized pop music culture.
Musicians continue to be fashion icons; pop-culture magazines such as Rolling Stone often include fashion layouts featuring musicians as models.
Authenticity
Rock musicians and fans have consistently struggled with the paradox of "selling out"—to be considered "authentic", rock music must keep a certain distance from the commercial world and its constructs; however it is widely believed that certain compromises must be made to become successful and to make music available to the public. This dilemma has created friction between musicians and fans, with some bands going to great lengths to avoid the appearance of "selling out" (while still finding ways to make a lucrative living). In some styles of rock, such as punk and heavy metal, a performer who is believed to have "sold out" to commercial interests may be labelled with the pejorative term "poseur".
If a performer first comes to public attention with one style, any further stylistic development may be seen by long-time fans as selling out. On the other hand, managers and producers may progressively take more control of the artist, as happened, for instance, in Elvis Presley's swift transition from "The Hillbilly Cat" to "your teddy bear". It can be difficult to define the difference between seeking a wider audience and selling out. Ray Charles left behind his classic formulation of rhythm and blues to sing country music, pop songs and soft-drink commercials. In the process, he went from a niche audience to worldwide fame. Bob Dylan faced consternation from fans for embracing the electric guitar. In the end, it is a moral judgement made by the artist, the management, and the audience.
Social activism
Love and peace were very common themes in rock music during the 1960s and 1970s. Rock musicians have often attempted to address social issues directly as commentary or as calls to action. The first rock protest songs were heard during the Vietnam War, inspired by the songs of folk musicians such as Woody Guthrie and Bob Dylan; they ranged from abstract evocations of peace (Peter, Paul and Mary's "If I Had a Hammer") to blunt anti-establishment diatribes (Crosby, Stills, Nash & Young's "Ohio"). Other musicians, notably John Lennon and Yoko Ono, were vocal in their anti-war sentiment both in their music and in public statements, with songs such as "Imagine" and "Give Peace a Chance".
Famous rock musicians have adopted causes ranging from the environment (Marvin Gaye's "Mercy Mercy Me (The Ecology)") and the Anti-Apartheid Movement (Peter Gabriel's "Biko"), to violence in Northern Ireland (U2's "Sunday Bloody Sunday") and worldwide economic policy (the Dead Kennedys' "Kill the Poor"). Another notable protest song is Patti Smith's recording "People Have the Power." On occasion this involvement would go beyond simple songwriting and take the form of sometimes-spectacular concerts or televised events, often raising money for charity and awareness of global issues.
Live Aid concerts
Rock and roll as social activism reached a milestone in the Live Aid concerts held on July 13, 1985. An outgrowth of the 1984 charity single "Do They Know It's Christmas?", Live Aid became the largest musical concert in history, with performers on two main stages (one in London, England and the other in Philadelphia, USA), plus other acts performing in other countries, all televised worldwide. The concert lasted 16 hours and featured nearly everybody at the forefront of rock and pop in 1985. The charity event raised millions of dollars for famine relief in Africa. Live Aid became a model for many other fund-raising and consciousness-raising efforts, including the Farm Aid concerts for family farmers in North America, and televised performances benefiting victims of the September 11 attacks. Live Aid itself was reprised in 2005 with the Live 8 concerts, held to raise awareness of global economic policy. Environmental issues have also been a common theme, one example being Live Earth.
Religion
The common usage of the term "rock god" acknowledges the quasi-religious quality of the adulation some rock stars receive. Songwriters like Pete Townshend have explored spirituality within their work. John Lennon became infamous, particularly in the United States, after he remarked in 1966 that The Beatles were "more popular than Jesus", with Beatles records being burned in public in some places in the South. However, he later said that this statement was misunderstood and not meant to be anti-Christian.
Iron Maiden, Metallica, Ozzy Osbourne, King Diamond, Alice Cooper, Led Zeppelin, Marilyn Manson, Slayer and numerous others have also been accused of being Satanists, of being immoral, or of otherwise having an "evil" influence on their listeners. Anti-religious sentiments also appear in punk and hardcore: examples include the song "Filler" by Minor Threat and the name and famous logo of the band Bad Religion, and criticism of Christianity and of religion in general is an important theme in anarcho-punk and crust punk.
Christianity
Christian rock, alternative rock, metal, punk, and hardcore are specific, identifiable genres of rock music with strong Christian overtones and influence. Many groups and individuals who are not considered Christian rock artists hold religious beliefs themselves. For example, The Edge and Bono of U2 are a Methodist and an Anglican, respectively; Bruce Springsteen is a Roman Catholic; and Brandon Flowers of The Killers is a Latter-day Saint. Carlos Santana, Ted Nugent, and John Mellencamp are other examples of rock stars who profess some form of Christian faith.
However, some conservative Christians single out the music genres of hip hop and rock, as well as blues and jazz, as containing jungle beats, or jungle music, and claim that this beat or musical style is inherently evil, immoral, or sensual. Thus, according to them, any song in the rap, hip hop and rock genres is inherently evil because of the song's musical beat, regardless of the song's lyrics or message. A few even extend this analysis to Christian rock songs.
Christian conservative author David Noebel is one of the most notable opponents of the existence of jungle beats. In his writings and speeches, Noebel held that the use of such beats in music was a communist plot to subvert the morality of the youth of the United States. Pope Benedict XVI was quoted as saying, according to the British Broadcasting Corporation, that "Rock... is the expression of the elemental passions, and at rock festivals it assumes a sometimes cultic character, a form of worship, in fact, in opposition to Christian worship."
Satanism
Some metal bands use demonic imagery for artistic and/or entertainment purposes, though many do not worship or believe in Satan. Ozzy Osbourne is reported to be Anglican and Alice Cooper is a known born-again Christian. In some cases, though, metal performers have expressed satanic views. Numerous figures in the early Norwegian black metal scene were Satanists; the best-known example is Euronymous, who claimed that he worshipped Satan as a god. Varg Vikernes (then called "the Count" or Grishnak) has also been called a Satanist, even though he has rejected that label. Even within this localized musical subgenre, however, the arson attacks against Christian churches and other centers of worship were condemned by some prominent figures within the Norwegian black metal scene, such as Kjetil Manheim.
References
Further reading
Alain Dister, The Story Of Rock Smash Hits And Superstars (New York: Thames and Hudson, 1993), 40.
Jeff Godwin, The Devil's Disciples: the Truth about Rock (Chino, Calif.: Chick Publications, 1985).
Dan Peters, Steve Peters, and Cher Merrill. Why Knock Rock? (Minneapolis, Minn.: Bethany House Publishers, 1984).
Perry F. Rockwood, Rock Music or Rock of Ages (Halifax, N.S.: People's Gospel Hour, [1980?]). Without ISBN
External links
Rock music museum
San Francisco Rock and Roll Hall of Fame
Topics in culture
Rock music
History of rock music
Postmodern religion

Postmodern religion is any type of religion that is influenced by postmodernism and postmodern philosophies. Examples of religions that may be interpreted using postmodern philosophy include Postmodern Christianity, Postmodern Neopaganism, and Postmodern Buddhism. Postmodern religion is not an attempt to banish religion from the public sphere; rather, it is a philosophical approach to religion that critically considers orthodox assumptions (that may reflect power differences in society rather than universal truths). Postmodern religious systems of thought view realities as plural, subjective, and dependent on the individual's worldview. Postmodern interpretations of religion acknowledge and value a multiplicity of diverse interpretations of truth, being, and ways of seeing. There is a rejection of sharp distinctions and global or dominant metanarratives in postmodern religion, and this reflects one of the core principles of postmodern philosophy. A postmodern interpretation of religion emphasises the key point that religious truth is highly individualistic, subjective, and resides within the individual.
Eclecticism and non-dogmatic theology
According to postmodern philosophy, society is in a state of constant change. There is no absolute version of reality, no absolute truths. Postmodern religion strengthens the perspective of the individual and weakens the strength of institutions and religions that deal with objective realities. Postmodern religion considers that there are no universal religious truths or laws. Rather, reality is shaped by social, historical, and cultural contexts according to the individual, place, and/or time. Individuals may seek to draw eclectically on diverse religious beliefs, practices, and rituals in order to incorporate these into their own religious worldview.
In Japan, Shinto and Buddhist ideas are woven together and coexist. Some people who practice Buddhism may be syncretic in their approach. Syncretism occurs among the Eastern religions. Similarly, versions of Hinduism and Neopaganism may also be interpreted from a postmodern perspective. A postmodern religion can be non-dogmatic, syncretic, and eclectic: in drawing from various faiths and traditions, postmodern religion challenges the notion of absolute truth.
A postmodern interpretation of religion emphasizes the importance of questioning and considering historical bias when studying religion from a historical perspective. For example, doctoral studies in religion at Harvard emphasise studying religion using wider contexts of history and comparative studies. It is these "wider contexts" that make religion a valid subject of postmodern contemplation. Studies of religion are often approached from a historical perspective. A postmodern interpretation of a religion acknowledges that history can be represented in an inherently biased way, reinforcing the mainstream ideologies of those in power.
Versions of truth
Postmodern religion acknowledges and accepts different versions of truth. For example, rituals, beliefs and practices can be invented, transformed, created and reworked based on constantly shifting and changing realities, individual preferences, myths, legends, archetypes, rituals and cultural values and beliefs. Individuals who interpret religion using postmodern philosophy may draw from the histories of various cultures to inform their religious beliefs - they may question, reclaim, challenge and critique representations of religion in history based on the theories of postmodernism, which acknowledge that realities are diverse, subjective and depend on the individual's interests and interpretations.
Appeal to marginalized groups
Members of groups in society who face discrimination or who are marginalized, such as women, the gay community, or ethnic minority groups, may be drawn to postmodern religious thinking. For example, the interpretation of Christianity from a postmodern perspective offers the potential for groups in society, such as the gay community or women, the ability to connect with a version of reality or truth that does not exclude or marginalize them. A postmodern interpretation of religion may focus on considering a religion without orthodox assumptions (that may reflect power differences in society rather than universal truths). In Semitic Neopaganism, a postmodern approach to Neopaganism involves challenging or reclaiming mainstream versions of reality and truth that may be more inclusive of women. Minority groups and the socially or economically disadvantaged may be drawn to follow a postmodern approach to religion because of the way that postmodern philosophy empowers the individual and provides an "emancipatory framework" with which to challenge mainstream ideologies or dominant power structures.
Postmodern interpretations of religion
Christianity
Interpreting Christianity using theories of postmodernism usually involves finding the balance between acknowledging pluralism, a plurality of views and historical influence on doctrine, and avoiding the extremes of postmodernism. Christian philosopher John Riggs proposes that postmodernism and Christianity have much to offer each other. He asserts that Christians who have adopted elements of postmodern thinking still need to acknowledge that some notions of reality need to be fixed and real in order to have "meaningful claims about vital topics such as ethics and God". An example of a specific religious movement that uses postmodern thinking is the Emerging Church.
Neopaganism
Neopaganism can be interpreted from a postmodern perspective. Postmodern religion can be non-dogmatic, syncretic, and eclectic; it may draw from various faiths and traditions and challenge the notion of absolute truth. Wicca, the largest tradition of Neopaganism, can be interpreted using postmodern philosophies. Postmodern interpretations of Wicca often lead the practitioner to adopt a more eclectic approach, because the very nature of postmodern theory involves the acceptance of many versions of truth and reality.
Eclectic Wicca is the most widely adapted form of Wicca in America today and the core philosophies of postmodern thinking are often used in order to form an interpretation of Wicca that is highly individual and characterized by the subjective questioning of reality and truth. This version of Wicca may draw eclectically from, adapt, challenge, and adopt a wider range of religious beliefs and perspectives, such as Buddhism, Shintoism, Druidism, Hinduism, and Goddess movements such as Dianic Wicca, Celtic Wicca, and Semitic Neopaganism.
Postmodern interpretations of Wicca tend to be context driven, egalitarian, immanent and experiential. Academic texts often represent Wicca in literature and research as a specific tradition that is underpinned by discourses of modernism.
Postmodern spirituality
Postmodern spirituality refers to new forms of spirituality that arise in the contexts of postmodern societies in a globalised world. The formerly universalistic worldviews of modernity become contested, and old explanations and certainties are questioned.
See also
Chaos magic
References
Further reading
Ahlbäck, Tore (ed.) (2009): Postmodern spirituality. (based on papers read at the Symposium on Postmodern Spirituality, held at Åbo, Finland, on 11–13 June 2008)
Benedikter, Roland (2006): Postmodern spirituality. A dialogue in five parts - Part V: Can Only A God Save Us? Postmodern Proto-Spirituality And The Current Global Turn To Religion. (online)
Dunn, Patrick (2005): Postmodern Magic: The Art of Magic in the Information Age. St. Paul
Griffin, David Ray (1988): Spirituality and society: postmodern visions. Albany.
Griffin, David Ray (1989): God and religion in the postmodern world: essays in postmodern theology. New York
Hart, Kevin (ed.) (2005): The experience of God. A postmodern response. New York.
King, Ursula (1998): "Spirituality in a postmodern age: faith and praxis in new contexts". In: King, Ursula (ed.) (1998): Faith and Praxis in a Postmodern Age. London.
Muldoon, Tim (2005): Postmodern spirituality and the Ignatian Fundamentum. (short review)(full text)
Philosophical schools and traditions
Postmodernism
Situation analysis

In strategic management, situation analysis (or situational analysis) refers to a collection of methods that managers use to analyze an organization's internal and external environment to understand the organization's capabilities, customers, and business environment. The situation analysis can include several methods of analysis such as the 5C analysis, SWOT analysis and Porter's five forces analysis.
In marketing
In marketing, a marketing plan is created to guide businesses in communicating the benefits of their products to the needs of potential customers. The situation analysis is the second step in the marketing plan and is a critical step in establishing a long-term relationship with customers.
The parts of a marketing plan are:
Introduction
Situation analysis
Objectives
Budgeting
Strategy
Execution
Evaluation
The situation analysis looks at both the macro-environmental factors that affect many firms within the environment and the micro-environmental factors that specifically affect the firm. Its purpose is to give a company insight into its organizational and product position, as well as the overall survival of the business, within the environment. Companies must be able to summarize the opportunities and problems within the environment so they can understand their capabilities within the market.
5C analysis
While a situation analysis is often referred to as the "3C analysis", the extension to the 5C analysis has allowed businesses to gain more information on internal and external factors to aid in strategic decision-making. The 5C analysis is considered the most useful and common way to analyze the market environment, because of the extensive information it provides.
Company
The company analysis involves evaluation of the company's objectives, strategy, and capabilities. These indicate to an organization the strength of the business model, whether there are areas for improvement, and how well an organization fits the external environment.
Goals and objectives: An analysis of the mission of the business, the industry of the business and the stated goals required to achieve the mission.
Position: An analysis of the marketing strategy and the marketing mix.
Performance: An analysis of how effectively the business is achieving its stated mission and goals.
Product line: An analysis of the products manufactured by the business and how successful they are in the market.
Competitors
The competitor analysis takes into consideration a competitor's position within the industry and the potential threat it may pose to other businesses. The main purpose of the competitor analysis is for businesses to analyze a competitor's current and potential nature and capabilities so they can prepare against competition. The competitor analysis looks at the following criteria:
Identify competitors: Businesses must be able to identify competitors within their industry. Identifying whether competitors provide the same services or products to the same customer base is useful in gaining knowledge of direct competitors. Both direct and indirect competitors must be identified, as well as potential future competitors.
Assessment of competitors: The competitor analysis looks at competitor goals, mission, strategies and resources. This supports a thorough comparison of goals and strategies of competitors and the organization.
Predict future initiatives of competitors: An early insight into the potential activity of a competitor helps a company prepare against competition.
Customers
Customer analysis can be vast and complicated. Some of the important areas that a company analyzes include:
Demographics
Advertising that is most suitable for the demographic
Market size and potential growth
Customer wants and needs
Motivation to buy the product
Distribution channels (retail, online, wholesale, etc.)
Quantity and frequency of purchase
Income level of customer
Collaborators
Collaborators are useful for businesses as they allow for an increase in the creation of ideas, as well as an increase in the likelihood of gaining more business opportunities. The types of collaborators are:
Agencies: Agencies are the middlemen of the business world. When businesses need a specific worker who specializes in the trade, they go to a recruitment agency.
Suppliers: Suppliers provide the raw materials required to build products. Different types of suppliers include manufacturers, wholesalers, merchants, franchisors, importers and exporters, independent craftspeople and drop shippers. Each category of supplier can bring a different skill and experience to the company.
Distributors: Distributors are important as they are the holding areas for inventory. Distributors can help manage manufacturer relationships as well as handle vendor relationships.
Partnerships: Business partners share assets and liabilities, allowing for a new source of capital and skills.
Businesses must be able to identify whether the collaborator has the capabilities needed to help run the business as well as an analysis on the level of commitment needed for a collaborator-business relationship.
Climate
To fully understand the business climate and environment, many factors that can affect the business must be researched and understood. An analysis of the climate is also known as a PEST analysis. The types of climate/environment firms have to analyze are:
Political and regulatory environment: An analysis of how actively the government regulates the market with their policies and how it would affect the production, distribution and sale of the goods and services.
Economic environment: An analysis of macroeconomic trends, such as exchange rates and inflation, that can influence businesses.
Social/cultural environment: An analysis interpreting the trends of society, which includes the study of demographics, education, culture etc.
Technological analysis: An analysis of technology helps improve on old routines and suggest new methods for being cost efficient. To stay competitive and gain an advantage over competitors, businesses must sufficiently understand technological advances.
SWOT
A SWOT analysis is another method of situation analysis that examines the strengths and weaknesses of a company (internal environment) as well as the opportunities and threats within the market (external environment). A SWOT analysis looks at both current and future situations. The goal is to build on strengths as much as possible while reducing weaknesses. This analysis helps a company come up with a plan that keeps it prepared for a number of potential scenarios, as part of corporate or strategic planning.
Porter's five forces industry analysis
Porter's model involves scanning the environment for threats from competitors and identifying problems early on to minimize the threats they pose. This model can apply to any type of business, from small to large. Porter's model is not just for businesses; it can also be applied to a country to help gain insight into creating a competitive advantage in the global market. The ultimate purpose of Porter's five forces model is to help businesses compare and analyze their profitability and position within the industry against indirect and direct competition.
References
Strategic management
Market research
Cultural radicalism

Cultural radicalism (Danish: Kulturradikalisme) was a movement first in Danish, and later in Nordic culture in general. It was particularly strong in the interwar period, but its philosophy has its origin in the 1870s and a great deal of modern social commentary still refers to it.
At the height of the cultural radical movement it was referred to as modern. The words cultural radical and cultural radicalism were first used in an essay by Elias Bredsdorff in the broadsheet newspaper Politiken in 1956. Bredsdorff described cultural radicals as people who are socially responsible with an international outlook.
Cultural radicalism has usually been described as the heritage of Georg Brandes's Modern Breakthrough, the foundation and early editorials of the newspaper Politiken, the foundation of the political party Radikale Venstre, and the magazine Kritisk Revy by Poul Henningsen (PH).
The values most commonly associated with cultural radicalism are among others: criticism of religion, opposition to social norms, criticism of Victorian sexual morality, anti-militarism and an openness to new cultural input other than the classic western (e.g. jazz, modern architecture, art, literature and theater).
Internationally
Cultural radicalism is also used outside of Denmark. In Scandinavia, it often refers to the Danish movement, but elsewhere, the concept may just share the etymology. In Sweden, cultural radicalism has been seen as opposition to the Swedish church and to Neo-Victorian sexual morality. In Norway the movement has been associated with the magazine Mot Dag in the 1930s and its authors, such as Sigurd Hoel and Arnulf Øverland. In the US, cultural radicalism is sometimes used as the opposite of cultural conservatism, especially in the context of culture wars.
Cultural radicals
Kjeld Abell
Edvard Brandes
Georg Brandes
Bernhard Christensen
Mogens Fog
Poul Henningsen
Edvard Heiberg
Viggo Hørup
Hans Kirk
Klaus Rifbjerg
Ove Rode
Hans Scherfig
Tøger Seidenfaden
August Strindberg
See also
Modern Breakthrough
Politiken
Radikale Venstre
Russian nihilist movement
External links
Cultural Radicalism in the Danish Democracy Canon
Denmark/Historical perspective: cultural policies and instruments
Kulturradikalismen on leksikon.org (in Danish)
Kulturradikal/kulturradikalisme (in Danish)
Intellectual history
Modernism
Philosophy of culture
Political philosophy
Radicalism (historical)
Professionalization

Professionalization or professionalisation is a social process by which any trade or occupation transforms itself into a true "profession of the highest integrity and competence." The definition of what constitutes a profession is often contested. Professionalization tends to result in establishing acceptable qualifications, one or more professional associations to recommend best practice and to oversee the conduct of members of the profession, and some degree of demarcation of the qualified from unqualified amateurs (that is, professional certification). It is also likely to create "occupational closure", closing the profession to entry from outsiders, amateurs and the unqualified.
Occupations not fully professionalized are sometimes called semiprofessions. Critics of professionalization view overzealous versions, driven by perverse incentives (essentially a modern analogue of the negative aspects of guilds), as a form of credentialism.
Process
The process of professionalization creates "a hierarchical divide between the knowledge-authorities in the professions and a deferential citizenry." This demarcation is often termed "occupational closure", as it means that the profession then becomes closed to entry from outsiders, amateurs and the unqualified: a stratified occupation "defined by professional demarcation and grade." The origin of this process is said to lie with the guilds of the Middle Ages, which fought for exclusive rights to practice their trades as journeymen and to engage unpaid apprentices. It has also been called credentialism, a reliance on formal qualifications or certifications to determine whether someone is permitted to undertake a task or to speak as an expert. It has also been defined as "excessive reliance on credentials, especially academic degrees, in determining hiring or promotion policies." It has been further defined as occurring where the credentials for a job or position are upgraded even though there is no skill change that makes the increase necessary.
Professions also possess power, prestige, high income, high social status and privileges; their members soon come to comprise an elite class of people, cut off to some extent from the common people, and occupying an elevated station in society: "a narrow elite ... a hierarchical social system: a system of ranked orders and classes."
The professionalization process tends to establish the group norms of conduct and qualification of members of a profession and tends also to insist that members of the profession achieve "conformity to the norm" and abide more or less strictly by the established procedures and any agreed code of conduct, which is policed by professional bodies, for "accreditation assures conformity to general expectations of the profession." Different professions are organized differently. For example, doctors desire autonomy over entrepreneurship. Professions want authority because of their expertise. Professionals are encouraged to have a lifetime commitment to their field of work.
Eliot Freidson (1923–2005) is considered one of the founders of the sociology of professions.
History
Very few professions existed before the 19th century, although most societies have always valued someone who was competent and skilled in a particular discipline. Governments were especially in need of skilled people to complete various duties. Professionalism as an ideology only started in the early 19th century in North America and Western Europe.
Professions began to emerge rapidly. However, a person who wanted to become a professional had to gain the approval of members of the existing profession beforehand and only they could judge whether he or she had reached the level of expertise needed to be a professional. Official associations and credentialing boards were created by the end of the 19th century, but initially membership was informal. A person was a professional if enough people said they were a professional.
Adam Smith expressed support for professionalization, as he believed that professionals made a worthwhile contribution to society. They deserved power and high salaries due to the difficulties inherent in gaining entry to professional fields and living up to the rigorous demands of professionalism.
State licensure ensured that experience could not be substituted for certification, and decreased outside competition. A code of ethics for professionals ensured that the public receiving the service was well served and set guidelines for behavior within the professions. The code also ensured that penalties were put in place for those who failed to meet the stated standards; these could include termination of the license to practice. After the Second World War, professions were state controlled.
The degree of legislation and autonomy of self-regulated and regular professions varied across Canada. Possible causes include societal infrastructure, population density, social ideologies, and political mandates. Physicians and engineers were among the most successful at professionalizing their work. Medicine was consistently regulated before Confederation. Medicine and engineering became self-regulated, with their regulatory legislation altered five decades after Confederation, even though some other occupations could not. This meant these professions could oversee entry to practice, education, and the behavior of those practicing.
Physicians
Physicians are a profession that became autonomous, or self-regulating. Physicians started as a division of labor in health care. The social status of physicians made them feel they merited deference. Physicians' authority was based on persuasion. The autonomy and independence of the organization of physicians produced a professionally dominated division of labor. Licensing created monopolies on rights. Eliot Freidson commented that the profession had "the authority to direct and evaluate the work of others without in turn being subject to formal direction and evaluation by them". Doctors retained their dominance because hospitals were administered rather than managed. The medical field enjoyed more power than some other professions, for example engineering.
In the United States physicians from other countries could not practice unless they satisfied US regulation requirements.
To ensure social order and establish British institutions, Ontario established medicine as a self-regulating profession in the late 1860s. In many US states however, medicine remained unregulated until several decades later.
A publication in the British Medical Journal in 1840 revealed an increase in professional consciousness among medical practitioners in England. Physicians in the 19th century came to have the features of modern professions, a major one being autonomy. This was further emphasized with the establishment of a controlling body of the profession. Competition and overcrowding in the decades that followed also put pressure on governments to establish a system of registration and requirements for those who wished to practice. This led to the Medical Act of 1840. The resulting council consisted mostly of doctors, who were therefore in control of regulating their own profession. The act required its members to oversee medical education, keep track of the numbers of qualified practitioners, and regulate the profession for the government. It gave the qualified more power and set limitations on the unqualified. The exclusion of unqualified practitioners from government service was the most influential policy. Along with the act, the qualified practitioners came to be known as the "officially recognized" healers, and as such had a competitive advantage in the job market.
To reduce competition, the Medical Act's standards for qualifications were raised in 1858. Modern codes of medical ethics were also implemented in the 19th century. Again, this shows the high degree of power the profession had. As a result, many medical practitioners came to experience ethical problems. Unlike today, the concern was more with the behavior of doctors towards each other than towards their patients. This is suggested to be due to the changes in the medical world in the first half of the nineteenth century. Unlike in the pre-industrial age, distinctions between, say, surgeons and physicians were greatly reduced, replaced by a division mostly between consultants and general practitioners.
This new division caused disorder in establishing the roles and status of different types of practitioners. It led to more competition, as their various fields of expertise were not made clear, and thus resulted in accusations of unprofessional conduct among practitioners seeking to protect their own interests. Issues around the management of medical practitioners and their practice stemming from this change had to be attended to. In the second half of the 19th century, ethics were more severely monitored and disciplinary action against violators was put in effect, as allowed by the act of 1858, which even permitted the removal from practice of any practitioner violating the code of ethics. A more elaborate code of professional ethics emerged. A practitioner had no choice but to adhere to minimum standards if he wanted to keep his job and keep practicing.
The 19th-century education of physicians underwent some changes from the 18th century. Eighteenth-century training was an apprenticeship program: the apprentice and master worked together, so the level of training received varied from person to person. In the 19th century, hospital medical schools and universities gained popularity for teaching, and apprenticeships declined rapidly. Training became more standardized, and more standardized across the world as well, because the medical students attending these schools came from all over the world. With this came a sense of professional identity and community that made possible the modern profession seen today.
With the professionalization of medicine came the movement towards physical diagnosis of patients in the 19th century, which was believed to help treat patients better. Before this movement, physicians based their diagnoses on the interpretation of their patients' symptoms. Physical diagnosis became part of the modern professional practice of medicine. It was one of the major accomplishments of Parisian hospitals, and with the rise of Parisian pathological anatomy it became a very important clinical practice. Disease was believed to be an anatomical lesion inside the body, and physical examination was necessary to properly identify it. This new approach created the problem of growing diagnostic competence paired with limited treatment capacity, and put pressure on the physician not only to find and classify the illness but also to treat and cure the disease. Skepticism grew in the profession as fellow physicians watched each other for proper treatment of patients.
The invention of the stethoscope in 1816 led to auscultation and percussion being regularly employed in the physical diagnosis process. Diagnosis and treatment now had to be based on science. The rise of hospitals facilitated physical diagnosis. That being said, patients were often reluctant to undergo physical diagnosis, especially with the rise of new medical instruments. In fact, manuals were written to help physicians gain knowledge of proper "patient etiquette" and gain consent to perform certain procedures. Society had a hard time accepting the procedures required for the routine physical examination and its necessity, being more interested in the cure and the treatment effectiveness of the diagnosis.
Industrialization in the late nineteenth century resulted in a demand for physicians. In Canada, the industrializing towns and cities of the Maritimes gave their physicians plenty of opportunities to show their skills as emerging professionals. For example, medical doctors were needed to inspect tenement housing and the sanitary conditions of factories and schools, and to promote public and personal hygiene to reduce disease transmission.
Medical failures often hampered the reputation of these physicians, which made their status as professionals harder to establish and made it harder for the general population to accept them as such. Overcrowding also eventually became a problem. The profession called on the government for help, especially in the last quarter of the 19th century. Restrictions on who could enter medical schools, and higher demands on their education, were put in place. Greater attention to professional ethics was among the strategies employed to distinguish physicians as high-status professionals. Physicians also pressured the government to pay better attention to the health of its citizens, for example by resuming the collection of birth and death data, which had stopped in the Maritimes in 1877. Provincial medical boards, registration for practice across all provinces, better schools, and protection against unlicensed physicians and unskilled persons were some of the other actions taken.
Although medical techniques did improve in the nineteenth century, attempts to deny rights to the other competing professions in the health field made it seem as if medical doctors wanted to monopolize medical care and serve their own interests rather than the public welfare.
Engineers
Engineering, as it became a profession, had fewer restrictions in the 19th century. As it did not have mandatory licensing for entrants, competition was greater. Unlike physicians, engineers could not enjoy protection from competition; for instance, a person without a college degree could still become an engineer. Engineers could be independent. It was a semi-autonomous profession because it could still require extended training and it formed a body of specialized knowledge. The nature of their work meant that engineers were always influenced by business and industry. In many cases they did want to be independent. Oftentimes, they sought power through their connection with an organization. The engineering profession was much more collaborative.
In Canada, interprofessional conflict, differences in organization, and state lobbying caused the differences in timing and legislation among occupations such as engineering.
In engineering, the profession was initially organized only on a national or cross-provincial basis. For example, the Canadian Society of Civil Engineers was formed in 1887, before the profession was regulated in each province. Even then, legislation varied from province to province, due to resistance and opposition in the various provinces. For example, in Ontario the act on engineering did not pass until 1922, and it had to be altered to exempt all mining operations from the bill, because the mining industry was afraid the act would affect business and the ability to hire whomever it wanted. During times of rapid growth, regulations were added or altered to stave off overcrowding.
In the 19th century, an engineer qualified to practice in England would not have trouble practicing in Canada. To obtain an engineer's certificate in these countries, many demands had to be met. For example, in Ontario, Canada, certain mathematical skills had to be demonstrated for each class of engineering certificate. To practice as a Water Supply Engineer in England, a person had to obtain a certificate, granted only if the provisions of the Water Act of 1890 were met. There were few openings for employment as a civil engineer in England, although those who were good eventually found work.
In England, because production was controlled by craftsmen, creativity and quality of the product were seen as dominant factors in the emerging engineering profession. During the Industrial Revolution, whereas the United States focused its attention on standardization for mass production, England focused on methods of small-scale manufacturing. English engineers still emphasized quality in their work. Learning by practical experience was also strongly encouraged, and training new engineers became like an apprenticeship.
In France, engineers were more concerned with the theoretical aspect of engineering, specifically understanding its mathematical side. The French built "grandes écoles" of engineering, and state employment was the predominant form of work for engineers. Engineering practices and education depended upon cultural values and preferences. Oftentimes in the US, business and engineering managers influenced engineers' work.
In the United States, engineering was more focused on experience and on achieving material and commercial success. Manual labor was seen as something positive. In the late 19th century, the US was influenced by France to build schools for engineering training rather than relying on on-site training. Professional status was gained through corporate training. Unlike the other emerging professions mentioned earlier, engineering as a profession did not rely on the approval of peers but rather on corporate and government hierarchies (private industry).
The number of engineers in the United States increased by 2000 percent between 1880 and 1920. The Industrial Revolution created a demand for them; the main competition was Germany. Industry encouraged engineering to change from a craft to a profession. The standardization of practices during this time helped establish the professional image of engineers as experts. That being said, many business and factory owners did not particularly like this standardization, because they felt threatened that engineers would increase their authority and territory. Engineers themselves also desired standardization, to end labor troubles; it was believed that it would increase production and predictability.
Civil engineers were overtaken by mechanical engineers. In fact, the number of professional mechanical engineers increased by 600 percent, and college enrollment in this specialization outnumbered that in civil engineering; mechanical engineers were now more needed. Engineers accepted being classified as "professionals of a corporation", because they were still mostly industry workers anyway and valued the ideology of no government intervention in the economy.
Shortly before and during the Progressive Era, better organization of various fields of work, including engineering, took place because the era encouraged professionalism, equality, and progress. Systematization was a big part of it. For example, the American Society of Mechanical Engineers was founded in 1880 and met twice a year. Professional codes of ethics were also established for the profession. However, the growing profession of engineering still had difficulty organizing itself.
Making a professional image for engineers was difficult because of the occupation's prominent association with manual labor. Engineering struggles to this day to gain status similar to that of members of autonomous, self-regulating professions such as lawyers and physicians.
See also
Grade inflation
Occupational licensing
References
Bibliography
Andrew Delano Abbott, The System of Professions: Essay on the Division of Expert Labour, Chicago: University of Chicago Press, 1988
Jeffrey L. Berlant, Profession and Monopoly: A Study of Medicine in the United States and Great Britain, Berkeley, CA: University of California Press, 1975.
Charlotte G. Borst, Catching Babies: Professionalization of Childbirth, 1870–1920, Cambridge, MA: Harvard University Press, 1995
Robert Dingwall, Essays on Professions. Aldershot, England: Ashgate, 2008.
Eyre and Spottiswoode, Professional handbook, dealing with professions in the colonies / issued by the Emigrants Information Office Early Canadiana Online., 1892.
Eliot Freidson, Profession of Medicine: A Study of the Sociology of Applied Knowledge, Chicago: University of Chicago Press, 1970
Merle Jacobs and Stephen, E Bosanac, The Professionalization of Work, Whitby, ON: de Sitter Publications, 2006
Alice Beck Kehoe, Mary Beth Emmerichs, and Alfred Bendiner, Assembling the Past: Studies in the Professionalization of Archaeology, University of New Mexico Press, 2000
Lori Kenschaft, Professions and Professionalization., Oxford University Press, 2008
Magali Sarfatti Larson, The Rise of Professionalism: a Sociological Analysis, Berkeley, California: University of California Press, 1978
Gary R. Lowe and P. Nelson Reid, The Professionalization of Poverty: Social Work and the Poor in the Twentieth Century (Modern Applications of Social Work), Aldine de Gruyter, 1999
Keith M. Macdonald, The Sociology of the Professions, London: Sage Publications Ltd, 1995
Linda Reeser, Linda Cherrey, and Irwin Epstein, Professionalization and Activism in Social Work, Columbia University Press, 1990
Patricia M. Schwirian, Professionalization of Nursing: Current Issues and Trends, Philadelphia: Lippincott, 1998
Howard M Vollmer, and D L Mills, Professionalization, New Jersey: Prentice Hall, 1966
Anne Witz, Professions and Patriarchy, London: Routledge, 1992
Donald Wright, The Professionalization of History in English, Toronto: University of Toronto Press, 2005
External links
Article abstracts on this theme
ESA research network on sociology of professions
University of Aberdeen reading list: Sociology of Professions
An issue of Current Sociology devoted to this topic
Sociological terminology
Modernismo

Modernismo is a literary movement that took place primarily in the late nineteenth and early twentieth centuries in the Spanish-speaking world, best exemplified by Rubén Darío, who is known as the father of modernismo. The term modernismo specifically refers to the literary movement that took place primarily in poetry. The movement began in 1888, after the publication of Rubén Darío's Azul..., which gave modernismo a new meaning. It died out around 1920, four years after the death of Rubén Darío. In Aspects of Spanish-American Literature (1963), the author writes,
Modernismo influences the meaning behind words and the impact of poetry on culture. Modernismo, in its simplest form, is finding the beauty and advances within the language and rhythm of literary works.
Other notable exponents are Leopoldo Lugones, Manuel Gutiérrez Nájera, José Asunción Silva, Julio Herrera y Reissig, Julián del Casal, Manuel González Prada, Aurora Cáceres, Delmira Agustini, Manuel Díaz Rodríguez and José Martí. It is a recapitulation and blending of three European currents: Romanticism, Symbolism and especially Parnassianism. Inner passions, visions, harmonies and rhythms are expressed in a rich, highly stylized verbal music. This movement was of great influence in the whole Hispanic world (including the Philippines), finding a temporary vogue also among the Generation of '98 in Spain, which posited various reactions to its perceived aestheticism.
Characteristics of modernismo
Modernismo is a distinct literary movement that can be identified through its characteristics. The main characteristics of modernismo are:
A sense of cultural maturity, giving an idea of the culture and time in which we live.
Pride in nationality (pride in Latin American identity).
A search for a deeper understanding of beauty and art within rhetoric, conveying meaning through colors and images related to the senses.
Different metrics and rhythms, including medieval verse forms such as the French Alexandrine.
The use of Latin and Greek mythology.
A loss of everyday reality, with many modernismo poems set in exotic or distant places.
The cultivation of perfection within poetry.
Notable authors
Rubén Darío
Rubén Darío was the father of modernismo and blazed the trail for future poets. Darío's idea of modernist poems was rejected by poets following World War I because many considered it outdated and too heavy in rhetoric. He developed the idea of modernismo after following Spanish poets and being heavily influenced by them. Darío created a rhythm within his poetry to represent the idea of modernismo; this changed the metric of Spanish literature. His use of the French Alexandrine verse changed and enhanced the literary movement. Modernismo literary works also tend to include a vocabulary that many see as lyrical. Modernist vocabulary drew from many semantic fields, such as flowers, technology, jewelry, diamonds and luxury items, to impart different meanings to the words in a literary work. This vocabulary often stemmed from Greek and Latin terms, if not the languages themselves. Darío often mentions the swan in his works to symbolize beauty and perfection, a major characteristic of modernismo. In his poem El Cisne, he wrote:
His contributions to the modernismo movement created an opportunity for poets to invest the words of their poems with deeper meaning. The swan represents perfection, and according to Darío's poem, the swan was without flaw and had the power to revive someone from the dead. This symbolism is representative of the modernismo movement in literary works.
José Martí
José Julián Martí y Pérez was born on January 28, 1853, in Havana, Cuba, and died on May 19, 1895. He was a poet, an essayist, and a martyr for Cuban independence from Spain. His dedication to a free Cuba made him a symbol of Cuba's struggle for independence; he organized and unified the movement for Cuban independence and died on the battlefield fighting for it. Martí also used his writing ability to fight for independence. By the age of 15 he had published several poems, and at 16 he founded a newspaper, La Patria Libre, as his way of showing sympathy for the patriots during a revolutionary uprising in 1868. For this he was sentenced to six months of hard labor. Martí continued to use his talent to call attention to the problems plaguing Latin America, and he is considered one of the fathers of modernismo.
Enrique González Martínez
Enrique González Martínez was born on April 13, 1871, in Guadalajara, Mexico, and died on February 19, 1952, in Mexico City. He is considered one of the last great modernismo poets; while some consider him the first post-modernismo poet, he never completely abandoned modernismo characteristics in his work. His works were among the first in Latin American literature to show a local concern. He was a medical doctor, professor, and diplomat to Chile (1920-1922), Argentina (1922-1924), Spain, and Portugal (1924-1931). One of his poems, "Tuércele el cuello al cisne" ("Twist the Swan's Neck"), has often been seen as an anti-modernismo manifesto, but this is far from the truth: González Martínez remained a modernismo poet for the rest of his life.
See also
References
Bibliography
Aching, Gerard. The Politics of Spanish American Modernismo: Discourses of Engagement. Cambridge University Press, 1997.
Davison, Ned J. The Concept of Modernism in Hispanic Criticism. Boulder: Pruett Press, 1966.
Glickman, Robert Jay. Fin del siglo: retrato de Hispanoamérica en la época modernista. Toronto: Canadian Academy of the Arts, 1999.
Mañach, Jorge. Martí: Apostle of Freedom. Translated from Spanish by Coley Taylor, with a preface by Gabriela Mistral. New York, Devin-Adair, 1950.
Schulman, Iván A. and Manuel Pedro Gonzalez. Martí, Darío y el modernismo (Martí, Darío and Modernism). Madrid: Editorial Gredos, 1969.
Torres-Rioseco, Arturo. Aspects of Spanish-American Literature. University of Washington Press, 1963.
El Modernismo en Cataluña
Works of Rubén Darío
Notes on Latin American Modernismo
El cisne
Latin American literature
Spanish words and phrases
Literary modernism
Cultural code
Cultural code refers to several related concepts about the body of shared practices, expectations and conventions specific to a given domain of a culture.
Under one interpretation, a cultural code is seen as defining a set of images that are associated with a particular group of stereotypes in our minds. This is a sort of cultural unconscious, hidden even from our own understanding but visible in our actions. The cultural codes of a nation help to explain the behavioral responses characteristic of that nation's citizens. The key codes for understanding specific behaviors differentiate between domains such as religion, gender, relationships, money, food, health, and culture.
See also
Code (semiotics)
Collective unconscious
Meme
References
Code
Crowd psychology
Machine aesthetic
The machine aesthetic "label" is used in architecture and other arts to describe works that either draw inspiration from industrialization, with its mechanized mass production, or use elements resembling the structures of complex machines (ships, planes, etc.) for the sake of appearance. As an example of the latter, buildings in the International Modernism style frequently used horizontal strips of metal-framed windows crossing smooth walls to imitate an ocean liner, in a deliberate violation of the "truth to materials" principle (as the walls were actually made of bricks).
Machine aesthetic is neither an art style nor an art movement in itself, but a common trait shared by multiple movements of the first three decades of the 20th century (the so-called First Machine Age), including French purism, Dutch De Stijl, Russian suprematism and productivism, German constructivism (Bauhaus), and American precisionism. With the notable exception of the dadaists, most adherents of the machine aesthetic generally admired industrial development, although sometimes with hesitation: "this shouldn't be beautiful, but it is" (Elsie Driggs on her smokestack- and smoke-filled landscape of Pittsburgh steel mills).
In its heyday, the machine aesthetic was truly the spirit of the age, a Zeitgeist that affected the mood of artists across movements as diverse as realism, dadaism, and futurism, changing minds much as the contemporaneous Jazz Age did.
Expressions
Architecture and applied arts
The machine aesthetic label was born at the beginning of the 20th century, when the newly created machines embodied purity of function. Architects were fascinated by the possibilities of the clean geometric forms and smooth surfaces enabled by the new construction techniques. The adherents of the machine aesthetic called for the elimination of the structural distinctions between load and support that are traditional in architecture (and furniture design). For example, in the Red and Blue Chair (1917) the (red) back plays the role of the load (supported by a crossbar underneath the seat) and provides support for the arms at the same time. For buildings, even the difference between inside and outside became minor: since the walls no longer needed to carry the load (with support often provided by a steel frame), the inside space could be made almost as open as the outside one.
Visual arts
On the most basic level, the machine aesthetic was manifested through depictions of the machines, sometimes using techniques alluding to their textures. For example, Charles Sheeler in his precisionist works showing off the factory complexes used glossy finishes and crisp lines to imitate the surface of instruments. Some artists, like sculptor Jacob Epstein in his "The Rock Drill", viewed the machine as a mindless and ruthless monster.
Constructivists viewed themselves as technicians who constructed and assembled the art pieces, rather than chiseling or painting them, their use of drastically unconventional art materials is sometimes called a culture of materials. Vladimir Tatlin in particular considered himself an "inventor" imitating factory processes when putting together three-dimensional pieces made of industrial materials like sheet metal.
Performing arts
Regularity and rhythm of the machine operation can be compared to music and ballet, hence the repeated use of the title Ballet mécanique in different genres. Oskar Schlemmer at Bauhaus had created a few ballets where dancers performed fast and repetitive motions of pistons and cogs (evoking "mechanthropomorphism"), with costumes and sets imitating the engine parts on a grand scale. Drawing a comparison between the body and machine, the mechanistic motions of dancers were in contrast to fluid and graceful moves of the traditional ballet.
Literature
Blaise Cendrars in his La prose du Transsibérien et de la Petite Jehanne de France about the journey aboard the Trans-Siberian Express alluded to the movements of the train.
Machines as art objects
The idea of exhibiting machines goes back to at least the Great Exhibition of 1851. Full acceptance by museum curators came in 1934, with the "Machine Art" exhibit at the New York Museum of Modern Art, where machine parts were presented like more traditional art objects: engines on pedestals like sculptures, propellers mounted on the walls like paintings.
László Moholy-Nagy tried to erase the border between a work of art and the machine in his "Light-Space Modulator", a "fountain of light" made in collaboration with the engineering company (AEG).
Roots
J. J. P. Oud declared as early as 1919 that "the motor car [and the] machine ... correspond more closely to the socio-aesthetic tendencies of our age and the future than do the contemporary manifestations of architecture"; the new aesthetic emotions thus ought to be evoked through "the grace of the machine".
Fernand Léger in 1924 penned a declaration on "The Machine Aesthetic", stating that humanity was starting to live in the "geometrical order", and called for "the architecture of the mechanical".
Theo van Doesburg, in his 1926 manifesto "The End of Art", explicitly argued that aestheticism kills creativity and that architects should learn from non-artistic objects.
Other notable declarations of adoration towards the machines included Filippo Marinetti's "Mechanical and Geometrical Splendor and Sensitivity Toward Numbers" (1914), Gino Severini's "Machinery" (1922), Nikolai Tarabukin's "From the Easel to the Machine" (1923), Kurt Ewald's "The Beauty of Machines" (1925-1926), Walter Gropius' "Where Artists and Technicians Meet" (1925-1926).
The machine aesthetic can be considered a backlash against expressionism. At the time, art critics suggested that abstract art was splitting into two streams: an evolution of expressionism, termed "non-geometrical" by Alfred Barr (who placed the machine aesthetic at the center of his diagram), and a "geometrical" stream, a continuation of constructivism. Walter Gropius contrasted the "technological product" made by a "sober mind" with the "work of art created by passion". The groups associated with the machine aesthetic sometimes exhibited a strong rejection of expressionist self-indulgence: "I have more faith [in the machine] than in the longhaired gentleman with a floppy cravat" (Fernand Léger, 1924).
De Stijl
Van Doesburg founded the journal De Stijl in 1917 and published it together with Piet Mondrian, Oud, and Rietveld until 1931. De Stijl was unabashed in its "celebration of the machine", proclaiming the aesthetic values of industrial production.
De Stijl community included painters (most notably Mondrian) and sculptors (Constantin Brancusi). Composer George Antheil had created the Ballet mécanique, "the first piece of music that has been composed OUT OF and FOR machines, ON EARTH", that was used as a score to a "film without scenario" by Léger. The music was played by an orchestra including primarily the percussion instruments, but also included the sounds of a bell, a siren, and an airplane propeller.
By 1922, De Stijl was exchanging ideas with Dadaism, art's opposite take on the industrial revolution. Dadaists found chaos and absurdity where the machine aesthetic found order and beauty, but both sides agreed on the need for a "new synthesis".
Examples
Schröder House in Utrecht (Gerrit Rietveld, 1924) is a combination of interlocking planes expanding outside (cantilevered) and movable walls partitioning the open space inside. The architect was trying to avoid an appearance of a monolithic mass.
Legacy
The modern aesthetics of high technology is to a large degree defined by the machine aesthetic. Just like the machine aesthetic, high-tech architecture proclaims that form follows function, yet it frequently detaches form from function entirely, resorting instead to imitating the appearance of a factory or a restaurant kitchen.
See also
New Aesthetic, blending of digital and physical worlds
References
Sources
20th-century architectural styles
De Stijl
Industrialisation
Animutation
Animutation or fanimutation is a form of web-based computer animation, typically created in Adobe Flash and characterized by unpredictable montages of pop-culture images set to music, often in a language foreign to the intended viewers. It is not to be confused with manual collage animation (e.g., the work of Stan Vanderbeek and Terry Gilliam), which predates the Internet.
History
Animutation was popularized by Neil Cicierega. Cicierega claims to have been inspired by several sources, including bizarre Japanese commercials and Martin Holmström's "Hatten är din" Soramimi-style video made for the "Habbeetik" song by Azar Habib. The term animutation is a portmanteau of animation and mutation and was popularized in 2001 through Cicierega's flash animations such as Japanese Pokerap and Hyakugojyuuichi!!, which feature the credits music from older episodes of Pokémon. The popularity of Hyakugojyuuichi!! quickly made it an Internet phenomenon. Since that time, others have adopted a similar style and communities of similarly minded animators have sprung up around the web. These versions made by fans were christened "fanimutations".
Recurring themes
Audio
Animutations can be based on songs of foreign, independent, or mainstream origin. Japanese songs were used in many of the original animutations by Neil Cicierega, but newer animutations use songs in a wide variety of languages, including English, Dutch and gibberish.
The foreign language songs are often "misheard" into English by the creators and added as subtitles. The words are not translations but soramimis, English words that sound roughly the same as the original lyrics. For example, the animutation title "French erotic film" is a soramimi of the original Dutch lyrics "Weet je wat ik wil" in an Ome Henk song. The actual translation of the lyrics is "Do you know what I want?"
Recurring motifs
Though animutations are close in relation to the random nonsense of dadaism and can be entirely unpredictable, they sometimes exhibit recurring memes among them as a result of being influenced by each other and internet culture. Among the many recurring motifs found in animutations are:
The inclusion of Canadian comedian Colin Mochrie from Whose Line Is It Anyway? Regularly, a picture of Mochrie's head superimposed into a crudely drawn sun is also used. This inclusion is largely due to Neil Cicierega's fixation on the comedian.
The inclusion of Harry Potter in various forms, often edited in a bizarre fashion. Neil Cicierega, also the creator of Potter Puppet Pals, is largely responsible for the explosion of Harry Potter's use in animutation, which began most notably with Hyakugojyuuichi.
Obscure pop-culture references, typically catchphrases or images.
Random cartoon characters, usually from children's television programs or anime, although obscure characters are also used.
Non-traditional interfaces
While many flash animations have a "replay button" at the end, animutations often use a silly graphic which animates when interacted with, included with instructions on how to replay the animutation. For instance, at the end of Cold Heart, the title character is holding a package of Mentos mints, which serves as the replay button. The package slightly increases in size when moused over, and text at the bottom of the video informs the user to "Click the Mentos to replay!".
Similarly to the replay button, a progress animation is used in many animutations, especially the later creations. For example, in Jesus H. Christ, a papier-mâché goose which was originally in a Mystery Science Theater 3000 episode was used as a pointer, rotating clockwise to indicate the animation's playback progress.
See also
Yatta (song)
YTMND
References
External links
Neil Cicierega's animutation website
Colin Mochrie vs. Jesus H. Christ: Messages About Masculinities and Fame in Online Video Conversations (PDF)
2000s neologisms
2001 introductions
2001 neologisms
Internet properties established in 2001
Internet memes
Internet memes introduced in 2001
Web animation
Vkhutemas
Vkhutemas (acronym for Vysshiye Khudozhestvenno-Tekhnicheskiye Masterskiye, "Higher Art and Technical Studios") was the Russian state art and technical school founded in 1920 in Moscow, replacing the Moscow Svomas.
The workshops were established by a decree from Vladimir Lenin with the intentions, in the words of the Soviet government, "to prepare master artists of the highest qualifications for industry, and builders and managers for professional-technical education". The school had 100 faculty members and an enrollment of 2,500 students. Vkhutemas was formed by a merger of two previous schools: the Moscow School of Painting, Sculpture and Architecture and the Stroganov School of Applied Arts. The workshops had artistic and industrial faculties; the art faculty taught courses in graphics, sculpture and architecture while the industrial faculty taught courses in printing, textiles, ceramics, woodworking, and metalworking.
Vkhutemas was a center for three major movements in avant-garde art and architecture: constructivism, rationalism, and suprematism. In the workshops, the faculty and students transformed attitudes to art and reality through the use of precise geometry and an emphasis on space, in one of the great revolutions in the history of art. In 1926, the school was reorganized under a new rector, and its name was changed from "Studios" to "Institute" (Вхутеин, Высший художественно-технический институт, Vkhutein, Vysshiy Khudozhestvenno-Tekhnicheskii Institut), or Vkhutein.
The school was dissolved in 1930 following political and internal pressures throughout its ten-year existence. Its faculty, students, and legacy were dispersed into as many as six other schools.
Basic course
A preliminary basic course was an important part of the new teaching method that was developed at Vkhutemas, and was made compulsory for all students, regardless of their future specialization. This was based on a combination of scientific and artistic disciplines. During the basic course, students had to learn the language of plastic forms, and chromatics. Drawing was considered a foundation of the plastic arts, and students investigated relationships between color and form, and the principles of spatial composition. Akin to the Bauhaus's basic course, which all first-year students were required to attend, it gave a more abstract foundation to the technical work in the studios. In the early 1920s this basic course consisted of the following:
the maximal influence of color (given by Lyubov Popova),
form through color (Alexander Osmerkin),
color in space (Aleksandra Ekster)
color on the plane (Ivan Kliun),
construction (Alexander Rodchenko),
simultaneity of form and color (Aleksandr Drevin),
volume in space (Nadezhda Udaltsova),
history of the Western arts (Amshey Nurenberg) and
tutelage (Wladimir Baranoff-Rossine).
Art faculty
The primary movements in art which influenced education at Vkhutemas were constructivism and suprematism, although individuals were versatile enough to fit into many or no movements—often teaching in multiple departments and working in diverse media. The leader figure of suprematist art, Kazimir Malevich, joined the teaching staff of Vkhutemas in 1925, though his group—Unovis, of the Vitebsk art college that included El Lissitzky—exhibited at Vkhutemas as early as 1921. While constructivism was ostensibly developed as an art form in graphics and sculpture, it had architecture and construction as its underlying subject matter. This influence pervaded the school. The artistic education at Vkhutemas tended to be multidisciplinary, which stemmed from its origins as a merger of a fine arts college and a craft school. A further contributor to this was the generality of the basic course, which continued after students had specialised and was complemented by a versatile faculty. Vkhutemas cultivated polymath masters in the Renaissance mold, many with achievements in graphics, sculpture, product design, and architecture. Painters and sculptors often made projects related to architecture; examples include Tatlin's Tower, Malevich's Architektons, and Rodchenko's Spatial Constructions. Artists moved from department to department, such as Rodchenko from painting to metalworking. Gustav Klutsis, who was head of a workshop on colour theory, also moved from painting and sculptural works to exhibition stands and kiosks. El Lissitzky, who had trained as an architect, also worked in a broad cross section of media such as graphics, print and exhibition design.
Industrial faculty
The industrial faculties had the task of preparing artists of a new type: artists capable of working not only in the traditional pictorial and plastic arts but also of creating all objects in the human environment, such as the articles of daily life, the implements of labor, etc. The industrial department at Vkhutemas endeavored to create products of economic viability and social functionality. Class-based political requirements steered artists toward crafts and the designing of household or industrial goods. There was significant pressure in this respect from the Central Committee of the Communist Party, which in 1926, 1927, and 1928 required a student body composition "of worker and peasant origins" and made several demands for "working class" elements. This push for design economy resulted in a tendency towards working, functional designs with minimised luxuries. Tables designed by Rodchenko were equipped with mechanical moving parts, and were standardised and multi-functional. The products designed at Vkhutemas never bridged the gap between workshops and factory production, although they cultivated a factory aesthetic—Popova, Stepanova, and Tatlin even designed workers' industrial apparel. Furniture pieces constructed at Vkhutemas explored the possibilities of new industrial materials such as plywood and tubular steel.
There were many successes for the departments, and they were to influence future design thinking. At the 1925 Exposition Internationale des Arts Décoratifs et Industriels Modernes in Paris, the Soviet pavilion by Konstantin Melnikov and its contents attracted both criticism and praise for its economic and working class architecture. One focus of criticism was the "nakedness" of the structure, in comparison to other luxurious pavilions such as that by Émile-Jacques Ruhlmann. Alexander Rodchenko designed a worker's club, and the furniture that the Wood and Metal Working Faculty (Дерметфак) contributed was an international success. The student work won several prizes, and Melnikov's pavilion won the Grand Prix. As a new generation of artist/designers, the students and faculty at Vkhutemas paved the way for designer furniture by architects such as Marcel Breuer, and Alvar Aalto later in the century.
Metalwork and woodwork
The dean of this department was Alexander Rodchenko, who was appointed in February 1922. Rodchenko's department was more expansive than its name would suggest, concentrating on abstract and concrete examples of product design. In a report to the rector of 1923, Rodchenko listed the following subjects as being offered: higher mathematics, descriptive geometry, theoretical mechanics, physics, the history of art and political literacy. Theoretical tasks included graphic design and "volumetric and spatial discipline"; while practical experience was given in foundry work, minting, engraving and electrotyping. Students were also given internships in factories. Rodchenko's approach effectively combined art and technology, and he was offered the deanship of Vkhutein in 1928, although he refused. El Lissitzky was also a member of the faculty.
Textiles
The textile department was run by the constructivist designer Varvara Stepanova. In common with other departments, it was run on utilitarian lines, but Stepanova encouraged her students to take an interest in fashion: they were told to carry notebooks so that they could note the contemporary fabrics and aesthetics of everyday life as seen on the high street. Stepanova wrote in her 1925 course plan that this was done "with the goal of devising methods for a conscious awareness of the demands imposed on us by new social conditions". Lyubov Popova was also a member of the textile faculty, and in 1922, when hired to design fabrics for the First State Textile Print Factory, Popova and Stepanova were among the first women designers in the Soviet textile industry. Popova designed textiles both with asymmetrical architectonic geometries, and also work that was thematic. Before her death in 1924, Popova produced fabrics with grids of printed hammers and sickles, which would predate work by others in the political climate of the first five-year plan.
Lenin's visit
Vladimir Lenin signed a decree to create the school, although its emphasis was on art rather than Marxism. Three months after its founding, on 25 February 1921, Lenin went to Vkhutemas to visit the daughter of Inessa Armand and to converse with the students, where in a discussion about art he found an affinity among them for Futurism. There he first viewed avant garde art, such as suprematist painting. He did not wholly approve of it, expressing concern over the connection between the student's art and politics. After the discussion, Lenin was accepting and stated, "Well, tastes differ" and "I am an old man".
Although Lenin was not an enthusiast for avant garde art, the Vkhutemas faculty and students made projects to honor him and further his politics. Ivan Leonidov's final project at Vkhutemas was his design for a Lenin Institute of Librarianship. A model of Vladimir Tatlin's Monument to the Third International was built by students and displayed at their workshop in Saint Petersburg. Furthermore, Lenin's Mausoleum was designed by faculty member Aleksey Shchusev. Alexei Gan's book Constructivism, published in 1922, provided a theoretical link between the new emerging art and contemporary politics, connecting constructivism with the revolution, and Marxism. The founding decree included a statement that students have an "obligatory education in political literacy and the fundamentals of the communist world view on all courses". These examples help justify the school's projects in terms of the early political requirements but others would arise throughout the school's existence.
Comparisons with the Bauhaus
Vkhutemas was a close parallel to the German Bauhaus in its intent, organization and scope. The two schools were the first to train artist-designers in a modern manner. Both schools were state-sponsored initiatives to merge the craft tradition with modern technology, with a Basic Course in aesthetic principles, courses in color theory, industrial design, and architecture. Vkhutemas was a larger school than the Bauhaus, but it was less publicised and consequently, is less familiar to the West. Vkhutemas's influence was expansive however—the school exhibited two structures by faculty and award-winning student work at the 1925 Exposition in Paris. Furthermore, Vkhutemas attracted the interest and several visits from the director of the Museum of Modern Art, Alfred Barr. With the internationalism of modern architecture and design, there were many exchanges between the Vkhutemas and the Bauhaus. The second Bauhaus director Hannes Meyer attempted to organise an exchange between the two schools, while Hinnerk Scheper of the Bauhaus collaborated with various Vkhutein members on the use of colour in architecture. In addition, El Lissitzky's book Russia – an Architecture for World Revolution published in German in 1930 featured several illustrations of Vkhutemas/Vkhutein projects. Both schools flourished in a relatively liberal period, and were closed under pressure from increasingly totalitarian regimes.
Vkhutein
As early as 1923, Rodchenko and others published a report in LEF which foretold of Vkhutemas's closure. It was in response to students' failure to gain a foothold in industry and was entitled, The Breakdown of VKhUTEMAS: Report on the Condition of the Higher Artistic and Technical Workshops, which stated that the school was "disconnected from the ideological and practical tasks of today". In 1927, the school's name was modified: "Institute" replaced "Studios" (Вхутеин, Высший художественно-технический институт), or Vkhutein. Under this reorganisation, the 'artistic' content of the basic course was reduced to one term, when at one point it was two years. The school appointed a new rector, Pavel Novitsky, who took over from the painter Vladimir Favorsky in 1926. It was under Novitsky's tenure that external political pressures increased, including the "working class" decree, and a series of external reviews by industry, and commercial organisations of student works' viability. The school was dissolved in 1930, and was merged into various other programs. One such merger was with MVTU, forming the Architectural-Construction Institute, which became the Moscow Architectural Institute in 1933. The Modernist movements which Vkhutemas had helped generate were critically considered as abstract formalism, and were succeeded historically by socialist realism, postconstructivism, and the Empire style of Stalinist architecture.
See also
:Category:Vkhutemas alumni
Sources
Bokov, Anna. Avant-Garde as method Vkhutemas and the pedagogy of space, 1920–1930. Zurich: Park Book, 2019.
Solomon R. Guggenheim Museum et al. The great utopia : the Russian and Soviet avant-garde, 1915-1932. New York: Guggenheim Museum, 1992.
van Helvert, Mariane and Andrea Baldoni. The responsible object : a history of design ideology for the future. Amsterdam: Valiz; Melbourne: Ueberschwarz, 2016.
References
External links
Vkhutemas, photographs, Canadian Centre for Architecture (digitized items)
VKhUTEMAS Collection, Collection Inventories and Finding Aids, Getty Research Institute
VKhUTEMAS (Higher State Artistic and Technical Workshops), Art & Artists, Tate Modern
Agata Pyzik. "Vkhutemas: The ‘Soviet Bauhaus’". The Architectural Review. Published May 8, 2015.
Anna Bokov. "Institutionalizing the Avant-Garde: Vkhutemas 1920–1930." The Walker Art Center. Published June 19, 2017.
Ines Lalueta. "Vkhutemas - A Russian Laboratory of Modernity." Metalocus. Published December 28, 2014.
Art schools in Russia
Architecture schools in Russia
Russian avant-garde
Constructivism (art)
Education in Moscow
Universities and institutes established in the Soviet Union
Educational institutions established in 1920
Educational institutions disestablished in 1930
Constructivist architecture
Modernist architecture in Russia
1920 establishments in Russia
Scientific community
The scientific community is a diverse network of interacting scientists. It includes many "sub-communities" working on particular scientific fields, and within particular institutions; interdisciplinary and cross-institutional activities are also significant. Objectivity is expected to be achieved by the scientific method. Peer review, through discussion and debate within journals and conferences, assists in this objectivity by maintaining the quality of research methodology and interpretation of results.
History of scientific communities
The eighteenth century had some societies made up of men who studied nature, also known as natural philosophers and natural historians, which included even amateurs. As such these societies were more like local clubs and groups with diverse interests than actual scientific communities, which usually had interests on specialized disciplines. Though there were a few older societies of men who studied nature such as the Royal Society of London, the concept of scientific communities emerged in the second half of the 19th century, not before, because it was in this century that the language of modern science emerged, the professionalization of science occurred, specialized institutions were created, and the specialization of scientific disciplines and fields occurred.
For instance, the term scientist was first coined by the naturalist-theologian William Whewell in 1834 and the wider acceptance of the term along with the growth of specialized societies allowed for researchers to see themselves as a part of a wider imagined community, similar to the concept of nationhood.
Membership, status and interactions
Membership in the community is generally, but not exclusively, a function of education, employment status, research activity and institutional affiliation. Status within the community is highly correlated with publication record, and also depends on status within the institution and the status of the institution itself. Researchers can hold roles of differing degrees of influence inside the scientific community. More influential researchers can act as mentors for early-career researchers and, as agenda setters, steer the direction of research in the community.
Scientists are usually trained in academia through universities. As such, degrees in the relevant scientific sub-disciplines are often considered prerequisites in the relevant community. In particular, the PhD, with its research requirements, functions as a marker of integration into the community, though continued membership depends on maintaining connections to other researchers through publication, technical contributions, and conferences. After obtaining a PhD, an academic scientist may progress from an academic position to a post-doctoral fellowship and on to a professorship. Other scientists contribute to the scientific community in alternate ways, such as in industry, education, think tanks, or government.
Members of the same community do not need to work together. Communication between members is established by disseminating research work and hypotheses through articles in peer-reviewed journals, or by attending conferences where new research is presented and ideas exchanged and discussed. There are also many informal methods of communicating scientific work and results. Many members of a coherent community may not actually share all of their work with one another, for various professional reasons.
Speaking for the scientific community
Unlike in previous centuries, when the community of scholars were all members of a few learned societies and similar institutions, there are no singular bodies or individuals who can be said to speak for all of science or all scientists today. This is partly because most scientists receive specialized training in very few fields and consequently lack expertise in the other sciences. Indeed, due to the increasing complexity of information and the specialization of scientists, most cutting-edge research today is done by well-funded groups of scientists rather than individuals. However, there are still multiple societies and academies in many countries that help consolidate opinion and research to guide public discussions on matters of policy and government-funded research. For example, the United States' National Academy of Sciences (NAS) and the United Kingdom's Royal Society sometimes act as surrogates when policy makers or the national government need to ascertain the opinion of the scientific community. The statements of these bodies are not binding on scientists, nor do they necessarily reflect the opinion of every scientist in a given community: membership is often exclusive, their commissions are explicitly focused on serving their governments, and they have never "shown systematic interest in what rank-and-file scientists think about scientific matters". The exclusivity of membership in these types of organizations can be seen in their election processes, in which only existing members can officially nominate others for candidacy. It is very unusual for organizations like the National Academy of Sciences to engage in external research projects, since they normally focus on preparing scientific reports for government agencies.
An example of how rarely the NAS engages in external, active research can be seen in its struggle, owing to its lack of experience in coordinating research grants, to prepare and run major research programs on the environment and health.
Nevertheless, general scientific consensus is a concept often referred to when dealing with questions that can be subjected to scientific methodology. While the consensus opinion of the community is not always easy to ascertain or fix, owing to paradigm shifts, the standards and utility of the scientific method have generally tended to ensure, to some degree, that scientists agree on some general corpus of facts explicated by scientific theory while rejecting some ideas which run counter to it. The concept of scientific consensus is very important to science pedagogy, the evaluation of new ideas, and research funding. Sometimes it is argued that there is a closed-shop bias within the scientific community toward new ideas. Protoscience, fringe science, and pseudoscience are topics around which demarcation problems are discussed. In response to non-consensus claims, some skeptical organizations, as distinct from research institutions, have devoted considerable amounts of time and money to contesting ideas which run counter to general agreement on a particular topic.
Philosophers of science argue over the epistemological limits of such a consensus and some, including Thomas Kuhn, have pointed to the existence of scientific revolutions in the history of science as being an important indication that scientific consensus can, at times, be wrong. Nevertheless, the sheer explanatory power of science in its ability to make accurate and precise predictions and aid in the design and engineering of new technology has ensconced "science" and, by proxy, the opinions of the scientific community as a highly respected form of knowledge both in the academy and in popular culture.
Political controversies
The high regard in which scientific results are held in Western society has caused a number of political controversies over scientific subjects to arise. A conflict thesis proposed in the 19th century, alleging an inherent clash between religion and science, has been cited by some as representative of a struggle between tradition and substantial change, and between faith and reason. A popular example used to support this thesis is the trial of Galileo before the Inquisition concerning the heliocentric model. The persecution began after Pope Urban VIII permitted Galileo to write about the Copernican model: Galileo had taken arguments from the Pope and put them in the voice of the simpleton in his "Dialogue Concerning the Two Chief World Systems", which caused the Pope great offense. Even though many historians of science have discredited the conflict thesis, it remains a popular belief among many, including some scientists. In more recent times, the creation–evolution controversy has led many religious believers in a supernatural creation to challenge naturalistic assumptions proposed in branches of science such as evolutionary biology, geology, and astronomy. Although the dichotomy looks different from a Continental European perspective, it does exist there as well. The Vienna Circle, for instance, had a paramount (if symbolic) influence on the semiotic regime represented by the scientific community in Europe.
In the decades following World War II, some were convinced that nuclear power would solve the pending energy crisis by providing energy at low cost. This advocacy led to the construction of many nuclear power plants, but was also accompanied by a global political movement opposed to nuclear power due to safety concerns and the association of the technology with nuclear weapons. Mass protests in the United States and Europe during the 1970s and 1980s, along with the disasters at Three Mile Island and Chernobyl, led to a decline in nuclear power plant construction.
In recent decades, both global warming and stem cells have placed the opinions of the scientific community at the forefront of political debate.
See also
Academic discipline
Cudos
Epistemology
International community
Normal science
Objectivity (philosophy)
Scientific consensus
Scientific communication
Extended peer community
References
Sociologies of science
History and philosophy of science
Alan Chalmers, What Is This Thing Called Science?
Other articles
Höhle, Ester (2015). From apprentice to agenda-setter: comparative analysis of the influence of contract conditions on roles in the scientific community. Studies in Higher Education 40(8), 1423–1437.
Philosophy of science
Sociology of science
Types of communities
Audience theory

Audience theory offers explanations of how people encounter media, how they use it, and how it affects them. Although the concept of an audience predates media, most audience theory is concerned with people’s relationship to various forms of media. There is no single theory of audience, but a range of explanatory frameworks. These can be rooted in the social sciences, rhetoric, literary theory, cultural studies, communication studies and network science depending on the phenomena they seek to explain. Audience theories can also be pitched at different levels of analysis ranging from individuals to large masses or networks of people.
James Webster suggested that audience studies could be organized into three overlapping areas of interest. One conceives of audiences as the site of various outcomes. This runs the gamut from a large literature on media influence to various forms of rhetorical and literary theory. A second conceptualizes audiences as agents who act upon media. This includes the literature on selective processes, media use and some aspects of cultural studies. The third sees audiences as a mass with its own dynamics apart from the individuals who constitute it. This perspective is often rooted in economics, marketing, and some traditions in sociology. Each approach to audience theory is discussed below.
Audience as outcome
Many audience theorists are concerned with what media do to people. There is a long tradition in the social sciences of investigating “media effects.” Early examples include the Payne Fund Studies, which assessed how movies affected young people, and Harold Lasswell’s analysis of WWI propaganda. Some have criticized early work for lacking analytical rigor and encouraging a belief in powerful effects.
Subsequent work in the social sciences employed a variety of methods to assess the media’s power to change attitudes and behaviors such as voting and violence. Sociologists Elihu Katz and Paul Lazarsfeld introduced the concept of a two-step flow of communication, which suggested that media influence was moderated by opinion leaders. By the late 1950s, most researchers concluded that media effects were limited by factors such as psychological selective processes, social networks, and the commercial nature of media. This new consensus was dubbed the “dominant paradigm” of media sociology, and it was criticized for being too reductionist and understating the true power of media.
While the tenets of that limited effects perspective retain much of their appeal, later theories have highlighted various ways in which media operate on audiences. These audience outcomes include:
Agenda-setting: Asserts that media don’t tell people what to think (e.g., attitude change) but what to think about. Hence, media have the power to make things salient, setting the public agenda.
Spiral of silence: Stipulates that people fear social isolation and look toward media to assess popular opinion. Hence, media portrayals (accurate or not) can lead people to remain silent if they believe their opinion is unpopular.
Framing: Argues that media present a selective view of reality, privileging certain frames like problem definitions or moral judgments. Hence, media have the power to create interpretations of social reality.
Knowledge-gap: Stipulates that as the media environment becomes more information rich, higher social-economic groups acquire information at a higher rate than others. Hence, media can polarize society into better and less well informed segments.
Cultivation theory: Argues that television programs create a pervasive, but systematically distorted picture of social reality, leading heavy viewers to unthinkingly accept that reality. Hence, television has the power to cultivate distorted perceptions of reality.
Third-person effects: Asserts that individuals believe that they are relatively impervious to media influence, while believing others are susceptible. Hence, they believe media have effects (on others) and behave accordingly.
Humanists have also been concerned with how media operate on audiences. With a specific focus on rhetoric, some, such as Walter Ong, have suggested that the audience is a construct made up by the rhetoric and the rhetorical situation the text is addressing. Similarly, some forms of literary criticism, such as Screen theory, argue that cinematic texts actually create spectators by suturing them into subject positions. In effect, audience members become unwitting accomplices in the production of meanings as orchestrated by the text. Hence media can promote widespread ideological outcomes such as false consciousness and hegemony.
Audience as agent
Emphasizing the agency of audiences takes a different approach to audience theory. Simply put, rather than asking what media do to people, these theories ask what people do to media. Such approaches, which are sometimes referred to as active audience theories, have been the province of the humanities and social sciences.
The Centre for Contemporary Cultural Studies was founded at the University of Birmingham, England, in the 1960s by Stuart Hall and Richard Hoggart. Hall was instrumental in promoting what he called the “encoding/decoding” model of communication (described below). This argued that audiences had the ability to read texts in ways that were not intended by the producer of the text. Subsequent work at the Centre provided empirical support for the model; among this work was The Nationwide Project by David Morley and Charlotte Brunsdon. Humanistic theories of audience agency are often grounded in theoretical perspectives such as structuralism, feminism, and Marxism. Notable examples include:
Encoding/Decoding: Which stipulates that organizations produce texts with encoded meanings, but that individuals have the ability to understand (decode) those texts in accordance with their own beliefs, producing dominant, negotiated and oppositional readings.
Reception theory: An application of reader response theory that argues the meaning of a text is not inherent within the text itself, but the audience must elicit meaning based on their individual cultural background and life experiences.
Social scientific interest in audiences as agents is, in part, a consequence of research on media effects. Two linchpins of the limited-effects perspective, selective processes and the two-step flow of communication, describe how the actions of audience members mitigate media influence. Hence, one cannot understand what media do to people without understanding how people use media. Still other strains of social science investigate media choice as something worthy of study in its own right. Examples include:
Selective exposure: Assumes that people seek out media that confirm their actions and beliefs and avoid messages that produce cognitive dissonance. Selective exposure to partisan media is thought to contribute to social polarization.
Selective perception: Another selective process in which individuals interpret information they encounter so that it conforms with their beliefs. It is akin to decoding and contributes to confirmation bias.
Uses and gratifications theory: Argues that people have needs they seek to gratify by actively consuming media toward that end. It assumes that audience members are aware of their motivations for using media.
Models of program choice: An application of welfare economics that assumes people have well-defined program preferences and that they choose programs in accordance with those preferences. These micro-level assumptions are intended to predict aggregate audience formations.
Audience as mass
A third emphasis in audience theory explains the forces that shape audiences. Understanding mass audience behavior has been a concern of media owners and advertisers since the dawn of mass media. By the early twentieth century, broadcasters were using programming strategies to better manage audiences. By mid-century, economists introduced theoretical models of program choice (see above). By the 1960s, marketing practitioners and academics began testing statistical models of mass audience behavior.
Today, there are two main ways to conceptualize the audience as a mass. These correspond to the principal forms of media: linear media like broadcasting and network television, and more recently nonlinear or on-demand media supported by digital networks. The former conceives of the audience as a mass as it was first described by Herbert Blumer. Essentially, the audience is a collection of individuals who are anonymous to one another, act independently, and are united by a common object of attention. The latter variation conceives of audiences as networks, in which individual audience members may be visible to one another and are capable of acting in concert. Work on the audience as a mass makes little use of the individual traits discussed above (e.g., attitudes, needs, preferences) and relies instead on structural factors and the law of large numbers to reveal patterns of behavior. Examples include:
Laws of viewing: Argues that television audiences exhibit law-like regularities which allow analysts to predict audience formation. These empirical regularities include the “duplication of viewing law” and the “law of double jeopardy.”
Social network analysis (SNA): Offers a method for investigating the structure of social networks, which consist of nodes (individuals or websites) and links (relationships or ties). SNA reveals emergent properties in digital media such as information cascades and power law or "long tail" distributions.
Audience networks: Applies SNA to audience behavior by casting media outlets as nodes and defining tie strength based on audience duplication between nodes. Audience networks highlight the centrality of mainstream outlets and draw into question the existence of media enclaves or echo chambers.
One might imagine that explanations of mass audience behavior could be based on the micro-level factors featured in theories of audience agency. But these have a limited ability to explain large-scale patterns of audience behavior such as audience flow, audience fragmentation, or how media “go viral.” To explain those behaviors, theorists are more likely to rely on structural factors such as networks, hyperlinks, platforms, algorithms, audience availabilities and cultural proximity.
See also
Attention economy
Audience effect
Audience measurement
Audience memory curve
Digital divide
Digital media
Ethnography of communication
Filter bubble
Frankfurt school
Ideology
Genre
Mass society
Media consumption
Media psychology
Public opinion
Taste
References
Influence of mass media
Clothing technology

Clothing technology describes advances in production methods, material developments, and the incorporation of smart technologies into textiles and clothes. The clothing industry has expanded throughout time, reflecting advances not just in apparel manufacturing and distribution, but also in textile functionality and environmental effect. The timeline of clothing and textiles technology includes major changes in the manufacture and distribution of clothing.
From clothing in the ancient world to the present day, technology has dramatically influenced clothing and fashion. Industrialization brought changes in the manufacture of goods: in many nations, homemade goods crafted by hand have largely been replaced by factory-produced, assembly-line goods purchased in a consumer culture. Innovations include man-made materials such as polyester, nylon, and vinyl, as well as features like zippers and Velcro. The advent of advanced electronics has led to wearable technology being developed and popularized since the 1980s.
Design is an important part of the industry beyond utilitarian concerns and the fashion and glamour industries have developed in relation to clothing marketing and retail. Environmental and human rights issues have also become considerations for clothing and spurred the promotion and use of some natural materials such as bamboo that are considered environmentally friendly.
Production
The advent of industrialization included factories, specialized and technologically advanced equipment, and production lines for the mass production of textiles like natural and synthetic fibers. Globalization and advances in trade increased sourcing of materials and competition for wares across borders. The swadeshi movement in India was an effort to counteract the economic control and influence that British factories exerted over the one-time colony. Concerns have also been raised over the use of so-called sweat shops.
Clothing lines by famous designers are featured and advertised in magazines and other media. Branding and marketing are features of the advertising age. Some designers have also become television and media personalities, and in recent years fashion and design have themselves been the subject of television shows.
The media and various social networking platforms heavily influence clothing production. Complex software is used to go through and analyze important data related to production and consumerism. This process needs to be done quickly and efficiently in order for companies to meet customer demand thus enhancing their profit and brand.
Sports
The design and constructions of sportswear has changed dramatically over time. Athletic apparel aids in the prevention of injuries, the improvement of breathability, the protection from the weather, and the encouragement of a fitness mindset. Athletes can now use wearable heart rate monitors and fitness trackers to capture a variety of fitness-related parameters, such as distance traveled, calorie consumption, heart rate, and sleep quality.
Techwear
The design of techwear has evolved to be a fashion statement and serve a technical use at the same time. Techwear incorporates added zippers, buttons, straps, cord locks, and similar hardware to help adapt outfits to different temperatures, climates, and moods. It allows the wearer to change what they are wearing, wherever they are, in a matter of seconds.
Education
Computer-aided design is used in the development of clothing. Corporate and business training to address accounting, trade, and finance issues has also become a significant part of the trade, and courses and programs at universities specialize in these fields. The Beijing Institute of Clothing Technology and the Fachhochschule für Technik und Wirtschaft Berlin are examples of institutions focused on the business. In the area of engineering development of functional clothing, TU Dresden, Germany, offers courses at Bachelor, Dipl.-Ing., and non-consecutive M.Sc. level, and Hochschule Niederrhein (Mönchengladbach) offers B.Sc. and M.Sc. programs. National governments have also become involved in the business with trade rules and negotiations as well as investments such as Europe's Future Textiles and Clothing program.
Research and scientific publications
Modern clothing development is performed using 3D avatars, often obtained through 3D scanning. Obtaining valid body measurements from a 3D scan is still an active research area; recent topics relate more to the speed and accuracy of the process than to the methods themselves. TU Dresden applies a high-speed (4D) scanning system, developed by IBV, for the analysis of human body deformations during motion. The open-access CDATP journal is a source for the latest research in clothing development, covering soft avatars, protective clothing, comfort, and other topics.
See also
Textile manufacturing
Wet processing engineering
Spinning (textiles)
E-textiles
Gore-Tex
Polypropylene
Rayon
SuperFabric
Zephyr Technology
References
History of clothing
Textile engineering
Clothing industry
Textile industry
Axes conventions

In ballistics and flight dynamics, axes conventions are standardized ways of establishing the location and orientation of coordinate axes for use as a frame of reference. Mobile objects are normally tracked from an external frame considered fixed. Other frames can be defined on those mobile objects to deal with relative positions for other objects. Finally, attitudes or orientations can be described by a relationship between the external frame and the one defined over the mobile object.
The orientation of a vehicle is normally referred to as attitude. It is described normally by the orientation of a frame fixed in the body relative to a fixed reference frame. The attitude is described by attitude coordinates, and consists of at least three coordinates.
While from a geometrical point of view the different methods to describe orientations are defined using only some reference frames, in engineering applications it is important also to describe how these frames are attached to the lab and the body in motion.
Due to the special importance of international conventions in air vehicles, several organizations have published standards to be followed. For example, German DIN has published the DIN 9300 norm for aircraft (adopted by ISO as ISO 1151–2:1985).
Earth bounded axes conventions
World reference frames: ENU and NED
Basically, two kinds of conventions are used for the lab or reference frame:
East, North, Up (ENU), used in geography
North, East, Down (NED), used specially in aerospace
These frames are referenced with respect to global reference frames such as the Earth-Centered, Earth-Fixed (ECEF) non-inertial system.
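As a minimal sketch (plain Python; the function name is mine), converting a vector between the two conventions amounts to swapping the first two components and negating the third:

```python
def enu_to_ned(east, north, up):
    """Re-express an (east, north, up) vector in (north, east, down) axes.

    The mapping swaps the first two components and negates the third.
    Applying it twice returns the original vector, so the same function
    also converts NED coordinates back to ENU.
    """
    return (north, east, -up)

# A vector pointing east with a small upward component in ENU axes
# has east as its second component and a negative "down" part in NED.
v_ned = enu_to_ned(1.0, 0.0, 0.2)   # -> (0.0, 1.0, -0.2)
```

Because the transform is its own inverse, the same function converts in either direction.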
World reference frames for attitude description
To establish a standard convention to describe attitudes, it is required to establish at least the axes of the reference system and the axes of the rigid body or vehicle. When an ambiguous notation system is used (such as Euler angles) the convention used should also be stated. Nevertheless, most used notations (matrices and quaternions) are unambiguous.
Tait–Bryan angles are often used to describe a vehicle's attitude with respect to a chosen reference frame, though any other notation can be used. The positive x-axis in vehicles points always in the direction of movement. For positive y- and z-axis, we have to face two different conventions:
In case of land vehicles like cars, tanks etc., which use the ENU-system (East-North-Up) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its left, and the positive z- or yaw axis always points up. The body frame's origin is fixed at the center of gravity of the vehicle.
By contrast, in case of air and sea vehicles like submarines, ships, airplanes etc., which use the NED-system (North-East-Down) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its right, and its positive z- or yaw axis always points down. The body frame's origin is fixed at the center of gravity of the vehicle.
Finally, in case of space vehicles like the Space Shuttle etc., a modification of the latter convention is used, where the vehicle's (body's) positive y- or pitch axis again always points to its right, and its positive z- or yaw axis always points down, but “down” now may have two different meanings: If a so-called local frame is used as external reference, its positive z-axis points “down” to the center of the Earth as it does in case of the earlier mentioned NED-system, but if the inertial frame is used as reference, its positive z-axis will point now to the north celestial pole, and its positive x-axis to the Vernal Equinox or some other reference meridian.
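As a hedged illustration of how Tait–Bryan angles encode an attitude, the following plain-Python sketch builds the direction cosine matrix for the common aerospace z-y′-x″ (yaw–pitch–roll) sequence; the function name is mine, not from any standard:

```python
import math

def body_to_world(yaw, pitch, roll):
    """Direction cosine matrix for the aerospace z-y'-x'' Tait-Bryan
    sequence: yaw about z, then pitch about the intermediate y, then
    roll about the resulting x. Multiplying a body-frame vector by this
    matrix expresses it in the NED world frame. Angles are in radians.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# With a 90-degree yaw and zero pitch and roll, the body x-axis
# (the first column of the matrix) points east, i.e. (0, 1, 0) in NED.
R = body_to_world(math.pi / 2, 0.0, 0.0)
```

Note that the same three angles produce a different matrix under the ENU-referenced land-vehicle convention, which is why the convention in use must always be stated.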
Frames mounted on vehicles
Especially for aircraft, these frames do not need to agree with the earth-bound frames along the up-down line. It must be agreed what ENU and NED mean in this context.
Conventions for land vehicles
For land vehicles it is rare to describe their complete orientation, except when speaking about electronic stability control or satellite navigation. In this case, the convention is normally the one of the adjacent drawing, where RPY stands for roll-pitch-yaw.
Conventions for sea vehicles
As with aircraft, the same terminology is used for the motion of ships and boats. Some commonly used words were introduced in maritime navigation. For example, the yaw angle or heading has a nautical origin, with the meaning of "bending out of the course". Etymologically, it is related to the verb 'to go'. It is related to the concept of bearing. It is typically assigned the shorthand notation ψ.
Conventions for aircraft local reference frames
Coordinates to describe an aircraft attitude (Heading, Elevation and Bank) are normally given relative to a reference control frame located in a control tower, and therefore ENU, relative to the position of the control tower on the earth surface.
Coordinates to describe observations made from an aircraft are normally given relative to its intrinsic axes, but normally taking as positive the coordinate pointing downwards, where the points of interest are located. Therefore, they are normally NED.
These axes are normally taken so that X axis is the longitudinal axis pointing ahead, Z axis is the vertical axis pointing downwards, and the Y axis is the lateral one, pointing in such a way that the frame is right-handed.
The motion of an aircraft is often described in terms of rotation about these axes, so rotation about the X-axis is called rolling, rotation about the Y-axis is called pitching, and rotation about the Z-axis is called yawing.
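These three elementary rotations can be written as matrices; the following is a plain-Python sketch (function names are mine, angles in radians):

```python
import math

# Elementary rotations about the aircraft body axes
# (x forward, y to the right, z down).

def rolling(phi):
    """Rotation about the longitudinal x-axis."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def pitching(theta):
    """Rotation about the lateral y-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def yawing(psi):
    """Rotation about the vertical z-axis."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]
```

Composing them in the order yawing, then pitching, then rolling yields a full Tait–Bryan attitude matrix for the aerospace convention.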
Frames for space navigation
For satellites orbiting the Earth it is normal to use the Equatorial coordinate system. The projection of the Earth's equator onto the celestial sphere is called the celestial equator. Similarly, the projections of the Earth's north and south geographic poles become the north and south celestial poles, respectively.
Deep-space satellites use other celestial coordinate systems, such as the ecliptic coordinate system.
Local conventions for space ships as satellites
If the goal is to keep the shuttle during its orbits in a constant attitude with respect to the sky, e.g. in order to perform certain astronomical observations, the preferred reference is the inertial frame, and the RPY angle vector (0|0|0) describes an attitude then, where the shuttle's wings are kept permanently parallel to the Earth's equator, its nose points permanently to the vernal equinox, and its belly towards the northern polar star (see picture). (Note that rockets and missiles more commonly follow the conventions for aircraft where the RPY angle vector (0|0|0) points north, rather than toward the vernal equinox).
On the other hand, if the goal is to keep the shuttle during its orbits in a constant attitude with respect to the surface of the Earth, the preferred reference will be the local frame, with the RPY angle vector (0|0|0) describing an attitude where the shuttle's wings are parallel to the Earth's surface, its nose points to its heading, and its belly down towards the centre of the Earth (see picture).
Frames used to describe attitudes
Normally the frames used to describe a vehicle's local observations are the same frames used to describe its attitude with respect to the ground tracking stations; that is, if an ENU frame is used at a tracking station, ENU frames are also used onboard, and the same frames are used to express local observations.
An important case in which this does not apply is aircraft. Aircraft observations are performed downwards, and therefore the NED axes convention normally applies. Nevertheless, when attitudes with respect to ground stations are given, a relationship between the local earth-bound frame and the onboard ENU frame is used.
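The relationship between the two local conventions mentioned above is a fixed axis permutation. A minimal sketch, assuming vectors are given as plain 3-component arrays:

```python
import numpy as np

# NED components are (North, East, Down); ENU components are (East, North, Up).
# The conversion swaps the first two components and negates the third, so the
# same matrix works in both directions (it is its own inverse).
NED_TO_ENU = np.array([[0.0, 1.0,  0.0],
                       [1.0, 0.0,  0.0],
                       [0.0, 0.0, -1.0]])

def ned_to_enu(v):
    """Re-express the components of a vector from NED axes in ENU axes."""
    return NED_TO_ENU @ np.asarray(v, dtype=float)

def enu_to_ned(v):
    """Re-express the components of a vector from ENU axes in NED axes."""
    return NED_TO_ENU @ np.asarray(v, dtype=float)
```

Because the matrix is its own inverse, converting twice returns the original components.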
See also
Attitude dynamics and control (spacecraft)
Euler's rotation theorem
Gyroscope
Triad Method
Rotation formalisms in three dimensions
Geographic coordinate system
Astronomical coordinate systems
References
Euclidean symmetries
Rotation in three dimensions
Form
Form is the shape, visual appearance, or configuration of an object. In a wider sense, the form is the way something happens.
Form may also refer to:
Form (document), a document (printed or electronic) with spaces in which to write or enter data
Form (education), a class, set, or group of students
Form (religion), an academic term for prescriptions or norms on religious practice
Form, a shallow depression or flattened nest of grass used by a hare
Form, or rap sheet, slang for a criminal record
People
Andrew Form, American film producer
Fluent Form, Australian rapper and hip hop musician
Arts, entertainment, and media
Form (arts organisation), a Western Australian arts organisation
Form (visual art), a three-dimensional geometrical figure; one of the seven elements of art
Poetic form, a set of structural rules and patterns to which a poem may adhere
Musical form, a generic type of composition or the structure of a particular piece
The Forms (band), an American indie rock band
Computing and technology
Form (computer virus), the most common computer virus of the 1990s
Form (HTML), a document form used on a web page to, typically, submit user data to a server
Form (programming), a component-based representation of a GUI window
FORM (symbolic manipulation system), a program for symbolic computations
Google Forms, cloud-based survey software
Oracle Forms, a Rapid Application Development environment for developing database applications
Windows Forms, the graphical API within the Microsoft .NET Framework for access to native Microsoft Windows interface elements
XForms, an XML format for the specification of user interfaces, specifically web forms
Form Energy, an American energy storage startup company focused on utility-scale iron-air batteries
Martial arts
Kata (型 or 形), the detailed pattern of defence-and-attack
Taeguk (Taekwondo) (형), the "forms" used to create a foundation for the teaching of Taekwondo
Taolu (套路), forms used in Chinese martial arts and sport wushu
Mathematics
Algebraic form (homogeneous polynomial), which generalises quadratic forms to degrees 3 and more, also known as quantics or simply forms
Bilinear form, on a vector space V over a field F is a mapping V × V → F that is linear in both arguments
Differential form, a concept from differential topology that combines multilinear forms and smooth functions
First-order reliability method, a semi-probabilistic reliability analysis method devised to evaluate the reliability of a system
Indeterminate form, an algebraic expression that cannot be used to evaluate a limit
Modular form, a (complex) analytic function on the upper half plane satisfying a certain kind of functional equation and growth condition
Multilinear form, which generalises bilinear forms to mappings V^N → F
Quadratic form, a homogeneous polynomial of degree two in a number of variables
Philosophy
Argument form, logical form, or test form: replacing the different words or sentences that make up the argument with letters, along the lines of algebra; the letters represent logical variables
Intelligible form, a substantial form as it is apprehended by the intellect
Substantial form, asserts that ideas organize matter and make it intelligible
Theory of forms, asserts that ideas possess the highest and most fundamental kind of reality
Value-form, an approach to understanding the origins of commodity trade and the formation of markets
Science
Form (botany), a formal taxon at a rank lower than species
Form (zoology), informal taxa used sometimes in zoology
Form, the object of study of morphology
"-form", a term used in science to describe large groups, often used in taxonomy
Isoform, several different forms of the same protein
Sports
Form (exercise), a proper way of performing an exercise
Form (horse racing), or racing form, a record of a racehorse's performance
Kata, a choreographed pattern of martial arts movements made to be practised alone
Other uses
Form (cigarette), a Finnish cigarette brand
Form, a backless bench formerly used for seating in dining halls, school rooms and courtrooms
Form, the relation a word has to a lexeme
Formwork, a mold used for concrete construction
See also
FORM (disambiguation)
Conformity (disambiguation)
Deformation (disambiguation)
Form factor (disambiguation)
Formal (disambiguation)
Formalism (disambiguation)
Formation (disambiguation)
Forme (disambiguation)
Formula (disambiguation)
Inform (disambiguation)
Reform (disambiguation)
Technocapitalism
Technocapitalism or tech-capitalism refers to changes in capitalism associated with the emergence of new technology sectors, the power of corporations, and new forms of organization. Technocapitalism is characterised by constant technological innovation, global competition, the digitisation of information and communication, and the growing importance of digital networks and platforms.
Corporate power and organization
Luis Suarez-Villa, in his 2009 book Technocapitalism: A Critical Perspective on Technological Innovation and Corporatism argues that it is a new version of capitalism that generates new forms of corporate organization designed to exploit intangibles such as creativity and new knowledge. The new organizations, which he refers to as experimentalist organizations are deeply grounded in technological research, as opposed to manufacturing and services production. They are also heavily dependent on the corporate appropriation of research outcomes as intellectual property.
This approach is further developed by Suarez-Villa in his 2012 book Globalization and Technocapitalism: The Political Economy of Corporate Power and Technological Domination, in which he relates the emergence of technocapitalism to globalization and to the growing power of technocapitalist corporations. Taking into account the new relations of power introduced by the corporations that control technocapitalism, he considers new forms of accumulation involving intangibles—such as creativity and new knowledge—along with intellectual property and technological infrastructure. This perspective on globalization—and the effect of technocapitalism and its corporations—also takes into account the growing global importance of intangibles, the inequalities created between nations at the vanguard of technocapitalism and those that are not, the increasing importance of brain-drain flows between nations, and the rise of what he refers to as a techno-military-corporate complex that is rapidly replacing the old military-industrial complex of the second half of the 20th century.
The concept behind technocapitalism is part of a line of thought that relates science and technology to the evolution of capitalism. At the core of this idea of the evolution of capitalism is that science and technology are not divorced from society—or that they exist in a vacuum, or in a separate reality of their own—out of reach of social action and human decision. Science and technology are part of society, and they are subject to the priorities of capitalism as much as any other human endeavor, if not more so. Prominent scientists in the early 20th century, such as John Bernal, posited that science has a social function, and cannot be seen as something apart from society. Other scientists at that time, such as John Haldane, related science to social philosophy, and showed how critical approaches to social analysis are very relevant to science, and to our understanding of the need for science. In our time, this line of thought has encouraged philosophers such as Andrew Feenberg to adopt and apply a critical theory approach to technology and science, providing many important insights on how scientific and technological decisions—and their outcomes—are shaped by society, and by capitalism and its institutions.
The term technocapitalism has been used by one author to denote aspects and ideas that diverge sharply from those explained above. Dinesh D'Souza, writing about Silicon Valley in an article, used the term to describe the corporate environment and venture capital relationships in a high tech-oriented local economy. His approach to the topic was consonant with that of business journals and the corporate management literature. Some newspaper articles have also used the term occasionally and in a very general sense, to denote the importance of advanced technologies in the economy.
Forms of tech capitalism
Tech capitalism can take many forms, such as IP-based financing, in which patents and registered intellectual property in new technology serve as instruments for raising capital; debt-based equity, in which new and innovative technology is used as the security interest for securing financing or leasing; and initial coin offerings (ICOs) such as Blockchain Capital.
See also
"The Californian Ideology"
Corporate capitalism
Corporatocracy
Cyberpunk
Innovation
Information society
Intellectual property
Oligopoly
Political economy
Research and development
Surveillance capitalism
Technological evolution
Technological fix
Technological singularity
Technological unemployment
References
External links
Capitalist systems
Technological change
Neuroscience and sexual orientation
Sexual orientation is an enduring pattern of romantic or sexual attraction (or a combination of these) to persons of the opposite sex or gender, the same sex or gender, or to both sexes or more than one gender, or none of the aforementioned at all. The ultimate causes and mechanisms of sexual orientation development in humans remain unclear and many theories are speculative and controversial. However, advances in neuroscience explain and illustrate characteristics linked to sexual orientation. Studies have explored structural neural-correlates, functional and/or cognitive relationships, and developmental theories relating to sexual orientation in humans.
Developmental neurobiology
Many theories concerning the development of sexual orientation involve fetal neural development, with proposed models illustrating prenatal hormone exposure, maternal immunity, and developmental instability. Other proposed factors include genetic control of sexual orientation. No conclusive evidence has been shown that environmental or learned effects are responsible for the development of non-heterosexual orientation.
As of 2005, sexual dimorphisms in the brain and behavior among vertebrates were accounted for by the influence of gonadal steroidal androgens as demonstrated in animal models over the prior few decades. The prenatal androgen model of homosexuality describes the neuro-developmental effects of fetal exposure to these hormones. In 1985, Geschwind and Galaburda proposed that homosexual men are exposed to high androgen levels early in development and proposed that temporal and local variations in androgen exposure to a fetus's developing brain is a factor in the pathways determining homosexuality. This led scientists to look for somatic markers for prenatal hormonal exposure that could be easily, and non-invasively, explored in otherwise endocrinologically normal populations. Various somatic markers (including 2D:4D finger ratios, auditory evoked potentials, fingerprint patterns and eye-blink patterns) have since been found to show variation based on sexual orientation in healthy adult individuals.
Other evidence supporting the role of testosterone and prenatal hormones in sexual orientation development includes observations of male subjects with cloacal exstrophy who were sex-assigned female at birth, only later to declare themselves male. This supports the theory that the prenatal testosterone surge is crucial for gender identity development. Additionally, females whose mothers were exposed to diethylstilbestrol (DES) during pregnancy show higher rates of bi- and homosexuality.
Variations in the hypothalamus may have some influence on sexual orientation. Studies show that factors such as cell number and size of various nuclei in the hypothalamus may impact one's sexual orientation.
Brain structure
There are multiple areas of the brain which have been found to display differences based on sexual orientation. Several of these can be found in the hypothalamus, including the sexually dimorphic nucleus of the preoptic area (SDN-POA) present in several mammalian species. Researchers have shown that the SDN-POA aids in sex-dimorphic mating behavior in some mammals, which is representative of human sexual orientation. The human equivalent to the SDN-POA is the interstitial nucleus of the anterior hypothalamus, which is also sexually dimorphic and has demonstrated dissimilar sizes between sexualities. There are also other POA-like brain structures in the human brain which differ between sexual orientations, such as the suprachiasmatic nucleus and the anterior hypothalamus. Using meta-analysis of neuroimaging, researchers have concluded that these areas are linked to sexual preferences in humans, which would explain why they may differ based on sexual orientation.
Another area of the brain which demonstrates sexual orientation differentiation is the thalamus, a structure involved in sexual arousal and reward. The thalamus of heterosexual individuals was found to be larger than that of homosexual individuals. The placement of connections in the amygdala has been demonstrated to differ between heterosexual and homosexual individuals. The posterior cingulate cortex and a part of the occipital lobe (the region of the brain that processes visual information) have also been demonstrated to show differences based on sexual orientation.
Research has shown that some of the connections between the hemispheres of the brain differ in size depending on sexual orientation. The anterior commissure was found to be wider in homosexual men than in heterosexual men, and the corpus callosum was found to be larger in homosexual men than in heterosexual men.
Some areas of the brain which researchers looked at but did not find differences in structure between sexualities are the temporal cortex, hippocampus and putamen.
Fraternal birth order effect
Neuroscience has been implicated in the study of birth order and male sexual orientation. A significant volume of research has found that the more older brothers a man has from the same mother, the greater the probability he will have a homosexual orientation. Estimates indicate that there is a 33–48% increase in chances of homosexuality in a male child with each older brother, and the effect is not observed in those with older adoptive or step-brothers, indicative of a prenatal biological mechanism. Ray Blanchard and Anthony Bogaert discovered the association in the 1990s, and named it the fraternal birth order (FBO) effect. The proposed mechanism is that a mother develops an immune response against a substance important in male fetal development during pregnancy, and that this immune effect becomes increasingly likely with each male fetus gestated by the mother. This immune effect is thought to cause an alteration in (some) later-born males' prenatal brain development. The targets of the immune response are molecules (specifically Y-linked proteins, which are thought to play a role in fetal brain sex-differentiation) on the surface of male fetal brain cells, including in sites of the anterior hypothalamus (which has been linked to sexual orientation in other research). Antibodies produced during the immune response are thought to cross the placental barrier and enter the fetal compartment, where they bind to the Y-linked molecules and thus alter their role in sexual differentiation, leading some males to be attracted to men as opposed to women. Biochemical evidence to support this hypothesis was identified in 2017: mothers of gay sons, particularly those with older brothers, had significantly higher anti-NLGN4Y levels than other samples of women, including mothers of heterosexual sons.
The effect does not mean that all or most sons will be gay after several male pregnancies, but rather, the odds of having a gay son increase from approximately 2% for the firstborn son, to 4% for the second, 6% for the third and so on. Scientists have estimated that 15–29% of gay men owe their sexual orientation to this effect, but the number may be higher, as prior miscarriages and terminations of male pregnancies may have exposed their mothers to Y-linked antigens. In addition, the effect is nullified in left-handed men. As it is contingent on handedness and handedness is a prenatally determined trait, it further attributes the effect to be biological, rather than psychosocial. The fraternal birth order effect does not apply to the development of female homosexuality. Blanchard does not believe the same antibody response would cause homosexuality in firstborn gay sons – instead, they may owe their orientation to genes, prenatal hormones and other maternal immune responses which also influence fetal brain development.
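As a rough numerical illustration of how per-brother increases of this kind compound, the following sketch treats the quoted low-end figures (a roughly 2% baseline and a 33% increase per older brother) as a fixed multiplicative increase in odds. These parameter choices and the odds-based interpretation are assumptions made purely for illustration, not a model endorsed by the cited research:

```python
def fbo_probability(older_brothers, baseline_p=0.02, odds_increase=0.33):
    """Illustrative only: compound an assumed fixed per-older-brother
    increase in the odds of a homosexual orientation, starting from an
    assumed 2% baseline probability for a firstborn son."""
    odds = baseline_p / (1.0 - baseline_p)       # convert probability to odds
    odds *= (1.0 + odds_increase) ** older_brothers
    return odds / (1.0 + odds)                   # convert odds back to probability
```

Under these assumptions the probability rises gradually with each older brother while remaining small in absolute terms, consistent with the rough 2%, 4%, 6% progression described above.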
The few studies which have not observed a correlation between gay men and birth order have generally been criticized for methodological errors and sampling methods. J. Michael Bailey has said that no plausible hypothesis other than a maternal immune response has been identified.
Research directions
As of 2005, research directions included:
finding markers for sex steroid levels in the brains of fetuses that highlight features of early neuro-development leading to certain sexual orientations
determine the precise neural circuitry underlying direction of sexual preference
use animal models to explore genetic and developmental factors that influence sexual orientation
further population studies, genetic studies, and serological markers to clarify and definitively determine the effect of maternal immunity
neuroimaging studies to quantify sexual-orientation-related differences in structure and function in vivo
neurochemical studies to investigate the roles of sex steroids upon neural circuitry involved in sexual attraction
See also
Biology and sexual orientation
Neuroscience of sex differences
Heterosexuality
Homosexuality and psychology
References
Behavioral neuroscience
Sexual orientation and science
Sexual orientation and medicine
Backlash (sociology)
A backlash is a strong adverse reaction to an idea, action, or object. It is usually a reflection of a normative resentment rather than a denial of its existence. In Western identitarian political discourse, the term is commonly applied to instances of bias and discrimination against marginalized groups. In this form of discourse, backlash can be explained as the response, or counter-reaction, to efforts of social change made by a group to gain access to rights or power.
Historical Western examples
13th Amendment — Jim Crow Laws were racial backlash in response to the amendment to the United States constitution.
Civil rights — Voting restrictions implemented.
Women's Movement — Backlash centered on infertility issues, women's "biological clock" and shortage of men.
Contemporary Western examples
Me Too Movement — Impacted women in the workforce. Men became more reluctant to hire women deemed attractive, more reluctant to have one-on-one meetings with women, and had greater fears of being unfairly accused. In addition, 56% of women surveyed predicted that men would continue to harass them but would be more cautious to avoid being caught. Backlash against date-rape awareness was also prevalent, with misleading language used in the media: in 1987 it was called an "epidemic" and in 1993 "rape hype", terms that were exaggerated and victim-oriented.
Abortion — Defund Planned Parenthood Act
"This bill temporarily restricts federal funding for Planned Parenthood Federation of America, Inc. Specifically, the bill prohibits, for a one-year period, the availability of federal funds for any purpose to this entity, or any of its affiliates or clinics, unless they certify that the affiliates and clinics will not perform, and will not provide any funds to any other entity that performs, an abortion during such period. This restriction does not apply in cases of rape or incest or where a physical condition endangers a woman's life unless an abortion is performed.
LGBT backlash — Bathroom bills and medical bans are proposed to restrict the rights of transgender youth and adults. Arguments center around fair play in sports and sexual harassment in bathrooms.
Black Lives Matter — Blue Lives Matter and All Lives Matter campaigns created in response.
Bikelash — A colloquial term for the social and political resistance to the creation of urban infrastructure intended to accommodate safer cycling, seemingly at the expense of automobile use.
See also
EDSA III
Estallido social
Feminazi
Straight pride
White backlash
White Lives Matter
Yellow vests movement
Angry young man (South Korea)
Anger
References
Further reading
Democratic backsliding
Reactionary
Right-wing populism
Criticism of modern paganism
Modern paganism, also known as contemporary paganism and neopaganism, is a collective term for new religious movements which are influenced by or derived from the various historical pagan beliefs of pre-modern peoples. Although they share similarities, contemporary pagan religious movements are diverse, and as a result, they do not share a single set of beliefs, practices, or texts.
Due to this diversity, many criticisms of modern paganism are directed towards specific neopagan groups rather than towards all of them. These criticisms range from objections to some groups' belief in gender essentialism or racial supremacy to objections to the worldly focus of pagan organizations.
The analysis of Slavic and, in particular, Russian neopaganism from the standpoint of religious studies and ethnopolitics has been carried out in the works of the religious scholar and historian Victor Schnirelmann.
Criticism of its historicity
Many pagan traditions have been criticized on the basis that they bear little resemblance to the historical practices of which they claim to be revivals.
Gerald Gardner, the founder of Wicca, claimed that it is a continuation of an ancient persecuted Witch cult, a widely discredited notion.
Kemetic Orthodoxy has been criticized for being more based on contemporary revelation than historical continuity. Kemetism as a whole has been criticized over a lack of historical continuity, with most practices having little archaeological support or support from primary sources.
Romuva has been highly praised for maintaining historical continuity by contrast.
Neopagans have been repeatedly criticized for promoting pseudohistory, pseudoscience, pseudoarchaeology, and pseudolinguistics (see for example, The Veles Book).
Certain Russian scholars, such as O. V. Kutarev, believe that "old paganism in its fullness in Europe is essentially unknown, except the beliefs of Mari and Udmurts", and "the restoration of 'pure' paganism as it was in antiquity is impossible".
According to E. L. Moroz, Slavic neopaganism is a religion "in which the names of the ancient Slavic gods are combined with a vulgarized presentation of Hinduism and supplemented by all kinds of revelations about black and light energies and cosmic worlds".
According to Vladimir Borisovich Yashin, neopaganism is not so much an attempt to restore the real traditional pagan religions of the past as it is to establish an occult-esoteric worldview.
Russian archaeologist Leo Klejn, who devoted one of his books to the reconstruction of ancient Slavic paganism, negatively assessed the activities of Rodnovers, who (he asserts) are not really interested in "what their ancient ancestors prayed to and how they got on with their rites, what celebrations they celebrated and what they wore. Their present-day festive and ceremonial actions, devised in the style a la Russe, are a show, a spectacle, a buffoonery. And they themselves are skomorokhi."
The Belarusian publicist, scholar, and former figure of the neopagan movement Alexei Dzermant provided a similar assessment of the Rodnovers' activity:

Their calendar of holidays and pantheon of gods is usually made up of fragments characteristic not of a particular local tradition, but borrowed from various East and West Slavic, Indian, and Scandinavian sources and "cabinet" mythology; folklore texts are usually ignored; forgeries like the "Veles Book" are worshipped as "holy writs"; traditional rites are replaced by invented rituals; ritualistic "prayers" are sung instead of ritual songs; folk music is either completely absent or presented in a "balalaika" form; tasteless stylizations of early medieval and folk attire are understood as "Slavic" clothing; signs and symbols are used in a completely unmotivated manner; and texts of "Rodnovers"-ideologists are imbued with profane esotericism, parascience, dubious historical "discoveries," and national megalomania.

A different view is presented in the works of the Czech ethnologist Jiří Machida, a specialist in Czech ethnography and Rodnovery. He acknowledged the reality of reconstructions of the Slavic pagan ceremonial complex by Rodnovers. At the same time, he noted that the basis for reconstructing the rites is not historical sources, but folklore and the folk worldview.
Victor Schnirelmann distinguished two streams in the world of neopaganism: speculative neopaganism, widespread among the urban intelligentsia, who have lost all connection with tradition and genuine popular culture, and the revival of folk religion in the village, where one can often trace a continuous line of continuity coming from the past. In his opinion, "the first certainly prevails among Russians, Ukrainians, Belarusians, Lithuanians, Latvians, and Armenians, where one can safely speak of the 'invention of tradition'".

Similarly, E. Skachkova distinguished modern paganism as an unbroken tradition, though one changing in response to the challenges of our time (as with the Mari, Udmurts, Ossetians, etc.), from the new paganism, or neopaganism (among peoples who have historically moved away from the pagan past, including Slavic neopaganism), a tradition constructed on the basis of its authors' ideas.

The historian, religious scholar, and ethnologist A. V. Gurko believed that the concept of "neo-paganism" "can be defined from the term 'paganism,' which refers to heterogeneous polytheistic religions, cults, beliefs, and the definition of new religious movements characterized by syncretism, active use of mass media, communications, apocalypticism, missionary work". However, according to M. A. Vasiliev, it is incorrect to apply the term "neo-paganism" (lit. "new paganism") to a movement that has long lost its connection with traditional culture. In his opinion, it is preferable to call this artificial and eclectic intellectual construct pseudo-paganism.
Those who oppose the Greek National Religion try to present its identification with the historical ancient Greek religion as arbitrary, arguing that there was no religion in ancient Greece. They argue that what is called ancient Greek religion is a set of beliefs, mythology, ritual practices, and ideological, philosophical, scientific, and political concepts much more diverse and less rigidly structured than any religion in the modern, Abrahamic sense. This whole developed over a period of more than 1500 years, half of it in eras without writing (which would have acted as a stabilizing factor); it changed from period to period and from region to region and was subjected to foreign influences. The diversity and lack of coherence, according to them, is largely due to the fact that in ancient Greece there was never a supreme religious authority imposing an "orthodox" version, nor were there holy scriptures with universal acceptance, such as the Old Testament, the New Testament, the Quran, or others.
Criticism of pagan communities and hierarchies
The decentralized nature of many pagan communities has led many to demonstrate traits very different from traditional organized religions. Some scholars have argued that the lack of religious hierarchies leads to an increase in political extremism on both the right and left, or that it leads to members feeling lost and unable to find spiritual guidance.
Many pagans have expressed little interest or even opposition to the development of more robust organizational structures. Many express their paganism as a manifestation of a rejection of organized religion. This is especially true among more progressive pagan groups.
This is not a universal ideal, with many pagans citing disillusionment with Christian theology but a desire for a Christian-like organizational structure. This issue continues to be subject to much debate and self-criticism in pagan circles.
Rodnovery and Kemetic Orthodoxy have been relatively free of this criticism due to their more robust organizational structures.
Racial issues
Slavic
Some forms of modern Slavic paganism have been assessed as extremist radical-nationalist by researchers.
Victor Schnirelmann considered Russian neopaganism as a branch of Russian nationalism that denies Russian Orthodoxy (Christianity) as an enduring national value and identified two cardinal tasks that Russian neopaganism set itself: saving Russian national culture from the levelling influence of modernization and protecting the natural environment from the impact of modern civilization. Anti-Christian, nationalistic, antisemitic, and racist attitudes of neopagan groups were noted. This aspect of Rodnovery in Russia has been reflected in a number of court decisions: some Rodnovery organizations and writings were included in the Russian Ministry of Justice's List of Extremist Organizations and the Federal List of Extremist Materials, respectively. According to Shnirelman, "Russian neo-paganism is a radical kind of conservative ideology, characterized by outright anti-intellectualism and populism".
One researcher attributes the origins of Slavic neopaganism to the beginning of the twentieth century. According to another, the psychological motivation for participation in Rodnovery organizations can be linked to a compensatory function: such organizations often attract people who, for various reasons, have not been able to fulfill themselves in other spheres of life.
Another researcher considers it a mistake to reduce the entire diversity of Rodnovery groups to nationalism alone, holding that the ecological direction of Rodnovery is no less significant. In the conclusion of his dissertation he wrote:

An adequate approach to Slavic neo-paganism can ensure the transition of the bulk of its participants to natural-ecological types of groups, thereby defusing national tensions and freeing up greater forces for potential creation under a new identity.
According to another author, Rodnovery is only an attempt to comprehend and recreate historical culture and tradition, which were largely lost by the urban population in the 20th century.
Germanic
Germanic occultism and neopaganism emerged in the early 20th century and became influential, with beliefs such as Ariosophy gaining adherents inside the far-right Völkisch movement, which eventually culminated in Nazism. Post-World War II continuations of similar beliefs gave rise to Wotansvolk, a white nationalist neopagan movement, in the late 20th century.
Adherents of modern white supremacist and neo-Nazi ideologies, with their racist, antisemitic, and anti-LGBTQ beliefs, have continued to practice, infiltrate, or co-opt many Heathen traditions, such as Ásatrú (sometimes called Odinism). These groups believe that the Norse-Germanic beliefs they adhere to form the true Caucasian-European ethnic religion.
The question of race represents a major source of division among Heathens, particularly in the United States. Within the Heathen community, one viewpoint holds that race is entirely a matter of biological heredity, while the opposing position holds that race is a social construct rooted in cultural heritage. In U.S. Heathen discourse, these viewpoints are described as the folkish and the universalist positions, respectively. These two factions—which Kaplan termed the "racialist" and "nonracialist" camps—often clash, with Kaplan claiming that a "virtual civil war" existed between them within the American Heathen community. The universalist and folkish divisions have also spread to other countries, although they have had less impact in the more ethnically homogeneous Iceland. A 2015 survey found that more Heathens subscribed to universalist ideas than to folkish ones.
Contrasting with this binary division, Gardell divides Heathenry in the United States into three groups according to their stances on race: the "anti-racist" group which denounces any association between the religion and racial identity, the "radical racist" faction which believes that the religion should not be followed by members of other racial groups because racial identity is the natural religion of the Aryan race, and the "ethnic" faction which seeks to carve out a middle path by acknowledging the religion's roots in Northern Europe and its connection to people of Northern European heritage. The religious studies scholar Stefanie von Schnurbein adopted Gardell's tripartite division, although she referred to the groups as the "a-racist", "racial-religious", and "ethnicist" factions respectively.
Exponents of the universalist and anti-racist approach believe that the deities of Germanic Europe can call anyone to worship them, regardless of ethnic background. This group rejects the folkish emphasis on race, believing that, even if unintended, it can lead to the adoption of racist attitudes towards people of non-Northern European ancestry. Universalist practitioners such as Stephan Grundy have emphasized that ancient Northern Europeans were known to marry and have children with members of other ethnic groups, and that in Norse mythology the Æsir did the same with the Vanir, Jötnar, and humans; Grundy has used such points to criticize the racialist view. Universalists welcome practitioners of Heathenry who are not of Northern European ancestry; for instance, there are Jewish and African American members of the U.S.-based Troth, and many of its white members have spouses from different racial groups. While some Heathens continue to regard Heathenry as an indigenous religion, proponents of this view have sometimes argued that it is indigenous to the land of Northern Europe rather than to any specific race. Universalist Heathens often express frustration that some journalists depict Heathenry as an intrinsically racist movement, and they use their online presence to stress their opposition to far-right politics.
Folkish practitioners consider Heathenry the indigenous religion of a biologically distinct race, which is conceptualised as being "white", "Nordic", or "Aryan". Some practitioners explain this by asserting that the religion is intrinsically connected to this race's collective unconscious, with the prominent American Heathen Stephen McNallen developing this belief into a concept he termed "metagenetics". McNallen and many others in the "ethnic" faction of Heathenry explicitly state that they are not racists, although Gardell noted that their views would be deemed racist under certain definitions of the word. Gardell considered many "ethnic" Heathens ethnic nationalists, and many folkish practitioners express disapproval of multiculturalism and the mixture of different races in modern Europe, advocating racial separatism. This group's discourse contains much talk of "ancestors" and "homelands", concepts that may be very vaguely defined. Ethnicist Heathens are heavily critical of their universalist counterparts, frequently declaring that the latter have been misled by New Age literature and political correctness. Those who have adopted the "ethnic" folkish position have in turn been criticized by members of the universalist and "radical racist" factions, the former deeming "ethnic" Heathenry a front for racism and the latter deeming its adherents race traitors for their failure to fully embrace white supremacism.
Some folkish Heathens are white supremacists and explicit racists, representing a "radical racist" faction that favours the terms Odinism, Wotanism, and Wodenism. These individuals inhabit "the most distant reaches" of modern paganism, according to Kaplan. The borders between this form of Heathenry and National Socialism (Nazism) are "exceedingly thin", because its adherents pay tribute to Adolf Hitler and Nazi Germany, claim that the white race is facing extinction at the hands of a Jewish world conspiracy, and reject Christianity as a creation of the Jews. Many individuals who were in the inner circle of The Order, a white supremacist militia which was active in the U.S. during the 1980s, called themselves Odinists, and various racist Heathens have espoused the Fourteen Words slogan which was developed by the Order member David Lane. Some white supremacist organisations, such as the Order of Nine Angles and the Black Order, combine elements of Heathenism with elements of Satanism, although other racist Heathens, such as Wotansvolk's Ron McVan, reject the syncretism of these two religions.
Lack of spirituality
Many involved in the New Age have expressed criticism of paganism for emphasizing the material world over the spiritual.
Modern pagans frequently seek to distance themselves from New Age identity and some communities use the term "New Age" as an insult. Their recurring criticism of New Age ethos and practice includes accusations of charging too much money, of thinking in simplistic ways and of engaging in escapism. They reject the common New Age metaphor of a battle between the forces of light and darkness, arguing that darkness represents a necessary part of the natural world which should not be viewed as evil.
New Agers criticise modern pagans for placing too much emphasis on the material world and for lacking a proper spiritual perspective. There has been New Age criticism of how some modern pagans embrace extravagant subcultures, such as adopting dark colour schemes and imagery. People from both movements have accused the other of egocentrism and narcissism.
LGBT issues
Gender dualism, essentialism, and sexual orientation
Ideological issues that affect LGBTQ perception and interaction within the modern pagan community often stem from a traditionally dualistic cosmology, a view which focuses on two overarching and often oppositional categories. In modern paganism, this is traditionally seen surrounding sexuality, particularly heterosexuality, based on a gender binary assigned via genitalia at birth (in other words, gender essentialism).
Binary gender essentialism is highly present in neopagan communities and their respective theological/philosophical belief systems. Pagan sources themselves, such as the Pagan Federation of the U.K., express views concurring with this academic understanding. The basis of the difference is commonly reflected in discussion about spiritual energy, which is traditionally believed to be intrinsically masculine or feminine in type and inherently possessed by those born into either binary gender.
A preeminent example of this belief is the duotheistic veneration of a God-Goddess pairing, often the Triple Goddess and Horned God, a pairing used by Wiccans. The Goddess (representing the feminine) is traditionally seen as receptive, fertile, nurturing, and passive (cast as the Moon), while the God (representing the masculine) as impregnative, a hunter, and active/aggressive (cast as the Sun). Janet Farrar, a notable Wiccan priestess and author, described this as an adoption of yin and yang in Western pagan practice.
This dual-gender archetype is traditionally regarded in a heterosexual manner, a belief reflected in the theology of many neopagan belief systems as well as in practices such as magic and spellcraft, which traditional sects require heterosexual-based dynamics to perform. This can be a struggle for LGBTQ pagans who find the exemplified duality not reflective of their own feelings and desires.
The liturgy of the deity pair is often associated in essentialist ways. The Triple Goddess is associated with the reproductive development and cessation of cisgender women in her three aspects: Maiden, Mother, and Crone. Beginning life, the Maiden (young woman) represents virginal preadolescence. Upon menarche, the woman comes of age and transforms into the Mother (adult woman) aspect, now ostensibly capable of reproduction. Upon menopause, the woman loses the reproductive capacity she once carried, transforming into the Crone (mature woman) aspect. The Moon is believed to represent the menstrual cycle, and many pagans believe the two are linked. Likewise, the Horned God is associated with the reproductive capability of cisgender men. Phallic symbology, such as the eponymous horns, represents the penis and its associated reproductive function.
In his 1997 manifesto Vargsmål, the Norwegian metal musician and racial pagan Varg Vikernes claimed that homosexuality is a type of "spiritual defect" that results from men "develop[ing] womanly instincts" and women "who think they are men"; he also claimed that female bisexuality is "natural" provided it does not reject attraction to men. In 2005, Vikernes claimed on his personal website that "you cannot be Pagan and homosexual or even tolerate homosexuality."
Recent historical views on sexuality and gender
In the mid-20th century dawn of neopaganism, heterosexual dualism was most exemplified in the "Great Rite" of British Traditional Wicca, one of the first notable neopagan ideological groups. In this Rite, a priest and priestess "were cast into rigidly gendered, heteronormative roles" in which the pairing performed a symbolic or literal representation of heterosexual intercourse which was considered vital for venerating supernatural entities and performing magic. It is notable that early neopagan views on sex were radical for their time in their sex positivity and tacit acceptance of BDSM.
Later in the 20th century, as Wicca spread to North America, it incorporated countercultural, second-wave feminist, and LGBTQ elements. The essentialist rigidity loosened under the influence of Carl Jung's notions of anima and animus, and non-heterosexual orientations became more acceptable. By the 1980s and 1990s, figures like Vivianne Crowley and Starhawk carried these evolving beliefs forward. Crowley associated the Jungian binary with classical elements possessed by all—the feminine/anima with water and the masculine/animus with fire. Starhawk, who espoused views similar to Crowley's in the 1979 edition of her seminal book The Spiral Dance, had by the 1999 edition begun calling the masculine-feminine divisions into question entirely, focusing on traits rather than gender archetypes.
At the dawn of the 21st century, queer neopagans and their sects began to assert themselves more publicly. These LGBTQ-aligned groups "challenged the gender essentialism remaining in the sexual polarity still practiced" which remained in certain Wicca and feminist neopagan enclaves. Greater exploration and acceptance of queer and transgender figures began not only for adherents but deities and mythological figures as well. In addition, sex positivity and BDSM were brought back into active exploration and acceptance.
Gardnerian Wicca
Gerald Gardner, the eponymous founder of Gardnerian Wicca, particularly stressed heterosexual approaches to Wicca. This practice may stem from Gardner's text (ostensibly quoting a witch, but perhaps in his own words):
Gardner was accused of homophobia by Lois Bourne, one of the high priestesses of the Bricket Wood coven: "Gerald was homophobic. He had a deep hatred and detestation of homosexuality, which he regarded as a disgusting perversion and a flagrant transgression of natural law... 'There are no homosexual witches, and it is not possible to be a homosexual and a witch,' Gerald almost shouted. No one argued with him." However, Gardner's rumored homophobia is disputed, because his writing shows evidence of a more open and accepting attitude than the hatred or phobia common in the 1950s: "Also, though the witch ideal is to form perfect couples of people ideally suited to each other, nowadays this is not always possible; the right couples go together and the rest go singly and do as they can. Witchcraft today is largely a case of 'make do'."
Criticism by Abrahamic religions
In the Islamic World, pagans are not considered people of the book, so they are not protected by Islamic religious law.
Regarding European paganism, in Modern Paganism in World Cultures: Comparative Perspectives Michael F. Strmiska writes that "in Pagan magazines, websites, and Internet discussion venues, Christianity is frequently denounced as an antinatural, antifemale, sexually and culturally repressive, guilt-ridden, and authoritarian religion that has fostered intolerance, hypocrisy, and persecution throughout the world." Furthermore, in the pagan community, the belief that Christianity and paganism are opposing belief systems is common. This animosity is inflamed by historical conflicts between Christian and pre-Christian religions, as well as the perceived ongoing disdain for paganism among Christians. Some pagans have claimed that Christian authorities have never apologized for the religious displacement of Europe's pre-Christian belief systems, particularly following the Roman Catholic Church's apology for past antisemitism in its A Reflection on the Shoah. They also express disapproval of Christianity's continued missionary efforts around the globe at the expense of indigenous and other polytheistic faiths.
Some Christian authors have published books criticizing modern paganism, while other Christian critics have equated paganism with Satanism, a conflation that the mainstream American entertainment industry often echoed in the 2000s.
In areas such as the US Bible Belt, where conservative Christian dominance is strong, pagans still experience religious persecution. For instance, Strmiska highlighted instances in both the US and the UK in which school teachers were fired when their employers discovered that they were pagans. Thus, many pagans keep their religion private in order to avoid discrimination and ostracism.
Patriarch of Moscow and All Russia Alexy II, at the opening of the Bishops' Council in 2004, called the spread of neopaganism one of the main threats of the 21st century, placing it on a par with terrorism and "other pernicious phenomena of our time". In response, the Circle of Pagan Tradition sent an open letter to the Holy Synod of the Russian Orthodox Church, which was forwarded on 18 October 2004 to the Department for External Church Relations of the Moscow Patriarchate. The letter stated that statements offending the honor and dignity of modern pagans were inadmissible and violated the laws "On Freedom of Conscience and on Religious Associations" and "On Counteracting Extremist Activity".
In publications written by leaders of the Russian Orthodox Church, the unscientific approach of Rodnovery adherents to the reconstruction of ancient Slavic beliefs is extensively documented.
At the opening of the XVIII World Russian People's Council in 2014, Patriarch Kirill of Moscow and All Russia noted that on the road to preserving national memory, "unfortunately, quite painful and dangerous phenomena arise. These include attempts to construct pseudo-Russian pagan beliefs. On the one hand, this is an extremely low estimate of the religious choice of the Russian people who have lived for a thousand years in the bosom of the Orthodox Church, as well as of the historical path taken by Orthodox Russia. On the other hand, it is the conviction of one's own personal and narrow-group superiority over one's own people." He saw the roots of this social phenomenon in the "tendency to ignore the importance of the Russian people" and the "revision of Russian history" in the 1990s, as a result of which many compatriots had shaken "faith in their people and in their country": "How torn must the national consciousness have been, in what caves of thought and spirit must it have been, for someone, considering himself the bearer of the Russian national idea, to abandon the saints and heroes of their native history, the deeds of their ancestors, and make Nazis and their henchmen their idols?"
Philosophical criticism
Beliefs and practices vary widely among pagan groups; however, there are a series of core principles common to most, if not all, forms of modern paganism. The English academic Graham Harvey noted that pagans "rarely indulge in theology".
Neopagan theology has been criticized for its lack of coherence. Proponents often argue that this incoherence is not a problem, since the religion is based more on orthopraxy than orthodoxy.
Criticism by Russian scholars
Historian Vladimir Borisovich Yashin identifies the following main features of this phenomenon:
– The ancient conceptions of the world, the guardians of which the adherents of neo-paganism proclaim themselves to be, are interpreted by them as a strictly structured system of higher knowledge, surpassing both religious dogmatism and the materialistic limitations of modern science, while at the same time harmoniously synthesizing elements of faith and scientific thinking (which in reality usually turns out to be outright irrationalism and eclecticism).
– Neo-pagan texts and doctrines are distinguished, on the one hand, by their scientific imagery, wide use of the notions, ideas and achievements of modern science and technology, and pseudo-rational interpretation of folklore plots and mythologemes. On the other hand, it is argued that mastering the wisdom of the ancestors offers the adherent a superhuman perspective, transforming his nature and turning him into a human god; neopaganism is therefore imbued with the spirit of mysticism and magic.
– Accordingly, neo-pagan knowledge is presented as secret knowledge, oriented toward the chosen and the initiated. Neo-pagans emphasize the direct continuity of their communities (emerging before our eyes) with deeply concealed, rigidly organized unions of ancient wise men which did not disappear with Christianization.
– In addition to the symbols and images of the national tradition, the neo-pagans actively use fragments of classical occult-esoteric systems such as Gnosticism, Kabbalah, Theosophy, etc.
– No less actively, neo-pagan revelations incorporate images and motifs of the fantasy genre and the full standard set of modern technocratic myth-making (paleocontact, space aliens, flying saucers, etc.).
– Neopagan ideology is internally antinomic: on the one hand, neopaganism tends towards nationalism and the cult of the exceptional greatness of its own people, while on the other it asserts the universality of ancient super-knowledge and postulates the existence of a primordial "secret doctrine" that forms the basis of all known spiritual teachings. In this connection neo-paganism freely introduces fragments of the most diverse "alien" traditions into its constructions.
– Neo-paganism openly opposes the historic world religions, even as it adopts much of their dogma, cult practices, etc.
– Neo-paganism represents the most politicized wing of the "new religious movements"; even initially apolitical associations of supporters of reviving the "glorious past" eventually come to be used by certain political forces.
Violent incidents of Rodnovery
Some Rodnovery organizations have been declared extremist by Russian courts. Rodnovery followers have committed a number of hate crimes, including armed attacks and terrorist acts mainly against members of Orthodox churches and representatives of non-Slavic nationalities.
In 2003, the office of the Memorial NGO in Saint Petersburg was attacked by armed men. Two masked persons threatened the employees of the organization with a hammer, tied them up, threw them into a closet, and removed office equipment worth 125,000 rubles. The prosecutor's office opened a criminal case under Article 162 of the Criminal Code of Russia ("Robbery"). Vladimir Golyakov, the chief priest of the association "Skhoron zhizhen", was detained in connection with this case. In 2004, Golyakov was given a five-year suspended sentence for robbery.
In 2006, Alexandr Koptsev broke into the synagogue on Bolshaya Bronnaya (Moscow) with a knife and wounded Rabbi Yitzhak Kogan and nine congregants. He was detained at the scene by a security guard and worshippers at the synagogue. During the investigation of the case, it was found that an essay titled "The Blow of the Russian Gods" had been Koptsev's favorite book.
In 2007, a student at a Penza university broke a memorial plaque into pieces with several blows of an axe and damaged a wooden cross at the site of the planned St. Elisabethan spiritual and pastoral center; he resisted law enforcement officers when they tried to arrest him, spraying one of them in the eyes with liquid from a gas canister. During the trial, the defendant stated that he had committed the act under the impression of the book "The Blow of the Russian Gods".
In 2008, in Yekaterinburg, a swastika and the inscription "Russian or Christian. Choose one." appeared on a wooden church. The next day a young man threw one Molotov cocktail at the church and another at the parish school, and fled the scene. The wooden church burned down completely.
In 2008, Rodnovers David Bashelutskov, Stanislav Lukhmyrin, and Yevgenia Zhihareva made a bomb, placing it in a three-liter can with a firecracker as a fuse, and brought it to the Church of Nicholas the Wonderworker in Biryulyovo. Anna Mikhalkina, a 62-year-old woman who served at the church, found the smoking bomb and poured water on it, so that only the fuse detonated. Nevertheless, Mikhalkina and parishioner Pavel Bukovsky, who carried the bomb out of the church, were seriously injured: Mikhalkina sustained burns and shrapnel wounds and lost one eye, while Bukovsky suffered a head contusion and a leg wound. The Rodnovers had left the bomb in the church during the evening service, when the building was full of people; experts noted that the amount of explosives would have been enough to completely destroy the wooden building, and the attackers were counting on a large number of casualties. The group had formed an "autonomous fighting group" and, before the attack on the church, had killed more than a dozen people of "non-Slavic" appearance, including a 60-year-old Azerbaijani; in 2008 they also killed a Russian man near the Church of Nicholas the Wonderworker in Biryulyovo, mistaking him for an Orthodox priest. The Rodnovers considered the mosque on Poklonnaya Hill to be their next target.
In 2009, a wooden church of Blessed Kozma Verkhotursky was burned down in Yekaterinburg using a Molotov cocktail. The arsonists left an inscription on the fence of the church: "The Warrior of the Rod. Svyatoslav's Men."
In 2009, in Vladimir, Sergey Khlupin left a letter on the fence of the St. Cyril and Methodius Church, threatening to blow up the church if it continued to function. He then committed an act of terror by throwing a homemade bomb through the window. The explosion damaged church utensils, but no one was hurt. The explosion was a warning, accompanied by a leaflet that read: "By terror and the destruction of Judeo-Christian shrines, we will put an end to the spread of this contagion." Khlupin intended to kill people if his ultimatum was not met, but was detained. During a search he was found to have an arsenal of weapons.
In 2014, Stepan Komarov, a security company officer and follower of the teachings of Nikolai Levashov, opened fire on parishioners of the Resurrection Cathedral in Yuzhno-Sakhalinsk, wounding six people and killing two: the nun Lyudmila (Pryashnikova) and the parishioner Vladimir Zaporozhets, who had been asking for alms on the porch and entered the church to try to stop the attacker.
Rodnovers are also attacked by their ideological opponents, mostly Orthodox, and less frequently by other nationalists. Rodnovers' places of worship are often destroyed, and the police often fail to act.
In the village of Okunevo in Omsk Oblast, Rodnovers erected a pillar topped with a swastika, which caused concern among local Orthodox priests. In 1993, a church delegation headed by Archbishop Theodosius arrived in Okunevo. Representatives of the delegation replaced a Shaivite pillar bearing an Om sign with a massive Orthodox cross. Later a chapel was built nearby.
A series of scandals involving the Orthodox community, city authorities, and youth anti-fascist organizations took place in St. Petersburg around the "All-Slavic" sanctuary of Perun there, created under the leadership of Vladimir Golyakov, the high priest of the "Skhoron zhizhen" association. In 2007, the shrine was destroyed.
In 2009, Vladimir Kurovsky, the supreme sorcerer of the Ukrainian association "Ancestral Fire of the Native Orthodox Faith", at the head of his "Regiment of Perun", unveiled an idol of Perun on Mount Bogit (Ternopil region, not far from the discovery site of the Zbruch Idol). A few days later, the idol was torn down by fighters of Tryzub led by Greek Catholic priests. Several priests were beaten.
In 2012, four participants were shot and stabbed at a Kupala festival held by Rodnovers in Bitsa Park (Moscow). Vladimir Golyakov, who was present there, was slightly injured. Golyakov blamed the incident on a "foreign yoke" and wrote that it was no accident that it happened during Vladimir Putin's visit to Israel. Lubomir (Dionis Georgis), head of the Commonwealth of Natural Faith "Slavia" wrote that a number of Rodnovers hold the Russian Orthodox Church and the state bodies supporting it responsible for the attack.
On 17 October 2017, it became known that unknown vandals had destroyed a neopagan temple in the village of Pochinki, Nizhny Novgorod Oblast: they knocked down two idols and ripped animal skulls from trees.
In 2015, Vladimir Golyakov installed a neopagan idol near an Orthodox church in Kupchino, which was then felled by unknown persons and sawed to pieces by the Orthodox political figure Vitaly Milonov. Golyakov set up the pole again.
Historian and religious scholar R. V. Shizhensky believes that Rodnovery is not dangerous and that radical groups should be dealt with by law enforcement agencies.
The Red Ribbon Project of the Traditional Religions Foundation monitors persecution, harassment of Rodnovers, and conflicts. It publishes an annual report on the number of sanctuaries destroyed, negative statements, etc.
The conflict in eastern Ukraine has provoked different reactions among Ukrainian Rodnovers. Representatives of the Native Ukrainian National Faith view Russia as the aggressor, while members of other Rodnovery organizations, such as the pan-Slavic Ancestral Fire of the Native Orthodox Faith, most often believe that Russians and Ukrainians are brothers and that the conflict is caused by machinations of the United States.
Rodnovers played an important role in the war in Donbas by forming armed units or joining active units. Some, such as the Svarog battalion, fought on the side of the rebels; others, such as the Azov detachment, fought on the side of Ukraine.
The Svarog battalion, an all-Rodnover military unit, fought for the Donetsk People's Republic until its leader was imprisoned by the republic; it had distinctive practices such as vegetarianism.
Pro-Russian Slavic military units in the Donbas include the Svarog battalion, "Varyag", the "Rusich" sabotage and assault reconnaissance group (SSRG) in the Luhansk People's Republic, and the "Ratibor" SSRG under the "Batman" group, as well as Rodnovers in the Russian Orthodox Army. Rodnovers are engaged in missionary work in the region and promote the concept of a new Russian world. Pro-Russian Rodnovers often use the eight-branched swastika as a military symbol.
The commander of the SSRG "Rusich" was Alexei Milchakov, a well-known neo-Nazi from St. Petersburg who had repeatedly killed and eaten dogs and called out on social media: "Cut up homeless people, puppies, and children!" According to him, Rusich consists of "nationalist Rodnovers ... volunteers from Russia and Europe", acts as a "closed collective", and is a unit in which Russian nationalists receive combat training. The chevrons of "Rusich" fighters bear an eight-pointed swastika, and the emblem of the SSRG "Ratibor" carries a swastika and a skull. In the Petrovsky district of Donetsk, the Rodnovery community "Kolo Derevo Roda", led by Oleg Orchikov (volkhv Vargan), organized the "Svarga" people's militia in late February 2014, which grew during the war into the numerous Svarog battalion, the fourth battalion of the "Oplot" association. Orchikov (call sign "Vargan") became the battalion commander. He constantly wore a priest's armband with a swastika pattern and called for the creation of "one state from the Pacific Ocean to the Atlantic Ocean". Battalion fighters performed prayers before idols in camouflage and with submachine guns, propagated the idea of "the superiority of the Slavic race", and claimed that their opponents "have not yet reached human level". On 28 October 2014, drunken members of the Svarog battalion beat each other and civilians and opened fire in Donetsk, after which Orchikov was arrested in November and the battalion disbanded; its leadership, headed by Orchikov, was sent to prison, while the remaining fighters were integrated into the DNR army. Soon afterwards, the Russian volunteer Rodnovers who had fought in the Luhansk People's Republic as part of the Ghost Battalion and the Batman rapid reaction group were sent back to their homeland.
See also
"The Pagan School"
References
Bibliography
Paganism
Modern paganism and society
Cultural resource management

In the broadest sense, cultural resource management (CRM) is the vocation and practice of managing heritage assets and other cultural resources, such as contemporary art. It incorporates cultural heritage management, which is concerned with traditional and historic culture, and it also deals with the material culture studied by archaeology. Cultural resource management encompasses current culture, including progressive and innovative culture, such as urban culture, rather than simply preserving and presenting traditional forms of culture.
However, this broad usage of the term is relatively recent, and as a result it is most often used as a synonym for heritage management. In the United States, cultural resource management is not usually divorced from the heritage context. The term is "used mostly by archaeologists and much more occasionally by architectural historians and historical architects, to refer to managing historic places of archaeological, architectural, and historical interests and considering such places in compliance with environmental and historic preservation laws."
Cultural resources include both physical assets such as archaeology, architecture, paintings, and sculptures, and also intangible culture such as folklore and interpretative arts like storytelling and drama. Cultural resource managers are typically in charge of museums, galleries, theatres, etc., especially those that emphasize culture specific to the local region or ethnic group. Cultural tourism is a significant sector of the tourism industry.
At a national and international level, cultural resource management may be concerned with larger themes, such as languages in danger of extinction, public education, the ethos or operation of multiculturalism, and promoting access to cultural resources. The Masterpieces of the Oral and Intangible Heritage of Humanity is an attempt by the United Nations to identify exemplars of intangible culture.
Background
Federal legislation had been passed as early as 1906 under the Antiquities Act, but it was not until the 1970s that the term "cultural resources" was coined by the National Park Service. The Archaeological and Historic Preservation Act of 1974, commonly known as the Moss-Bennett Act, helped to fuel the creation of CRM. The National Park Service defines cultural resources as "physical evidence or place of past human activity: site, object, landscape, structure; or a site, structure, landscape, object or natural feature of significance to a group of people traditionally associated with it."
Cultural resource management applied to heritage management
Cultural resource management in the heritage context is mainly concerned with the investigation of sites with archaeological potential, the preservation and interpretation of historic sites and artifacts, and the culture of indigenous people. The subject developed from initiatives in rescue archaeology, sensitivities to the treatment of indigenous people, and subsequent legislation to protect cultural heritage.
Current cultural resource management laws and practices in the United States address the following resources:
Historic properties (as listed or eligible for the National Register of Historic Places)
Older properties that may have cultural value, but may or may not be eligible for the National Register
Spiritual places
Cultural landscapes
Archaeological sites
Shipwrecks, submerged aircraft
Native American graves and cultural items
Historical documents
Archaeological and historical artifacts
Religious sites
Religious practices
Cultural use of natural resources
Folklife, tradition, and other social institutions
A significant proportion of the archaeological investigation in countries that have heritage management legislation including the United States and United Kingdom is conducted on sites under threat of development. In the US, such investigations are now done by private companies on a consulting basis, and a national organization exists to support the practice of CRM. Museums, besides being popular tourist attractions, often play roles in conservation of, and research on, threatened sites, including as repositories for collections from sites slated for destruction.
National Register eligibility
In the United States, a common Cultural Resource Management task is the implementation of a Section 106 review: CRM archaeologists determine whether federally funded projects are likely to damage or destroy archaeological sites that may be eligible for the National Register of Historic Places. This process commonly entails one or more archaeological field surveys.
Careers in CRM
Cultural resource management features people from a wide array of disciplines. The general education of most involved in CRM includes, but is not limited to, sociology, archaeology, architectural history, cultural anthropology, social and cultural geography, and other fields in the social sciences.
In the field of cultural resource management there are many career choices. One could obtain a career with an action agency that works directly with NEPA or, more specifically, with Native American resources. There are also careers in review agencies such as the Advisory Council on Historic Preservation (ACHP) or the state historic preservation office (SHPO). Beyond these choices, one could also work in local government with planning agencies, housing agencies, social service agencies, local museums, libraries, or educational institutions. Jobs at private cultural resource management companies range from field technicians (see shovelbum) to principal investigators, project archaeologists, historic preservationists, and laboratory staff. One could also become part of an advocacy organization, such as the National Trust for Historic Preservation.
Debates
It is commonly debated in cultural resource management how to determine whether cultural or archaeological sites should be considered significant. The criteria stated by the National Register of Historic Places are said to be able to be "interpreted in different ways so that the significance... may be subjectively argued for many cultural resources." Another issue that arises among scholars is that "protection does not necessarily mean preservation." Any public project occurring near a cultural resource can have adverse effects, and development plans for a proposed project may not be adaptable enough to limit impact and avoid damage to the resource.
Management of cultural organizations
The vocation of management in cultural and creative sectors is the subject of research and improvement initiatives by organizations such as Arts and Business, which take a partnership approach to involving professional business people in running and mentoring arts organizations. Some universities now offer vocational degrees.
The management of cultural heritage is underpinned by academic research in archaeology, ethnography and history. The broader subject is also underpinned by research in sociology and culture studies.
Anthropology
Understanding the traditional cultures of all peoples (Indigenous or not) is essential to mitigating the adverse impact of development and to ensuring that intervention by more developed nations is not prejudicial to the interests of local people and does not result in the extinction of cultural resources.
Cultural resources policies
Cultural resources policies have developed over time with the recognition of the economic and social importance of heritage and other cultural assets.
The exploitation of cultural resources can be controversial, particularly where the finite cultural heritage resources of developing countries are exported to satisfy demand in the developed world's antiquities market. The exploitation of the potential intellectual property of traditional remedies in identifying candidates for new drugs has also been controversial. On the other hand, traditional crafts, performances of traditional dances, and music popular with tourists can be important sources of tourism income, and traditional designs can be exploited in the fashion industry. Popular culture can also be an important economic asset.
See also
Centre for Cultural Resources and Training
Committee on Education, Culture, Tourism and Human Resources
Community art
Cultural anthropology
Cultural Heritage Management
Cultural landscape
Cultural tourism
Valletta Treaty
Historic preservation
Intangible culture
Masterpieces of the Oral and Intangible Heritage of Humanity
National Register of Historic Places
Natural resource management
Portal:Arts
Public history
Rescue archaeology
Urban culture
References
Further reading
American Cultural Resources Association. 2013. The Cultural Resources Management Industry: Providing Critical Support for Building Our Nation’s Infrastructure through Expertise in Historic Preservation. Electronic document.
Hutchings, Rich. 2014. "The Miner’s Canary"—What the Maritime Heritage Crisis Says About Archaeology, Cultural Resource Management, and Global Ecological Breakdown. Unpublished PhD dissertation, Interdisciplinary Studies, University of British Columbia.
Hutchings, Rich and Marina La Salle. 2012. Five Thoughts on Commercial Archaeology. Electronic document.
King, Thomas F. 2012. Cultural Resource Laws and Practice: An Introductory Guide (4th Edition). Altamira Press.
King, Thomas F. 2009. Our Unprotected Heritage: Whitewashing the Destruction of Our Cultural and Natural Environment. Left Coast Press.
King, Thomas F. 2005. Doing Archaeology: A Cultural Resource Management Perspective. Left Coast Press.
La Salle, Marina and Rich Hutchings. 2012. Commercial Archaeology in British Columbia. The Midden 44(2): 8-16.
Neumann, Thomas W. and Robert M. Sanford. 2010. Cultural Resources Archaeology: An Introduction (2nd Edition). Rowman and Littlefield.
Neumann, Thomas W. and Robert M. Sanford. 2010. Practicing Archaeology: A Training Manual for Cultural Resources Archaeology (2nd Edition). Rowman and Littlefield.
Nissley, Claudia and Thomas F. King. 2014. Consultation and Cultural Heritage: Let Us Reason Together. Left Coast Press.
Smith, Laurajane. 2004. Archaeological Theory and Politics of Cultural Heritage. Routledge.
Smith, Laurajane. 2001. Archaeology and the Governance of Material Culture: A Case Study from South-Eastern Australia. Norwegian Archaeological Review 34(2): 97-105.
Smith, Laurajane. 2000. A History of Aboriginal Heritage Legislation in South-Eastern Australia. Australian Archaeology 50: 109-118.
Stapp, Darby and Julia J. Longenecker. 2009. Avoiding Archaeological Disasters: A Risk Management Approach. Left Coast Press.
White, Gregory G. and Thomas F. King. 2007. The Archaeological Survey Manual. Left Coast Press.
Zorzin, Nicolas. 2014. Heritage Management and Aboriginal Australians: Relations in a Global, Neoliberal Economy—A Contemporary Case Study from Victoria. Archaeologies: The Journal of the World Archaeological Congress 10(2): 132-167.
Zorzin, Nicolas. 2011. Contextualising Contract Archaeology in Quebec: Political Economy and Economic Dependencies. Archaeological Review from Cambridge 26(1): 119-135.
Parga Dans, Eva and Pablo Alonso Gonzalez. 2020. The Unethical Enterprise of the Past: Lessons from the Collapse of Archaeological Heritage Management in Spain. Journal of Business Ethics.
External links
American Cultural Resources Association
Collections care
Conservation and restoration of cultural heritage
Museology
The arts
Cultural anthropology
Spatial design

Spatial design is a relatively new conceptual design discipline that crosses the boundaries of traditional design specialisms such as architecture, landscape architecture, landscape design, interior design, urban design and service design, as well as certain areas of public art.
It focuses upon the flow of people between multiple areas of interior and exterior environments and delivers value and understanding in spaces across both the private and public realm. The emphasis of the discipline is upon working with people and space, particularly looking at the notion of place, as well as place identity and genius loci. As such, the discipline covers a variety of scales, from the detailed design of interior spaces to large regional strategies, and is largely found within the UK. As a discipline, it uses the language of architecture, interior design and landscape architecture to communicate design intentions. Spatial design draws on research methods often found in disciplines such as product and service design, such as those identified by IDEO, as well as social and historical methods that help with the identification and determination of place.
As the field grows, a number of spatial design practitioners work within existing disciplines or as independent consultants.
The subject is studied at a number of institutions within the UK, Denmark, Switzerland, and Italy though, as with any new field of study, these courses differ in their scope and ambition.
Ultimately it can be seen as "the glue that joins traditional built environment disciplines together with the people they are designed to serve".
During the COVID-19 pandemic, spatial design became an important aspect of reshaping the collective use of urban space and of thinking about access and egress.
References
Public art
Urban design
Environmental design
Landscape architecture
Types of garden
Interior design
Componential analysis

Componential analysis (feature analysis or contrast analysis) is the analysis of words through structured sets of semantic features, which are given as "present", "absent" or "indifferent with reference to feature". The method thus departs from the principle of compositionality. Componential analysis is a method typical of structural semantics which analyzes the components of a word's meaning. Thus, it reveals the culturally important features by which speakers of the language distinguish different words in a semantic field or domain (Ottenheimer, 2006, p. 20).
Examples
man = [+ MALE], [+ MATURE]
woman = [– MALE], [+ MATURE]
boy = [+ MALE], [– MATURE]
girl = [– MALE], [– MATURE]
child = [+/– MALE], [– MATURE]

In other words, the word girl can have three basic factors (or semantic properties): human, young, and female. As another example, being edible is an important factor by which plants may be distinguished from one another (Ottenheimer, 2006, p. 20). To summarize, one word can have basic underlying meanings that are well established depending on the cultural context. It is crucial to understand these underlying meanings in order to fully understand any language and culture.
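Feature matrices like these are easy to mechanize. The following Python sketch (an illustration of the method, not code from the cited literature) stores each word as a dictionary of binary features, with an absent key standing for "indifferent", and computes which features distinguish a pair of words:

```python
# Componential analysis sketch: each word maps to binary semantic features.
# True = feature present, False = absent; a missing key means the feature
# is indifferent for that word (e.g. MALE for "child").
LEXICON = {
    "man":   {"MALE": True,  "MATURE": True},
    "woman": {"MALE": False, "MATURE": True},
    "boy":   {"MALE": True,  "MATURE": False},
    "girl":  {"MALE": False, "MATURE": False},
    "child": {"MATURE": False},
}

def contrast(word_a, word_b):
    """Return the features (and their values) that distinguish two words."""
    a, b = LEXICON[word_a], LEXICON[word_b]
    shared = set(a) & set(b)  # compare only features marked in both words
    return {f: (a[f], b[f]) for f in shared if a[f] != b[f]}

print(contrast("man", "woman"))   # man and woman differ only in MALE
print(contrast("girl", "child"))  # empty: no marked feature separates them
```

On this toy lexicon, man/woman contrast in [MALE], man/boy in [MATURE], and girl cannot be distinguished from child because child leaves [MALE] unspecified.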
Historical background
Structural semantics and componential analysis were patterned on the phonological methods of the Prague School, which described sounds by determining the absence and presence of features. On one hand, componential analysis gave birth to various models in generative semantics, lexical field theory and transformational grammar. On the other hand, its shortcomings were also visible:
The discovery procedures for semantic features are not clearly objectifiable.
Only part of the vocabulary can be described through more or less structured sets of features.
Metalinguistic features are expressed through language again.
Features used may not have clear definitions.
Limited in focus and mechanical in style.
As a consequence, entirely different ways to describe meaning were developed, such as prototype semantics.
See also
Ethnoscience
Structural linguistics
Word-sense disambiguation
References
Bussmann, Hadumod (1996), Routledge Dictionary of Language and Linguistics, London: Routledge, s.v. componential analysis.
Ottenheimer, H. J. (2006). The Anthropology of Language. Belmont, CA: Thomson Wadsworth.
Semantics
Ambiguity tolerance–intolerance

Ambiguity tolerance–intolerance is a psychological construct that describes the relationship that individuals have with ambiguous stimuli or events. Individuals view these stimuli in a neutral and open way or as a threat.
History
Ambiguity tolerance–intolerance is a construct that was first introduced in 1949 through the work of Else Frenkel-Brunswik while researching ethnocentrism in children, and it was perpetuated by her research on ambiguity intolerance in connection with the authoritarian personality. It serves to define and measure how well an individual responds when presented with an event that results in ambiguous stimuli or situations. In her study, she tested the notion that children who are ethnically prejudiced also tend to reject ambiguity more than their peers. She assessed children who ranked high and low on prejudice in a story recall test and then examined their responses to an ambiguous disc-shaped figure. The children who scored high in prejudice were expected to take longer to respond to the shape, to be less likely to change their responses, and to be less likely to change their perspectives.
A study by Kenny and Ginsberg (1958), retesting Frenkel-Brunswik's original connection of ambiguity intolerance to ethnocentrism and the authoritarian personality, found that the results were unreplicable. However, it was suggested that this may be because the original study used flawed methodology and because the construct lacked a concrete definition. Most of the research on this subject was completed in the two decades after the publication of The Authoritarian Personality; however, the construct is still studied in psychological research today.
Budner gives three examples as to what could be considered ambiguous situations: a situation with no familiar cues, a situation in which there are many cues to be taken into consideration, and a situation in which cues suggest the existence of different structures to be adhered to.
Conceptualization
There have been many attempts to conceptualize the construct of ambiguity tolerance–intolerance as to give researchers a more standard concept to work with. Many of these conceptualizations are based on the work of Frenkel-Brunswik.
Budner (1962) defines the construct as the following:
Intolerance of ambiguity may be defined as 'the tendency to perceive (i.e. interpret) ambiguous situations as sources of threat'; tolerance of ambiguity as 'the tendency to perceive ambiguous situations as desirable.'
Additionally Bochner (1965) categorized attributes given by Frenkel-Brunswik's theory of individuals who are intolerant to ambiguity.
The nine primary characteristics describe intolerance of ambiguity and are as follows:
Need for categorization
Need for certainty
Inability to allow good and bad traits to exist in the same person
Acceptance of attitude statements representing a white-black view of life
A preference for familiar over unfamiliar
Rejection of the unusual or different
Resistance to reversal of fluctuating stimuli
Early selection and maintenance of one solution in an ambiguous situation
Premature closure
The secondary characteristics describe individuals who are intolerant to ambiguity as:
authoritarian
dogmatic
rigid
closed minded
ethnically prejudiced
uncreative
anxious
extra-punitive
aggressive
Operationalization and measurement
Because of the lack of a concrete conceptualization of ambiguity intolerance, there are a variety of ways to measure the construct. For example, Stanley Budner developed a scale with 16 items designed to measure how subjects would respond to an ambiguous situation.
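Instruments of this kind are typically scored like other Likert scales: responses to items keyed toward tolerance are reverse-coded so that a higher total consistently indicates greater intolerance. The Python sketch below is a generic illustration of that procedure; the item count, response range, and reverse-keyed positions are assumptions for illustration, not a reproduction of Budner's actual instrument.

```python
def score_scale(responses, reverse_keyed, points=7):
    """Sum Likert responses (1..points), flipping reverse-keyed items so a
    higher total always means greater intolerance of ambiguity."""
    total = 0
    for i, r in enumerate(responses):
        if not 1 <= r <= points:
            raise ValueError(f"item {i}: response {r} outside 1..{points}")
        # Reverse-keyed item: 1 becomes `points`, `points` becomes 1, etc.
        total += (points + 1 - r) if i in reverse_keyed else r
    return total

# Four hypothetical items on a 7-point scale; items 1 and 3 are reverse-keyed.
print(score_scale([6, 2, 5, 3], reverse_keyed={1, 3}))  # 6 + 6 + 5 + 5 = 22
```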
Block and Block (1951) operationalized the construct by measuring the amount of time required to structure an ambiguous situation. The less amount of time required to structure, the higher a person would score in ambiguity intolerance.
Levitt (1953) studied intolerance of ambiguity in children and asserted that the Decision Location Test and Misconception Scale both served as accurate measures of ambiguity intolerance.
Psychological implications
The construct of ambiguity intolerance is found in different aspects of psychology and mental health. The construct is used in many branches of psychology including personality, developmental, and social psychology. Some examples of how tolerance–intolerance of ambiguity is used within various branches are displayed below.
Personality psychology
The construct of ambiguity intolerance was conceptualized in the study of personality. While the original theory that ambiguity intolerance is positively correlated with authoritarian personalities has come under fire, the construct is still used in this branch. A study testing college students' tolerance for ambiguity found that students who were involved in the arts scored higher than business students on ambiguity tolerance, leading to the assertion that creativity is linked to the construct.
Developmental psychology
Harington, Block, and Block (1978) assessed intolerance of ambiguity in children at an early age, ranging from 3.5 to 4.5 years. The children were assessed using two tests performed by caretakers in a daycare center. The researchers then re-evaluated the children when they turned seven, and their data showed that male students who were high in ambiguity intolerance at the early age had more anxiety, required more structure, and had less effective cognitive structure than their female peers who had also tested high in ambiguity intolerance.
Social psychology
Being intolerant to ambiguity can affect how an individual perceives others with whom they come into contact. Social psychology uses ambiguity tolerance–intolerance to study these relationships and the relationship one holds with themselves. Research has been conducted on how ambiguity tolerance–intolerance interacts with racial identity, homophobia, marital satisfaction, and pregnancy adjustment.
Mental health
Research shows that being too far on either end of the spectrum of ambiguity tolerance–intolerance can be detrimental to mental health.
Ambiguity intolerance is thought to serve as a cognitive vulnerability that can lead, in conjunction with stressful life events and negative rumination, to depression. Anderson and Schwartz hypothesize that this is because ambiguity intolerant individuals tend to see the world as concrete and unchanging, and when an event occurs which disrupts this view these individuals struggle with the ambiguity of their future. Therefore, those who are intolerant to ambiguity begin to have negative cognitions about their respective situation, and soon view these cognitions as a certainty. This certainty can serve as a predictive measure of depression.
References
Human communication
Pedagogy
Ambiguity
Dramatic convention

Dramatic conventions are the specific actions and techniques the actor, writer or director has employed to create a desired dramatic effect or style.
A dramatic convention is a set of rules which both the audience and actors are familiar with and which act as a useful way of quickly signifying the nature of the action or of a character.
All forms of theatre have dramatic conventions, some of which may be unique to that particular form, such as the poses used by actors in Japanese kabuki theatre to establish a character, or the stock character of the black-cloaked, mustache-twirling villain in early cinema melodrama serials.
It can also include an implausible facet of a performance required by the technical limitations or artistic nature of a production, which is accepted by the audience as part of the suspension of disbelief. For example, a dramatic convention in Shakespeare is that a character can move downstage to deliver a soliloquy which cannot be heard by the other characters on stage; likewise, characters in a musical are not surprised by another character bursting into song. Other examples include how the audience accepts the passage of time during a play or how music plays during a romantic scene.
Dramatic conventions may be categorized into groups, such as rehearsal, technical, or theatrical.
Rehearsal conventions can include hot seating, roles on the wall, and still images. Technical conventions can include lighting, dialogue, monologue, set, costuming, and entrances/exits. Theatrical conventions may include split focus, flashback/flashforward, narration, soliloquy, and spoken thought.
All categories of dramatic conventions may be used in creative drama to support educators teaching dramatic arts. "Jonothan Neelands and Tony Goode note that the experience of drama requires teachers to use forms and structures that engage both the intellect and emotions in making and representing collaborative meaning. [...] As you work in drama, you will discover other modes of representing meaning and your repertoire of ideas for containing and shaping the work will expand and become refined." Educators use dramatic conventions in integrated and cross-curricular instruction – particularly literacy and the humanities – to make meaningful educational experiences for students.
See also
Fourth wall
Suspension of disbelief
References
Acting
Neorealism (art)

In art, neorealism refers to a few movements.
In literature
Portuguese neorealism was a Marxist literary movement that began slightly before Salazar's reign. It was mostly in line with socialist realism.
In Italy, neorealism was a movement that emerged in the end of 1920s and started rapidly developing after World War II. It was represented by such authors as Alberto Moravia, Ignazio Silone, Elio Vittorini, Carlo Levi, Vasco Pratolini and others.
In painting
Neo-realism in painting was established by the ex-Camden Town Group painters Charles Ginner and Harold Gilman at the beginning of World War I. They set out to explore the spirit of their age through the shapes and colours of daily life. Their intentions were proclaimed in Ginner's manifesto in New Age (1 January 1914), which was also used as the preface to Gilman and Ginner's two-man exhibition of that year. It attacked the academic and warned against the 'decorative' aspect of imitators of Post-Impressionism. The best examples of neorealist work are those produced by these two artists, along with Howard Kanovitz and Robert Bevan; Bevan joined the Cumberland Market Group in 1914.
Artists
Howard Kanovitz - Vernissage, 1967 - Cologne, Museum Ludwig
Chuck Close - Linda 1975/76 Akron (OH), Akron Art Museum
Stanley Spencer - Seated Nude, 1942
In cinema
Neorealism is characterized by a general atmosphere of authenticity. André Bazin, a French film theorist and critic, argued that neorealism portrays: truth, naturalness, authenticity, and is a cinema of duration. The necessary characteristics of neo-realism in film include:
a definite social context;
a sense of historical actuality and immediacy;
political commitment to progressive social change;
authentic on-location shooting as opposed to the artificial studio;
a rejection of classical Hollywood acting styles, with extensive use of non-professional actors;
a documentary style of cinematography.
Films
Precursors
Land Without Bread (1933, Spain)
1860 (1934, Italy)
An Inn in Tokyo (1935, Japan)
Toni (1935, France)
Aniki-Bóbó (1942, Portugal)
People of the Mountains (1942, Hungary)
Ossessione (1943, Italy)
Saltimbancos (1951, Portugal)
Italian
Roma, città aperta (1945)
Shoeshine (Sciuscià) (1946)
Paisà (1946)
Germania anno zero (1948)
Bicycle Thieves (Ladri di biciclette) (1948)
La terra trema (1948)
Bitter Rice (1949)
Stromboli (1950)
Miracle in Milan (1951)
Umberto D. (1952)
La strada (1954)
Rocco and His Brothers (1960)
Il Posto (1961)
Other countries
Lowly City (1946, India)
Drunken Angel (1948, Japan)
Stray Dog (1949, Japan)
Los Olvidados (1950, Mexico)
Surcos (1951, Spain)
Ikiru (1952, Japan)
Nagarik (1952, India)
Tokyo Story (1953, Japan)
Two Acres of Land (1953, India)
Salt of the Earth (1954, United States)
Newspaper Boy (1955, India)
The Apu Trilogy (1955–1959, India)
Death of a Cyclist (1955, Spain)
Cairo Station (1958, Egypt)
The Runaway (1958, India)
The 400 Blows (1959, France)
The Beginning and the End (1960, Egypt)
Los golfos (1960, Spain)
Nothing But a Man (1964, United States)
Jeanne Dielman, 23 Quai du Commerce, 1080 Bruxelles (1975, Belgium)
Killer of Sheep (1978, United States)
Pixote (1981, Brazil)
The Stolen Children (1982, Italy)
Yol (1982, Turkey)
Salaam Bombay! (1988, India)
Veronico Cruz (1988, Argentina)
American Me (1992, United States)
Children of Heaven (1997, Iran)
Satya (1998, India)
The City (La Ciudad) (1998, United States)
Not One Less (1999, China)
Rosetta (1999, France)
The Circle (Dayereh) (2000, Iran)
Amores perros (2000, Mexico)
Bolivia (2001, Argentina)
Lilja 4-Ever (2002, Sweden)
Cidade de Deus (2002, Brazil)
Carandiru (2003, Brazil / Argentina)
Familia rodante (2004, Argentina, et al.)
Machuca (2004, Chile)
The Death of Mr. Lazarescu (2005, Romania)
L'Enfant (2005, Belgium / France)
Man Push Cart (2005, United States)
Half Nelson (2006, United States)
Still Life (2006, China)
4 Months, 3 Weeks and 2 Days (2007, Romania)
Chop Shop (2007, United States)
Ballast (2008, United States)
Frozen River (2008, United States)
Involuntary (2008, Sweden)
Lorna's Silence (2008, Belgium)
Wendy and Lucy (2008, United States)
Fish Tank (2009, Great Britain)
Goodbye Solo (2009, United States)
Sin Nombre (2009, United States / Mexico)
Treeless Mountain (2009, United States / South Korea)
Winter's Bone (2010, United States)
0-41* (2016, India)
En el séptimo día (2017, United States)
See also
History of cinema
Nouveau réalisme, a later French art movement
References
External links
Neorealism at the Internet Movie Database.
Movements in cinema
Social realism
Nylonkong

Nylonkong, a contraction of New York–London–Hong Kong, is a neologism coined to link New York City, London, and Hong Kong as the ecumenopolis of the Americas, Euro-Africa, and Asia-Pacific that first appeared in the magazine Time in 2008. The article suggests that the cities share similarities, especially in being globalised financial and cultural centres, and are the most remarkable cities of the 21st century.
History
Nylonkong, from NY (New York City), Lon (London), and Kong (Hong Kong), first appeared in the Time article "A Tale of Three Cities" in 2008.
According to Time, these cities share similarities in their cultural and economic establishments. They are influential in cultural industries, including art auctions, movies, and pop songs. In addition, the cities are top international financial centres located in different time zones. Time further described the cities as the three most remarkable international cities of the 21st century, and suggested that one could better understand the world through understanding Nylonkong. The Financial Development Index ranks Hong Kong, the USA, and the UK in first, second, and third place, respectively.
Influence
A wide range of responses was observed in the media and academia after the term was coined, such as in the Mingpao Daily. The term was first quoted in academia by Yin Pak Andrew Lau in the Liberal Studies Youth Summit on Basic Law, and Sing Tao Daily suggested that the term is a profound sign of globalization. The influence has been substantial and long-lasting, imposing a constant check on the achievement of the three financial hubs. In 2016, the economist Lam Hang-chi, for example, suggested that Nylonkong would fade out if the financial hubs were not constantly competitive. In late 2016, Tai Kung Pao suggested that Hong Kong should not compare itself with New York and London, as the Asian financial centre is politically and socially autonomous compared to the other two, which are only cities; the article further argues that Hong Kong's autonomy requires more diversified economic development, like the other Four Asian Tigers economies.
See also
Multipolar world
Global capitalism
NyLon (concept)
References
Time (magazine)
Culture of New York City
Culture in London
Culture of Hong Kong
2008 neologisms
Homogamy (sociology)

Homogamy is marriage between individuals who are, in some culturally important way, similar to each other. It is a form of assortative mating. The union may be based on socioeconomic status, class, gender, caste, ethnicity, or religion, or on age in the case of so-called age homogamy.
It can also refer to the socialization customs of a particular group in that people who are similar tend to socialize with one another.
Criteria for mates
There are three criteria with which people evaluate potential mates: warmth and loyalty, attractiveness and vitality, and status and resources. These three categories can heavily shape themselves around the secondary traits of ethnicity, religion, and socio-economic status.
Ethnicity can be tied to perceptions of biological vitality and attractiveness. Socio-economic status relates directly to status and resources. Religious or spiritual beliefs directly impact interpersonal behavior; people tend to be warmer and more trusting toward those with similar beliefs. Homogamy is an unsurprising phenomenon given people's liking and nurturing of others who are like them, look like them, and act like them.
Homogamy is the broader precursor of endogamy, which encompasses homogamy in its definition but also includes an open refusal of others on the basis of conflicting traits, appearance, and fiscal worth. Homogamy is much less rigid in structure; a couple can belong to different denominations of Christianity but this will not be a point of contention in the relationship.
Religion
The integration of social science research and religion has given researchers new insight into variables that affect marriage. Thomas and Cornwall (1990) state that the growing body of research focuses on marital stratification and religiosity; findings indicate that higher religiosity within a marriage predicts a happier and more stable partnership.
Data collected from 700 couples in their first marriage and 300 remarried couples, both religious and non-religious/non-practicing, support the following conclusions. The majority of religious couples who regularly attend their denominational or non-denominational church experience a higher level of satisfaction in their marital relationship than non-practicing couples. Religious couples experience increased commitment and tend to be happier because of the stability and guidelines that religion imposes on marriage. Findings in other areas of research also support that same-faith or inter-faith marriages tend to be stronger and more prosperous than non-religious marriages. According to Kalmijn (1998), there are three cultural resources to acknowledge.
First, couples who share religious beliefs tend to communicate and interact more effectively based on doctrine, and may also positively reinforce and encourage each other.
Second, opinions and values shared between spouses may lead to similar behaviour and perspective of the world.
Third, compatible religious views may lead to joint exercises in both religious and nonreligious endeavours, which can further strengthen the relationship.
Ellison and Curtis (2002) wrote that decisions on issues relating to family matters may result in greater consensus among couples who choose homogamy. Church attendance also provides a close network of support for couples. Marital separation between couples attending a denominational or non-denominational church is generally frowned upon and stigmatized.
Socioeconomic status
It is often seen that people choose to marry within their sociological group or with someone who is close to them in status. Characteristics such as ethnicity, race, religion, and socioeconomic status play a role in how someone chooses their spouse. Socioeconomic status can be defined as an individual's income, level of education, and occupation. Research on socioeconomic homogamy was developed by stratification researchers, who used marriage patterns in conjunction with mobility patterns to describe how open stratification systems are (Kalmijn, 2). Socioeconomic status can be divided into two aspects: ascribed status and achieved status. Ascribed status means the occupational class of the father or father-in-law, while achieved status is one's own education and occupation. Ascribed status has become less important, while achieved status and education have not lost their importance.
Most countries look at educational status because it makes the individual easier to judge. The trends of socioeconomic homogamy are studied through the analysis of class, background and education. The importance of social background for marriage choice has declined in a few industrialized countries: the United States, Hungary, France and the Netherlands (Kalmijn, 17). Today parents have less control over their children's choices, as young people spend more time at college or university, expanding their social circles. Education has become important for both cultural taste and socioeconomic status. Romantic considerations follow education, as a high standard of living is everyone's main goal.
See also
Endogamy
Hypergamy
References
External links
Partner similarity and relationship satisfaction: development of a compatibility quotient. Glenn D. Wilson & Jon M. Cousins. Sexual and Relationship Therapy, Vol 18, No. 2, 2003.
Endogamy | 0.766063 | 0.972621 | 0.745089 |
Africanfuturism | Africanfuturism is a cultural aesthetic and philosophy of science that centers on the fusion of African culture, history, mythology, point of view, with technology based in Africa and not limiting to the diaspora. It was coined by Nigerian American writer Nnedi Okorafor in 2019 in a blog post as a single word. Nnedi Okorafor defines Africanfuturism as a sub-category of science fiction that is "directly rooted in African culture, history, mythology and point-of-view..and...does not privilege or center the West," is centered with optimistic "visions in the future," and is written by (and centered on) "people of African descent" while rooted in the African continent. As such its center is African, often does extend upon the continent of Africa, and includes the Black diaspora, including fantasy that is set in the future, making a narrative "more science fiction than fantasy" and typically has mystical elements. It is different from Afrofuturism, which focuses mainly on the African diaspora, particularly the United States. Works of Africanfuturism include science fiction, fantasy, alternate history, horror and magic realism.
Writers of Africanfuturism include Nnedi Okorafor, Tochi Onyebuchi, Oghenechovwe Donald Ekpeki, Tade Thompson, Namwali Serpell, Wole Talabi, Suyi Davies Okungbowa.
History
Early beginnings
Works of Africanfuturism have long existed and have been assigned to Afrofuturism. Themes of Africanfuturism can be traced back to Buchi Emecheta's 1983 novel The Rape Of Shavi and Ben Okri's 1991 novel The Famished Road.
21st century
In 2019 and 2020, African writers began to reject the term Afrofuturism because of the differences between the two genres: Africanfuturism focuses more on African points of view, culture, themes and history, whereas Afrofuturism covers African diaspora history, culture and themes. The speculative fiction magazine Omenana and the Nommo Awards, presented by the African Speculative Fiction Society since their launch in 2017, helped to widen the content of the genre.
In August 2020, Hope Wabuke, a writer and assistant professor of English and Creative Writing at the University of Nebraska-Lincoln, noted that Afrofuturism, coined by the White critic Mark Dery in 1993, treats African-American themes and concerns in the "context of twentieth-century technoculture," a conception later expanded by Alondra Nelson. Wabuke argued that Dery's conception of Blackness began in 1619 and "is marked solely by the ensuing 400 years of violation by whiteness," which he portrayed as "potentially irreparable." She criticized this definition for lacking the qualities of the "Black American diasporic imagination" and the ability to conceive of "Blackness outside of the Black American diaspora" or independent of Whiteness. "Africanfuturism," she noted, is different because it is, according to Nnedi Okorafor, more deeply rooted in "African culture, history, mythology and point-of-view as it then branches into the Black diaspora, and it does not privilege or center the West," while Africanjujuism is a subcategory of fantasy. Wabuke further explains that Africanfuturism is more specific and rids itself of the "othering of the white gaze and the de facto colonial Western mindset"; freedom from what she calls the "white Western gaze" is, in her account, the main difference "between Afrofuturism and Africanfuturism." She adds that, in her view, Africanfuturism has a different outlook and perspective than "mainstream Western and American science fiction and fantasy" and even Afrofuturism, which is "married to the white Western gaze." Wabuke goes on to explain Africanfuturist and Africanjujuist themes in Okorafor's Who Fears Death and Zahrah the Windseeker, Akwaeke Emezi's Pet, and Buchi Emecheta's The Rape of Shavi.
In February 2021, Aigner Loren Wilson of Tor.com explained the difficulty of finding books in the subgenre because many institutions "treat Africanfuturism and Afrofuturism like the same thing" even though the distinction between them is plain. She said that Africanfuturism is "centered in and about Africa and their people" while Afrofuturism is a sci-fi subcategory about "Black people within the diaspora," often including stories of those outside Africa, including in "colonized Western societies." Another reviewer described Okorafor's Lagoon, which "recounts the story of the arrival of aliens in Nigeria," as an Africanfuturist work that requires a reader who is "actively engaged in co-creating the alternative future that the novel is constructing," meaning that the reader becomes part of the "creative conversation."
Literature and comics
Africanfuturism literature features speculative fiction that narrates events centered on Africa from an African rather than a Western point of view. Works of Africanfuturism literature are still often wrongly categorized as Afrofuturism.
Many works by Nigerian American writer Nnedi Okorafor fall within the Africanfuturism genre, including Who Fears Death, Lagoon, Remote Control, The Book of Phoenix and Noor. She won Hugo and Nebula awards for her novella Binti, the first of the Binti trilogy, which features a native Himba girl from Namibia in space. Tade Thompson won an Arthur C. Clarke Award for his Africanfuturist novel Rosewater, about an alien dome in Nigeria, and Zambian writer Namwali Serpell's The Old Drift won the same award.
In 2020, Africanfuturism: An Anthology, edited by Wole Talabi, was published by Brittle Paper and, as of the end of 2022, is still offered for free on its website in celebration of the 10th anniversary of the publisher, which has been called "the village square of African literature". Gary K. Wolfe reviewed the anthology in February 2021. He credits Nnedi Okorafor with coining "Africanfuturism," noting it describes "more Africa-centered SF," though he is not sure whether her parallel term for fantasy, "Africanjujuism," will catch on. While saying that both are useful, he does not like how they have to "do with the root, not the prefix," since "futurism" describes only a portion of science fiction and fantasy. He still calls the book a "solid anthology," saying it challenges the idea of viewing African science fiction as monolithic. Stories in the book include "Egoli" by T. L. Huchu, "Yat Madit" by Dilman Dila, "Behind Our Irises" by Tlotlo Tsamaase, "Fort Kwame" by Derek Lubangakene, "Rainmaker" by Mazi Nwonwu, "Fruit of the Calabash" by Rafeeat Aliyu, "Lekki Lekki" by Mame Bougouma Diene, and "Sunrise" by Nnedi Okorafor.
When Tor.com outlined a list of stories and books from the genre as of 2021, Tor also highlighted Africanfuturism: An Anthology (edited by Wole Talabi) along with the individual works of Namwali Serpell's The Old Drift, Nnedi Okorafor's Lagoon, Nicky Drayden's The Prey of Gods, Oghenechovwe Donald Ekpeki's Ife-Iyoku, the Tale of Imadeyunuagbon, and Tochi Onyebuchi's War Girls.
In comics, as of the end of 2022, only a few Africanfuturism titles exist. Comic Republic Global Network, a Lagos-based publisher, is prominent in creating Africanfuturist superheroes like Guardian Prime. LaGuardia, a comic book by Nnedi Okorafor, is associated with Africanfuturism.
Films and television
Africanfuturist films remain scarce; films like Black Panther have been criticized by some viewers, who say that their depiction of Africa "differs little from the colonial view". More recent Africanfuturist films include Hello, Rain, Pumzi, and Ratnik. Several Africanfuturism novels have been optioned for live-action adaptation, including Binti and Who Fears Death. In 2020, Walt Disney Studios and the Pan-African company Kugali announced that they would co-produce an Africanfuturist animated science fiction series, Iwájú, inspired by the city of Lagos.
On July 5, 2023, Kizazi Moto: Generation Fire, an Africanfuturist animated anthology short film series, premiered on Disney+. Peter Ramsey served as executive producer, while Tendayi Nyeke and Anthony Silverston were supervising producers, and Triggerfish was the primary studio, working alongside other animation studios in Africa. Each of the ten films is told from an African perspective, on themes such as social media, duality, disability, self-reflection, shared humanity, and other topics, with stories that include time travel, extraterrestrials, and alternate universes.
References
Further reading
University of Calabar, Nigeria's Ojima Sunday Nathaniel & Jonas Egbudu Akung's 2022 article "Afrofuturism and Africanfuturism: Black speculative writings in search of meaning and criteria" in Research Journal in Advanced Humanities, which prefers Okorafor's 'Africanfuturism' on the grounds that Dery's "Afrofuturism is clearly an African-American signification that provides no space for the African imaginary"; the article then seeks "a different set of criteria for evaluation and categorization of both concepts, and proposes five-point criteria—experience, authorship, language, black heroism and technology for their evaluation."
Tor 2021 Guide to Africanfuturism
AfrikaIsWoke's 2021 article "The Difference Between African Futurism & Afrofuturism", which suggests that 'Black' is perhaps the common general term comprising what has been narrowed into 'African' and 'Afro' when used as ethnic or racial terms, proceeding from Zambian queer futurist author Masiyaleti Mbewe's distinction that "differences between African Futurism and Afrofuturism can best be understood as a natural byproduct of the fact that Africans in Africa, and Blacks in the diaspora have different life experiences that stem purely from the fact that they exist in different parts of the world."
University of Kwazulu-Natal's Brett Taylor Banks' 2021 dissertation "Okorafor's Organic Fantasy: An Africanfuturist Approach to Science Fiction and Gender in Lagoon", written by a European-African author, a reminder that Africans today are not only Black. The overview page includes an abstract and a link to a downloadable copy of the dissertation. Notably, Banks "adapt[s] Francis Nyamnjoh's convivial theory (2015) to estrange postmodernism from its western context, providing an African critical vocabulary".
Minna Salami's article "The Liquid Space where African Feminism and African Futurism Meet" in Feminist Africa, 2021, a journal of the Institute of African Studies and the University of Ghana. Salami, a Black Nigerian-Finnish and Swedish cross-cultural author educated at SOAS, University of London, dubs herself "Ms Afropolitan" and has received an Honorary Fellowship in Writing from Hong Kong Baptist University.
City College of New York's Damion Kareem Scott's 2021 chapter "Afrofuturism and Black Futurism: Some Ontological and Semantic Considerations" in Critical Black Futures, ed. P Butler.
Africanfuturism: An Anthology, edited by Wole Talabi, 2020, Brittle Paper, a defining collection of these newly named genres; it has been offered for free on the publisher's website since October 2020, and is still available, in celebration of the 10th anniversary of the publisher, which has been called "the village square of African literature".
Finnish scholar Päivi Väätänen's 2019 academic article "Afro- versus Africanfuturism in Nnedi Okorafor's 'The Magical Negro' and 'Mother of Invention'"
Botswana-born York University scholar Pamela Phatsimo Sunstrum's 2013 article "Afro-mythology and African Futurism: The Politics of Imagining and Methodologies for Contemporary Creative Research Practices" in Paradoxa's special publication No. 25 – Africa SF, ed. Mark Bould of UWE Bristol, prefiguring current diction before 'African' and 'Futurism' were concatenated as an emergent term; titles by her colleagues in this 2013 collection instead use Technofuture, Afrofuturism and AfroSF, and Bould's introduction uses Africa SF.
2019 neologisms
Art movements
Culture of Africa
Science fiction genres
Pan-Africanism
Futurism by region
African literature | 0.761085 | 0.978961 | 0.745073 |
Articulation (sociology) | In sociology, articulation labels the process by which particular classes appropriate cultural forms and practices for their own use. The term appears to have originated from the work of Antonio Gramsci, specifically from his conception of superstructure. Chantal Mouffe, Stuart Hall, and others have adopted or used it.
Articulation (expression) theorizes the relationship between components of social formation or relationship between cultural and political economy. In this theory, cultural forms and practices (Antonio Gramsci's superstructure and Richard Middleton's instance or level of practice) have relative autonomy; socio-economic structures of power do not determine them, but rather they relate to them. "The theory of articulation recognizes the complexity of cultural fields. It preserves a relative autonomy for cultural and ideological elements ... but also insists that those combinatory patterns that are actually constructed do mediate deep, objective patterns in the socio-economic formation, and that the mediation takes place in struggle: the classes fight to articulate together constituents of the cultural repe[r]toire in particular ways so that they are organized in terms of principles or sets of values determined by the position and interests of the class in the prevailing mode of production."
This is because "the relationship between actual culture...on the one hand, and economically determined factors such as class position, on the other, is always problematical, incomplete, and the object of ideological work and struggle. ... Cultural relationships and cultural change are thus not predetermined; rather they are the product of negotiation, imposition, resistance, transformation, and so on....Thus particular cultural forms and practices cannot be attached mechanically or even paradigmatically to particular classes; nor, even, can particular interpretations, valuations, and uses of a single form or practice. In Stuart Hall's words (1981: 238), 'there are no wholly separate "cultures"...attached, in a relation of historical fixity, to specific "whole" classes'. However, "while elements of culture are not directly, eternally, or exclusively tied to specific economically determined factors such as class position, they are determined in the final instance by such factors, through the operation of articulating principles which are tied to class position".
Articulating principles "operate by combining existing elements into new patterns or by attaching new connotations to them". Examples of these processes in musical culture include the re-use of elements of bourgeois marches in labor anthems or the assimilation of liberated (in the Marcusian sense) countercultural 1960s rock into a tradition of bourgeois bohemianism and the combination of elements of black and white working-class music with elements of art music that created countercultural 1960s rock.
Some scholars may prefer the theory of articulation, where "class does not coincide with the sign community", to the theory of homology, where class does coincide with the sign community and where economic forces determine the superstructure. However, "it seems likely that some signifying structures are more easily articulated to the interests of one group than are some others" and cross-connotation, "when two or more different elements are made to connote, symbolize, or evoke each other", can set up "particularly strong articulative relationships". For example: Elvis Presley's linking of elements of "youth rebellion, working-class 'earthiness', and ethnic 'roots', each of which can evoke the others, all of which were articulated together, however briefly, by a moment of popular self-assertion".
References
Further reading
Hall, S. (1978) "Popular culture, politics, and history", in Popular Culture Bulletin, 3, Open University duplicated paper.
Antonio Gramsci
Sociological terminology | 0.778424 | 0.957147 | 0.745066 |
Foregrounding | Foregrounding is a concept in literary studies that concerns making a linguistic utterance (word, clause, phrase, phoneme, etc.) stand out from the surrounding linguistic context, from given literary traditions, or from more general world knowledge. It is "the 'throwing into relief' of the linguistic sign against the background of the norms of ordinary language."
There are two main types of foregrounding: parallelism and deviation. Parallelism can be described as unexpected regularity, while deviation can be seen as unexpected irregularity. As the definition of foregrounding indicates, these are relative concepts. Something can only be unexpectedly regular or irregular within a particular context. This context can be relatively narrow, such as the immediate textual surroundings (referred to as a 'secondary norm'), or wider such as an entire genre (referred to as a 'primary norm'). Foregrounding can occur on all levels of language (phonology, graphology, morphology, lexis, syntax, semantics, and pragmatics). It is generally used to highlight important parts of a text, aid memorability, and/or invite interpretation.
Origin
The term originated in English through the translation by Paul Garvin of the Czech aktualisace (literally "to actualize"), borrowing the terms from Jan Mukařovský of the Prague school of the 1930s. The Prague Structuralists' work was a continuation of the ideas generated by the Russian Formalists, particularly their notion of Defamiliarization ('ostranenie'). Especially the 1917 essay 'Art as Technique' (Iskusstvo kak priem) by Viktor Shklovsky proved to be highly influential in laying the basis of an anthropological theory of literature. To quote from his essay: "And art exists that one may recover the sensation of life; it exists to make one feel things, to make the stone stony. The purpose of art is to impart the sensation of things as they are perceived and not as they are known. The technique of art is to make objects "unfamiliar," to make forms difficult, to increase the difficulty and length of perception because the process of perception is an aesthetic end in itself and must be prolonged."
It took several decades before the Russian Formalists' work was discovered in the West, but in the 1960s some British stylisticians, notably Geoffrey Leech and Roger Fowler, established the notion of 'foregrounding' in the linguistically oriented analysis of literature. Soon a plethora of studies investigated foregrounding features in a multitude of texts, demonstrating its ubiquity in a large variety of literary traditions. These analyses were seen as evidence of a special literary register, which was called, also after the Russian Formalists, 'literariness' (literaturnost').
Evidence Supporting Foregrounding Theory
The attempt to support foregrounding theory with real reader responses started with Willie van Peer in 1986, and since then many studies have validated the theory's predictions. In 1994 Miall and Kuiken had participants read three short stories one sentence at a time and rank each sentence for strikingness and affect. Sentences with more foregrounding devices were judged by readers as more striking and more emotional, and they also led to slower reading times. These findings were independent of the readers' previous experience with reading literature, but other experiments found foregrounding effects that seem to be connected to experience. Some evidence suggests a difference between experienced and inexperienced readers in second readings of a literary text rich in foregrounding devices: for experienced readers, evaluation improves between the first and second readings. This effect was initially found by Dixon, Bortolussi, Twilley and Leung in 1993 for the story Emma Zunz by Jorge Luis Borges, and was later found by Hakemulder and his colleagues for other texts as well. However, recent replication attempts by Kuijpers and Hakemulder did not get the same results. They found that the main reason for an improvement in evaluation between readings was a better understanding of the story. Another line of research suggests that experience affects the reader's tendency to engage in foregrounding. In an experiment combining eye tracking and retrospective think-aloud interviews, Harash found that when inexperienced readers encounter a challenging stylistic device they are more prone to use shallow processing and not to start a foregrounding process, and that experienced readers have a higher tendency both to start a foregrounding process and to finish it successfully.
Foregrounding also appears to play some role in increasing empathic understanding for people in situations similar to those of the characters in a story one has just read. Koopman gave subjects one of three versions of an excerpt from a literary novel about the loss of a child to read: the original version, a manipulated version "without imagery", and a version "without foregrounding". Results showed that readers of the original version showed higher empathy for people who are grieving than those who had read the version "without foregrounding".
Example
For example, the last line of a poem with a consistent metre may be foregrounded by changing the number of syllables it contains. This would be an example of a deviation from a secondary norm. In the following poem by E. E. Cummings, there are two types of deviation:
light's lives lurch
a once world quickly from rises
army the gradual of unbeing fro
on stiffening greenly air and to ghosts go
drift slippery hands tease slim float twitter faces
Only stand with me, love! against these its
until you are, and until i am dreams...
Firstly, most of the poem deviates from 'normal' language (primary deviation). In addition, there is secondary deviation in that the penultimate line is unexpectedly different from the rest of the poem. Nursery rhymes, adverts and slogans often exhibit parallelism in the form of repetition and rhyme, but parallelism can also occur over longer texts. For example, jokes are often built on a mixture of parallelism and deviation. They often consist of three parts or characters. The first two are very similar (parallelism), and the third one starts out similar, but our expectations are thwarted when it turns out differently in the end (deviation).
See also
Defamiliarization
Glossary of rhetorical terms
Rhetorical device
Stylistics (linguistics)
Effects of foregrounding - research coalition
References
Linguistics
Rhetoric
Stylistics | 0.763581 | 0.97575 | 0.745064 |
Authentic learning | In education, authentic learning is an instructional approach that allows students to explore, discuss, and meaningfully construct concepts and relationships in contexts that involve real-world problems and projects that are relevant to the learner. It refers to a "wide variety of educational and instructional techniques focused on connecting what students are taught in school to real-world issues, problems, and applications. The basic idea is that students are more likely to be interested in what they are learning, more motivated to learn new concepts and skills, and better prepared to succeed in college, careers, and adulthood if what they are learning mirrors real-life contexts, equips them with practical and useful skills, and addresses topics that are relevant and applicable to their lives outside of school."
Authentic instruction will take on a much different form than traditional teaching methods. In the traditional classroom, students take a passive role in the learning process. Knowledge is considered to be a collection of facts and procedures that are transmitted from the teacher to the student. In this view, the goal of education is to possess a large collection of these facts and procedures. Authentic learning, on the other hand, takes a constructivist approach, in which learning is an active process. Teachers provide opportunities for students to construct their own knowledge through engaging in self-directed inquiry, problem solving, critical thinking, and reflections in real-world contexts. This knowledge construction is heavily influenced by the student's prior knowledge and experiences, as well as by the characteristics that shape the learning environment, such as values, expectations, rewards, and sanctions. Education is more student-centered. Students no longer simply memorize facts in abstract and artificial situations, but they experience and apply information in ways that are grounded in reality.
Characteristics
There is no definitive description of authentic learning. Educators must develop their own interpretations of what creates meaning for the students in their classrooms. However, the literature suggests that there are several characteristics of authentic learning. It is important to note that authentic learning tasks do not have to have all the characteristics. They can be thought of as being on a spectrum, with tasks being more or less authentic. The characteristics of authentic learning include the following:
Authentic learning is centered on authentic, relevant, real-world tasks that are of interest to the learners.
Students are actively engaged in exploration and inquiry.
Learning, most often, is interdisciplinary. It requires integration of content from several disciplines and leads to outcomes beyond the domain-specific learning outcomes.
Learning is closely connected to the world beyond the walls of the classroom.
Students become engaged in complex tasks and higher-order thinking skills, such as analyzing, synthesizing, designing, manipulating, and evaluating information.
Learning begins with a question or problem that must not be constricting, so that the student can construct their own response and inquiry. The outcome of the learning experience cannot be predetermined.
Students produce a product that can be shared with an audience outside the classroom. These products have value in their own right, rather than simply for earning a grade.
The resulting products are concrete allowing them to be shared and critiqued; this feedback allows the learner to be reflective and deepen their learning.
Learning is student driven, with tutors, peers, teachers, parents, and outside experts all assisting and coaching in the learning process.
Learners employ instructional scaffolding techniques at critical times.
Students have opportunities for social discourse, collaboration, and reflection.
Ample resources are available.
Assessment of authentic learning is integrated seamlessly within the learning task in order to reflect similar, real world assessments. This is known as authentic assessment and is in contrast to traditional learning assessments in which an exam is given after the knowledge or skills have hopefully been acquired.
Authentic learning provides students with the opportunity to examine the problem from different perspectives, which allows for competing solutions and a diversity of outcomes instead of one single correct answer.
Students are provided the opportunity for articulation of their learning process and/or final learning product.
Five standards
While there has been much attention given to educational standards for curriculum and assessment, "the standards for instruction tend to focus on procedural and technical aspects, with little attention to more fundamental standards of quality." The challenge is not simply to adopt innovative teaching techniques but to give students the opportunity to use their minds well and to provide students with instruction that has meaning or value outside of achieving success in school.
In order to address this challenge, a framework consisting of five standards of authentic instruction has been developed by Wisconsin's Center on Organization and Restructuring of Schools. This framework can be a valuable tool for both researchers and teachers. It provides "a set of standards through which to view assignments, instructional activities, and the dialogue between teacher and students and students with one another."
Teachers can use the framework to generate questions, clarify goals, and critique their teaching. Each standard can be assessed on a scale of one to five rather than a categorical yes or no variable. "The five standards are higher-order thinking, depth of knowledge, connectedness to the world beyond the classroom, substantive conversation, and social support for student achievement."
Higher-Order Thinking: This scale measures the degree to which students use higher-order thinking skills. Higher-order thinking requires students to move beyond simple recall of facts to the more complex task of manipulating information and ideas in ways that transform their meaning and implications, such as when students synthesize, generalize, explain, hypothesize, or arrive at some conclusion or interpretation.
Depth of Knowledge: This scale assesses students' depth of knowledge and understanding. Knowledge is considered deep when students are able to "make clear distinctions, develop arguments, solve problems, construct explanations, and otherwise work with relatively complex understandings." Rather than emphasizing large quantities of fragmented information, instruction covers fewer topics in systematic and connected ways which leads to deeper understanding.
Connectedness to the World: This scale measures the extent to which the instruction has value and meaning beyond the instructional context. Instruction can exhibit connectedness when students address real-world public problems or when they use personal experiences as a context for applying knowledge.
Substantive Conversation: This scale assesses the extent of communication to learn and understand the substance of a subject. High levels of substantive conversation are indicated by three features: considerable interaction about the subject matter which includes evidence of higher-order thinking, sharing of ideas that are not scripted or controlled, and dialogue that builds on participants' ideas to promote improved collective understanding of a theme or topic.
Social Support for Student Achievement: The social support scale measures the culture of the learning community. Social support is high in classes where there are high expectations for all students, a climate of mutual respect, and inclusion of all students in the learning process. Contributions from all students are welcomed and valued.
Examples
There are several authentic learning practices in which students may participate. These are a few examples:
Simulation-Based Learning: Students engage in simulations and role-playing in order to be put in situations where the student has to actively participate in the decision making of a project. This helps in "developing valuable communication, collaboration, and leadership skills that would help the student succeed as a professional in the field he/she is studying." Learning through simulation and role-playing has been used to train flight attendants, fire fighters, and medical personnel to name a few.
Student-Created Media: Student-created media focuses on using various technologies to "create videos, design websites, produce animations, virtual reconstructions, and create photographs." In addition to gaining valuable experience in working with a range of technologies, "students have also improved their reading comprehension, writing skills, and their abilities to plan, analyze, and interpret results as they progress through the media project."
Inquiry-Based Learning: Inquiry-based learning starts by posing questions, problems or scenarios rather than simply presenting material to students. Students identify and research issues and questions to develop their knowledge or solutions. Inquiry-based learning is generally used in field-work, case studies, investigations, individual and group projects, and research projects.
Peer-Based Evaluation: In peer based evaluation students are given the opportunity to analyze, critique, and provide constructive feedback on the assignments of their peers. Through this process, they are exposed to different perspectives on the topic being studied, giving them a deeper understanding.
Working with Remote Instruments: Specialized software can provide students with opportunities they might not have otherwise. For example, "various software packages produce similar results that students working in a fully equipped lab might receive. By interpreting the software based results students are able to apply theory to practice as they interpret the data that would otherwise not be available to them."
Working with Research Data: Students collect their own data or use data collected from researchers to conduct their own investigations.
Reflecting and Documenting Achievements: The importance of metacognition in the learning process is well-documented. Giving students the opportunity to reflect upon and monitor their learning is essential. Journals, portfolios, and electronic portfolios are examples of authentic learning tasks designed to showcase the student's work as well as give the student a means to reflect back on his/her learning over time.
Project-Based Learning: Begins with a problem or question that is the starting point for inquiry; results in a single product, or a series of products or artifacts, created as a result of or solution to the inquiry.
Benefits
Educational research shows that authentic learning is an effective learning approach to preparing students for work in the 21st century. By situating knowledge within relevant contexts, learning is enhanced in all four domains of learning: cognitive (knowledge), affective (attitudes), psychomotor (skills), and psychosocial (social skills). Some of the benefits of authentic learning include the following:
Students are more motivated and more likely to be interested in what they are learning when it is relevant and applicable to their lives outside of school.
Students are better prepared to succeed in college, careers, and adulthood.
Students learn to assimilate and connect knowledge that is unfamiliar.
Students are exposed to different settings, activities, and perspectives.
Transfer and application of theoretical knowledge to the world outside of the classroom is enhanced.
Students have opportunities to collaborate, produce products, and to practice problem solving and professional skills.
Students have opportunities to exercise professional judgments in a safe environment.
Students practice higher-order thinking skills.
Students develop patience to follow longer arguments.
Students develop flexibility to work across disciplinary and cultural boundaries.
References
Pedagogy
Educational technology
Applied learning
Adaptationism

Adaptationism is a scientific perspective on evolution that focuses on accounting for the products of evolution as collections of adaptive traits, each a product of natural selection with some adaptive rationale or raison d'être.
A formal alternative would be to look at the products of evolution as the result of neutral evolution, in terms of structural constraints, or in terms of a mixture of factors including (but not limited to) natural selection.
The most obvious justification for an adaptationist perspective is the belief that traits are, in fact, always adaptations built by natural selection for their functional role. This position is called "empirical adaptationism" by Godfrey-Smith. However, Godfrey-Smith also identifies "methodological" and "explanatory" flavors of adaptationism, and argues that all three are found in the evolutionary literature.
Although adaptationism has always existed (the view that the features of organisms are wonderfully adapted predates evolutionary thinking) and was sometimes criticized for its "Panglossian" excesses (e.g., by Bateson or Haldane), concerns about the role of adaptationism in scientific research did not become a major issue of debate until evolutionary biologists Stephen Jay Gould and Richard Lewontin penned a famous critique, "The Spandrels of San Marco and the Panglossian Paradigm".
According to Gould and Lewontin, evolutionary biologists had a habit of proposing adaptive explanations for any trait by default without considering non-adaptive alternatives, and often by conflating products of adaptation with the process of natural selection. They identified neutral evolution and developmental constraints as potentially important non-adaptive factors and called for alternative research agendas.
This critique provoked defenses by Mayr, Reeve and Sherman and others, who argued that the adaptationist research program was unquestionably highly successful, and that the causal and methodological basis for considering alternatives was weak. The "Spandrels paper" (as it came to be known) also added fuel to the emergence of an alternative "evo-devo" agenda focused on developmental "constraints".
Today, molecular evolutionists often cite neutral evolution as the null hypothesis in evolutionary studies, i.e., offering a direct contrast to the adaptationist approach. Constructive neutral evolution has been suggested as a means by which complex systems emerge through neutral transitions, and has been invoked to help understand the origins of a wide variety of features from the spliceosome of eukaryotes to the interdependency and simplification widespread in microbial communities.
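The neutral null model can be made concrete with a toy Wright–Fisher simulation (an illustrative sketch, not drawn from the literature cited above): under neutrality (s = 0) an allele's frequency is a pure random walk driven by sampling, while a nonzero selection coefficient biases the walk toward fixation.

```python
import random

def wright_fisher(p, n, generations, s=0.0, seed=1):
    """Track an allele's frequency p in a haploid population of size n.
    s is the selection coefficient: s = 0 gives the neutral null model."""
    rng = random.Random(seed)
    for _ in range(generations):
        # Selection shifts the expected frequency before reproduction...
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...and drift is the binomial sampling of the next generation.
        p = sum(rng.random() < w for _ in range(n)) / n
        if p in (0.0, 1.0):  # allele fixed or lost
            break
    return p

neutral = wright_fisher(0.5, 500, 200, s=0.0)    # pure drift
selected = wright_fisher(0.5, 500, 200, s=0.05)  # weak positive selection
print(neutral, selected)
```

Comparing an observed pattern against many such neutral runs is, in spirit, how neutrality serves as the null hypothesis: only departures too large to be drift alone call for an adaptive explanation.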
Today, adaptationism is associated with the "reverse engineering" approach. Richard Dawkins noted in The Blind Watchmaker that evolution, an impersonal process, produces organisms that give the appearance of having been designed for a purpose. This observation justifies looking for the function of traits observed in biological organisms. This reverse engineering is used in disciplines such as psychology and economics to explain the features of human cognition. Reverse engineering can, in particular, help explain cognitive biases as adaptive solutions that assist individuals in decision-making when considering constraints such as the cost of processing information. This approach is valuable in understanding how seemingly irrational behaviors may, in fact, be optimal given the environmental and informational limitations under which human cognition operates.
Overview
Criteria to identify a trait as an adaptation
Adaptationism is an approach to studying the evolution of form and function. It attempts to frame the existence and persistence of traits, assuming that each of them arose independently and improved the reproductive success of the organism's ancestors.
A trait is an adaptation if it fulfils the following criteria:
The trait is a variation of an earlier form.
The trait is heritable through the transmission of genes.
The trait enhances reproductive success.
Constraints on the power of evolution
Genetic constraints
Genetic reality provides constraints on the power of random mutation followed by natural selection.
With pleiotropy, some genes control multiple traits, so that adaptation of one trait is impeded by effects on other traits that are not necessarily adaptive. Epistasis is a case where the regulation or expression of one gene depends on one or several others; this is true of a good number of genes, though to differing extents. The reason this muddies responses to selection is that selecting for an epistatically based trait means a selected allele can also affect the genes that depend on it, leading to the coregulation of other traits for reasons other than each of those traits having an adaptive quality. As with pleiotropy, traits can reach fixation in a population as a by-product of selection for another.
In the context of development the difference between pleiotropy and epistasis is not so clear, but at the genetic level the distinction is clearer. Since such traits are by-products of others, it can be said that they evolved, but not that they necessarily represent adaptations.
Polygenic traits are controlled by a number of separate genes. Many traits are polygenic, for example human height. To drastically change a polygenic trait is likely to require multiple changes.
Anatomical constraints
Anatomical constraints are features of an organism's anatomy that are prevented from change by being constrained in some way. When organisms diverge from a common ancestor and inherit certain characteristics which become modified by natural selection of mutant phenotypes, it is as if some traits are locked in place and are unable to change in certain ways. Some textbook anatomical constraints often include examples of structures that connect parts of the body together through a physical link.
These links are hard if not impossible to break because evolution usually requires that anatomy be formed by small consecutive modifications in populations through generations. In his book, Why We Get Sick, Randolph Nesse uses the "blind spot" in the vertebrate eye (caused by the nerve fibers running through the retina) as an example of this. He argues that natural selection has come up with an elaborate work-around of the eyes wobbling back-and-forth to correct for this, but vertebrates have not found the solution embodied in cephalopod eyes, where the optic nerve does not interrupt the view. See also: Evolution of the eye.
Another example is the cranial nerves in tetrapods. In early vertebrates such as sharks, skates, and rays (collectively Chondrichthyes), the cranial nerves run from the part of the brain that interprets sensory information and radiate out towards the organs that produce those sensations. In tetrapods, however, and mammals in particular, the nerves take an elaborate winding path through the cranium around structures that evolved after the common ancestor with sharks.
Debate with structuralism
Adaptationism is sometimes characterized by critics as an unsubstantiated assumption that all or most traits are optimal adaptations. Structuralist critics (most notably Richard Lewontin and Stephen Jay Gould in their "spandrel" paper) contend that the adaptationists have overemphasized the power of natural selection to shape individual traits to an evolutionary optimum. Adaptationists are sometimes accused by their critics of using ad hoc "just-so stories". The critics, in turn, have been accused of misrepresentation (Straw man argumentation), rather than attacking the actual statements of supposed adaptationists.
Adaptationist researchers respond by asserting that they, too, follow George Williams' depiction of adaptation as an "onerous concept" that should only be applied in light of strong evidence. This evidence can be generally characterized as the successful prediction of novel phenomena based on the hypothesis that design details of adaptations should fit a complex evolved design to respond to a specific set of selection pressures. In evolutionary psychology, researchers such as Leda Cosmides, John Tooby, and David Buss contend that the bulk of research findings that were uniquely predicted through adaptationist hypothesizing comprise evidence of the methods' validity.
Purpose and function
There are philosophical issues with the way biologists speak of function, effectively invoking teleology, the purpose of an adaptation.
Function
To say something has a function is to say something about what it does for the organism. It also says something about its history: how it has come about. A heart pumps blood: that is its function. It also emits sound, which is considered to be an ancillary side-effect, not its function. The heart has a history (which may be well or poorly understood), and that history is about how natural selection formed and maintained the heart as a pump. Every aspect of an organism that has a function has a history. Now, an adaptation must have a functional history: therefore we expect it must have undergone selection caused by relative survival in its habitat. It would be quite wrong to use the word adaptation about a trait which arose as a by-product.
Teleology
Teleology was introduced into biology by Aristotle to describe the adaptedness of organisms. Biologists have found the implications of purposefulness awkward as they suggest supernatural intention, an aspect of Plato's thinking which Aristotle rejected. A similar term, teleonomy, grew out of cybernetics and self-organising systems and was used by biologists of the 1960s such as Ernst Mayr and George C. Williams as a less loaded alternative. On the one hand, adaptation is obviously purposeful: natural selection chooses what works and eliminates what does not. On the other hand, biologists want to deny conscious purpose in evolution. The dilemma gave rise to a famous joke by the evolutionary biologist Haldane: "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public." David Hull commented that Haldane's mistress "has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it. The only concession which they make to its disreputable past is to rename it 'teleonomy'."
See also
Adaptive evolution in the human genome
Beneficial acclimation hypothesis
Constructive neutral evolution
Evolutionary failure
Exaptation
Gene-centered view of evolution
Neutral theory of molecular evolution
Vitalism
References
Sources
External links
Information from "Deep Ethology" course website, by Neil Greenberg
Tooby & Cosmides comments on Maynard Smith's New York Review of Books piece on Gould et al.
Evolutionary biology
Modern synthesis (20th century)
Design principles

Design principles are propositions that, when applied to design elements, form a design.
Unity/harmony
According to Alex White, author of The Elements of Graphic Design, to achieve visual unity is a main goal of graphic design. When all elements are in agreement, a design is considered unified. No individual part is viewed as more important than the whole design. A good balance between unity and variety must be established to avoid a chaotic or a lifeless design.
Methods
Perspective: sense of distance between elements.
Similarity: ability to seem repeatable with other elements.
Continuation: the sense of having a line or pattern extend.
Repetition: elements being copied or mimicked numerous times.
Rhythm: is achieved when recurring position, size, color, and use of a graphic element has a focal point interruption.
Altering the basic theme achieves unity and helps keep interest.
Balance
It is a state of equalized tension and equilibrium, which may not always be calm.
Types of balance in visual design
Symmetry
Asymmetrical balance produces an informal balance that is attention attracting and dynamic.
Radial balance is arranged around a central element. The elements placed in a radial balance seem to 'radiate' out from a central point in a circular fashion.
Overall is a mosaic form of balance which normally arises from too many elements being put on a page. Due to the lack of hierarchy and contrast, this form of balance can look noisy but sometimes quiet.
Hierarchy/Dominance/Emphasis
A good design contains elements that lead the reader through each element in order of its significance. The type and images should be expressed starting from most important to the least important. Dominance is created by contrasting size, positioning, color, style, or shape. The focal point should dominate the design with scale and contrast without sacrificing the unity of the whole.
Scale/proportion
Using the relative size of elements against each other can attract attention to a focal point. When elements are designed larger than life, scale is being used to show drama. Scale can be considered both objectively and subjectively. Objectively, scale refers to the exact, literal physical dimensions of an object in the real world, or to the correlation between a representation and the real thing. Printed maps are good examples, as they have an exact scale representing the real physical world. Subjectively, however, scale refers to one's impression of an object's size. A representation "lacks scale" when there is no clear cue linking it to lived experience and giving it a physical identity. As an example, a book may have a grand or intimate scale based on how it relates to our own body or our knowledge of other books.
Scale in design
A printed piece can be as small as a postage stamp or as large as a billboard. A logo should be legible both in tiny dimensions and from a distance on a screen. Some projects have their specified scale designed for a certain medium or site, while others need to work in various sizes designed for reproduction in multiple scales. No matter what size the design work is, it should have its own sense of scale. Increasing an element's scale in a design piece increases its value in terms of hierarchy and causes it to be seen first among the other elements, while decreasing an element's scale reduces its value.
Similarity and contrast
Planning a consistent and similar design is an important aspect of a designer's work to make their focal point visible. Too much similarity is boring, but without similarity important elements will not exist, and an image without contrast is uneventful, so the key is to find the balance between similarity and contrast.
Similar environment
There are several ways to develop a similar environment:
Build a unique internal organization structure.
Manipulate shapes of images and text to correlate together.
Express continuity from page to page in publications. Items to watch include headers, themes, borders, and spaces.
Develop a style manual and adhere to it.
Contrasts
Space
Filled / Empty
Near / Far
2-D / 3-D
Position
Left / Right
Isolated / Grouped
Centered / Off-Center
Top / Bottom
Form
Simple / Complex
Beauty / Ugly
Whole / Broken
Direction
Stability / Movement
Structure
Organized / Chaotic
Mechanical / Hand-Drawn
Size
Large / Small
Deep / Shallow
Fat / Thin
Color
Grey scale / Color
Black & White / Color
Light / Dark
Texture
Fine / Coarse
Smooth / Rough
Sharp / Dull
Density
Transparent / Opaque
Thick / Thin
Liquid / Solid
Gravity
Light / Heavy
Stable / Unstable
Movement is the path the viewer's eye takes through the artwork, often to focal areas. Such movement can be directed along lines, edges, shapes, and colors within the artwork, and more.
See also
Composition (visual arts)
Gestalt laws of grouping
Interior design
Landscape design
Pattern language
Elements of art
Principles of art
Color theory
Notes
References
External links
Art, Design, and Visual Thinking. An online, interactive textbook by Charlotte Jirousek at Cornell University.
The 6 Principles of Design
Design
Composition in visual art
Mise en abyme (in literature and other media)

Mise en abyme (also mise-en-abîme, French "put in the abyss", [miːz ɒn əˈbɪːm]) is a transgeneric and transmedial technique that can occur in any literary genre, in comics, film, painting or other media. It is a form of similarity and/or repetition, and hence a variant of self-reference. Mise en abyme presupposes at least two hierarchically different levels. A subordinate level 'mirrors' content or formal elements of a primary level.
'Mirroring' can mean repetition, similarity or even, to a certain extent, contrast. The elements thus ‘mirrored’ can refer to form (e.g. a painting within a painting) or content (e.g. a theme occurring on different levels).
Mise en abyme can be differentiated according to its quantitative, qualitative and functional features. For instance, ‘mirroring’ can occur once, several times (on a lower and yet on a lower and so on level) or (theoretically) an infinite number of times (as in the reflection of an object between two mirrors, which creates the impression of a visual abyss). Further, mise en abyme can either be partial or complete (i.e. mirror part or all of the upper level) and either probable, improbable or paradoxical. It can contribute to the understanding of a work, or lay bare its artificiality.
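The quantitative possibilities just described (single, multiple, or notionally infinite mirroring) have a natural analogue in recursion. The following sketch is only an illustrative analogy, not drawn from the source: each level of embedding reproduces the 'work' one level down, with the depth parameter playing the role of the number of mirrorings.

```python
def embed(work, depth):
    """Return a nested description of a work containing a copy of itself,
    'mirrored' depth times (a complete mise en abyme of the label)."""
    if depth == 0:
        return work          # base case: no further embedding
    return f"{work}[{embed(work, depth - 1)}]"

print(embed("play", 0))  # 'play'             -> no embedding
print(embed("play", 2))  # 'play[play[play]]' -> a play within a play within a play
```

An unbounded depth would correspond to the visual abyss between two facing mirrors; in practice, as in art, the regress is cut off after finitely many levels.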
History
The term mise en abyme derives from heraldry. It describes the appearance of a smaller shield in the center of a larger one; see for example the Royal coat of arms of the United Kingdom (in the form used between 1801 and 1837). André Gide, in an 1893 entry into his journal, was the first to write about mise en abyme in connection with describing self-reflexive embeddings in various forms of art.
The term entered the lexicon through Claude-Edmonde Magny, who described the aesthetic effects of the device. Jean Ricardou developed the concept further by outlining some of its functions. On the one hand it may confuse and disrupt the work in question, but on the other hand it may enhance understanding, e.g. by pointing out the work's true meaning or intention. Lucien Dällenbach continued the research in a magisterial study by classifying and describing various forms and functions of mise en abyme.
Examples
Mise en abyme is not restricted to a specific kind of literature or art. The recursive appearance of a novel within a novel, a play within a play, a picture within a picture, or a film within a film form mises en abyme that can have many different effects on the perception and understanding of the literary text or work of art.
Painting: Marriage à-la mode 4: The Toilette by William Hogarth
Marriage à-la-Mode (1743–45) is a narrative series of six socially and morally critical paintings by William Hogarth. In the fourth painting, Marriage à-la-Mode 4: The Toilette, an example of mise en abyme can be found. The man on the right is not the woman's husband; however, they are clearly flirting and are possibly arranging a meeting at night. The paintings above their heads depict sexual scenes, foreshadowing what is going to happen.
Drama: Hamlet by William Shakespeare
Another example of mise en abyme would be a novel within a novel, or a play within a play. In William Shakespeare's Hamlet the title character stages a play within the play (“The Murder of Gonzago”) to find out whether his uncle really murdered his father as the ghost of his father has told him. It is not only a formal mirroring of a theatrical situation (a play within a play) but also a mirroring of a content element, namely of what supposedly had happened in the pre-history. Hamlet wants to find out the truth by instructing the actors to perform a play which contains striking similarities to the alleged murder of Hamlet's father. The embedded performance thus includes details from the broader plot, which illuminates a thematic aspect of the play itself.
Short story: "The Fall of the House of Usher" by Edgar A. Poe
The Fall of the House of Usher by Edgar Allan Poe sports a particularly noteworthy example of mise en abyme, a story within a story. Towards the end of the story, the narrator begins to read aloud parts of an antique volume entitled Mad Trist by Sir Launcelot Canning. At first, the narrator only vaguely realizes that the sounds occurring in the embedded fiction Mad Trist can really be heard by him. The embedded story is subsequently more and more intertwined with the events that are happening in the embedding story, until, in a climactic scene, a supposedly dead and buried member of the House of Usher (Madeline), is about to enter the room where the recital takes place when both she and her incestuously beloved brother die in a final embrace. This fall (and the partial mirroring of the scene in Mad Trist) anticipates the final fall of the House of Usher, which sinks into the tarn surrounding the building.
TV series: The Simpsons
In The Simpsons the characters frequently watch television: characters of a TV series are thus watching TV themselves. This act is a mise en abyme, as we see a film within a film. However, if they started discussing what they are watching it would also be an instance of meta-reference (or rather the mise en abyme would, as it sometimes does, have triggered metareferential reflections). Yet, as a rule, mise en abyme merely ‘mirrors’ elements from a superior level on a subordinate one, but does not necessarily trigger an analysis of them.
List of modern media
Here is a list of modern media that feature a mise en abyme at the core of their plots:
Books
Ready Player One
Most of the LitRPG genre
TV series
The Simpsons
Movies
The Matrix
Tron
Inception
Video Games
There is No Game: Wrong Dimension
Narita Boy
Hack 'n' Slash
One Dreamer
Data Hacker: Initiation series
Patrick's Parabox and World en Abyme
Perspective
Potential problems
Mise en abyme can be easily confused with metalepsis and metareference. These terms describe related features, as mise en abyme can be a springboard to metalepsis if there is a paradoxical confusion of the levels involved. If the artificiality of the mirroring device or related issues are foregrounded or discussed, mise en abyme can also be conducive to metareference.
To summarise, mise en abyme is a form of similarity and repetition, and hence a variant of self-reference, that is not necessarily discussed within the medium in which it appears; it simply occurs. If the occurrence is discussed, or if mise en abyme triggers reflections on the respective medium or on the construction of the text, for example, then mise en abyme is combined with metareference.
References
Metafictional techniques
Interpersonal circumplex

The interpersonal circle or interpersonal circumplex is a model for conceptualizing, organizing, and assessing interpersonal behavior, traits, and motives. The interpersonal circumplex is defined by two orthogonal axes: a vertical axis (of status, dominance, power, ambitiousness, assertiveness, or control) and a horizontal axis (of agreeableness, compassion, nurturance, solidarity, friendliness, warmth, affiliation, or love). In recent years, it has become conventional to identify the vertical and horizontal axes with the broad constructs of agency and communion. Thus, each point in the interpersonal circumplex space can be specified as a weighted combination of agency and communion.
Character traits
Placing a person near one of the poles of the axes implies that the person tends to convey clear or strong messages (of warmth, hostility, dominance or submissiveness). Conversely, placing a person at the midpoint of the agentic dimension implies the person conveys neither dominance nor submissiveness (and pulls neither dominance nor submissiveness from others). Likewise, placing a person at the midpoint of the communal dimension implies the person conveys neither warmth nor hostility (and pulls neither warmth nor hostility from others).
The interpersonal circumplex can be divided into broad segments (such as fourths) or narrow segments (such as sixteenths), but currently most interpersonal circumplex inventories partition the circle into eight octants. As one moves around the circle, each octant reflects a progressive blend of the two axial dimensions.
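The claim that each octant is a progressive blend of the two axes can be made concrete with coordinates. In the sketch below (illustrative only, not a standardized scoring procedure), communion is the x-axis and agency the y-axis; the angle of a point selects one of eight 45-degree sectors, labeled here with Wiggins' conventional octant letters.

```python
import math

# Conventional octant labels, counter-clockwise from warm-agreeable (LM)
# at 0 degrees through assured-dominant (PA) at 90 degrees. The labels
# follow Wiggins' convention; treat the mapping as illustrative.
OCTANTS = ["LM warm-agreeable", "NO gregarious-extraverted",
           "PA assured-dominant", "BC arrogant-calculating",
           "DE cold-hearted", "FG aloof-introverted",
           "HI unassured-submissive", "JK unassuming-ingenuous"]

def octant(communion, agency):
    """Map a (communion, agency) point to one of eight 45-degree sectors."""
    angle = math.degrees(math.atan2(agency, communion)) % 360
    # Sectors are centered on multiples of 45 degrees, so offset by 22.5.
    return OCTANTS[int(((angle + 22.5) % 360) // 45)]

print(octant(1.0, 0.0))   # high communion only  -> LM warm-agreeable
print(octant(0.0, 1.0))   # high agency only     -> PA assured-dominant
print(octant(-0.7, 0.7))  # hostile + dominant   -> BC arrogant-calculating
```

The midpoint (0, 0) conveys neither warmth nor dominance, matching the description above of a person placed at the center of the circle.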
There exist a variety of psychological tests designed to measure these eight interpersonal circumplex octants. For example, the Interpersonal Adjective Scales (IAS; Wiggins, 1995) is a measure of interpersonal traits associated with each octant of the interpersonal circumplex. The Inventory of Interpersonal Problems (IIP; Horowitz, Alden, Wiggins, & Pincus, 2000) is a measure of problems associated with each octant of the interpersonal circumplex, whereas the Inventory of Interpersonal Strengths (IIS; Hatcher & Rogers, 2009) is a measure of strengths associated with each octant. The Circumplex Scales of Interpersonal Values (CSIV; Locke, 2000) is a 64-item measure of the value individuals place on interpersonal experiences associated with each octant of the interpersonal circumplex. The Person's Relating to Others Questionnaire (PROQ), the latest version being the PROQ3 is a 48-item measure developed by the British doctor John Birtchnell. Finally, the Impact Message Inventory-Circumplex (IMI; Kiesler, Schmidt, & Wagner, 1997) assesses the interpersonal dispositions of a target person, not by asking the target person directly, but by assessing the feelings, thoughts, and behaviors that the target evokes in another person. Since interpersonal dispositions are key features of most personality disorders, interpersonal circumplex measures can be useful tools for identifying or differentiating personality disorders (Kiesler, 1996; Leary, 1957; Locke, 2006).
History
Originally coined the Leary Circumplex or Leary Circle after Timothy Leary, it is defined as "a two-dimensional representation of personality organized around two major axes".
In the 20th century, there were a number of efforts by personality psychologists to create comprehensive taxonomies to describe the most important and fundamental traits of human nature. Leary would later become famous for his controversial LSD experiments at Harvard. His circumplex, developed in 1957, is a circular continuum of personality formed from the intersection of two base axes: Power and Love. The opposing sides of the power axis are dominance and submission, while the opposing sides of the love axis are love and hate (Wiggins, 1996).
Leary argued that all other dimensions of personality can be viewed as a blending of these two axes. For example, a person who is stubborn and inflexible in their personal relationships might graph her personality somewhere on the arc between dominance and love. However, a person who exhibits passive–aggressive tendencies might find herself best described on the arc between submission and hate. The main idea of the Leary Circumplex is that each and every human trait can be mapped as a vector coordinate within this circle.
Furthermore, the Leary Circumplex also represents a kind of bull's eye of healthy psychological adjustment. Theoretically speaking, the most well-adjusted person on the planet could have their personality mapped at the exact center of the circumplex, right at the intersection of the two axes, while individuals exhibiting extremes in personality would be located on the circumference of the circle.
The Leary Circumplex offers three major benefits as a taxonomy. It offers a map of interpersonal traits within a geometric circle. It allows for comparison of different traits within the system. It provides a scale of healthy and unhealthy expressions of each trait.
See also
Circumplex model of group tasks
Interpersonal reflex
Lorna Smith Benjamin – creator of the similar Structural Analysis of Social Behavior (SASB) circumplex model
Personality psychology
Unmitigated communion
References
Cited
General
Hatcher, R.L., & Rogers, D.T. (2009). Development and validation of a measure of interpersonal strengths: The Inventory of Interpersonal Strengths. Psychological Assessment, 21, 544-569.
Horowitz, L.M. (2004). Interpersonal Foundations of Psychopathology. Washington, DC: American Psychological Association.
Horowitz, L.M., Alden, L.E., Wiggins, J.S., & Pincus, A.L. (2000). Inventory of Interpersonal Problems Manual. Odessa, FL: The Psychological Corporation.
Kiesler, D.J. (1996). Contemporary Interpersonal Theory and Research: Personality, psychopathology and psychotherapy. New York: Wiley.
Kiesler, D.J., Schmidt, J.A. & Wagner, C.C. (1997). A circumplex inventory of impact messages: An operational bridge between emotional and interpersonal behavior. In R. Plutchik & H.R. Conte (Eds.), Circumplex models of personality and emotions (pp. 221–244). Washington, DC: American Psychological Association.
Leary, T. (1957). Interpersonal Diagnosis of Personality. New York: Ronald Press.
Locke, K.D. (2000). Circumplex Scales of Interpersonal Values: Reliability, validity, and applicability to interpersonal problems and personality disorders. Journal of Personality Assessment, 75, 249–267.
Locke, K.D. (2006). Interpersonal circumplex measures. In S. Strack (Ed.), Differentiating normal and abnormal personality (2nd Ed., pp. 383–400). New York: Springer.
Interpersonal relationships
Psychological models
Timothy Leary | 0.763446 | 0.975766 | 0.744945 |
Aspect (geography) | In physical geography and physical geology, aspect (also known as exposure) is the compass direction or azimuth that a terrain surface faces.
For example, a slope landform on the eastern edge of the Rockies toward the Great Plains is described as having an easterly aspect. A slope which falls down to a deep valley on its western side and a shallower one on its eastern side has a westerly aspect or is a west-facing slope. The direction a slope faces can affect the physical and biotic features of the slope, known as a slope effect.
The term aspect can also be used to describe a related distinct concept: the horizontal alignment of a coastline. Here, the aspect is the direction which the coastline is facing towards the sea. For example, a coastline with sea to the northeast (as in most of Queensland) has a northeasterly aspect.
Aspect is complemented by grade to characterize the surface gradient.
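In GIS practice, aspect and grade (slope) are commonly derived together from a digital elevation model (DEM). The sketch below uses simple central differences on a tiny elevation grid; the grid values, cell size, and the raster convention that row indices increase southward are all illustrative assumptions, not a reproduction of any particular GIS package's algorithm.

```python
import math

def slope_aspect(dem, row, col, cellsize=1.0):
    """Slope (degrees) and aspect (compass degrees, 0 = north) at one interior
    cell of an elevation grid, via central differences. Rows are assumed to
    increase southward, as in a typical raster; flat cells degenerate to aspect 0."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cellsize)  # east minus west
    dzdy = (dem[row - 1][col] - dem[row + 1][col]) / (2 * cellsize)  # north minus south
    slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    # The surface faces downhill, i.e. opposite the uphill gradient vector.
    aspect = math.degrees(math.atan2(-dzdx, -dzdy)) % 360
    return slope, aspect

# A plane dipping toward the east (elevation falls as the column index grows):
dem = [[2, 1, 0],
       [2, 1, 0],
       [2, 1, 0]]
s, a = slope_aspect(dem, 1, 1)
print(round(s, 1), round(a, 1))  # 45.0 90.0 -> a 45-degree slope with an easterly aspect
```

Production tools typically use all eight neighbors (Horn's method) rather than the four used here, but the direction-of-steepest-descent idea is the same.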
Importance
Aspect can have a strong influence on temperature. This is because, in the northern and southern hemispheres, the sun's rays strike the surface at an angle of less than 90 degrees rather than from directly overhead. In the northern hemisphere, the north side of slopes is often shaded, while the south side receives more solar radiation for a given surface area, because the slope is tilted toward the sun and is not shaded by the terrain itself. The further poleward a location and the closer to the winter solstice, the more pronounced the effect of aspect; the effect also grows with steepness, so that at 40° north on 22 December (the winter solstice) no direct solar energy is received on slopes with an angle greater than 22.5°.
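The interplay of slope angle, aspect, and sun position described above follows the standard incidence-angle relation cos θ = cos s · cos z + sin s · sin z · cos(A_sun − A_slope), where s is the slope angle, z the solar zenith angle, and A the azimuths. A minimal sketch (the sun angles below are rough illustrative values for solar noon near the December solstice at 40° N, not computed from an ephemeris):

```python
import math

def direct_irradiance_factor(slope_deg, aspect_deg, sun_elev_deg, sun_az_deg):
    """Cosine of the angle between the sun and the surface normal:
    0 when the surface is turned away from the sun, 1 when the sun is normal to it."""
    s = math.radians(slope_deg)
    z = math.radians(90.0 - sun_elev_deg)       # solar zenith angle
    da = math.radians(sun_az_deg - aspect_deg)  # sun azimuth relative to aspect
    cos_theta = math.cos(s) * math.cos(z) + math.sin(s) * math.sin(z) * math.cos(da)
    return max(0.0, cos_theta)                  # clamp: no negative irradiance

# Solar noon near the December solstice at 40 degrees N: sun due south, elevation ~26.6 degrees.
sun_elev, sun_az = 26.6, 180.0
south = direct_irradiance_factor(30, 180, sun_elev, sun_az)  # south-facing 30-degree slope
north = direct_irradiance_factor(30, 0, sun_elev, sun_az)    # north-facing 30-degree slope
print(round(south, 2), round(north, 2))  # 0.83 0.0 -> the north-facing slope gets no direct beam
```

The clamped zero for the north-facing slope is the self-shading effect: a poleward slope steeper than the sun's elevation receives no direct beam at that moment.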
The aspect of a slope can have a significant influence on its local climate (microclimate). For example, because the sun is in the west at the hottest time of day, in the afternoon, a west-facing slope will in most cases be warmer than a sheltered east-facing slope (unless large-scale rainfall influences dictate otherwise). This can have major effects on the altitudinal and polar limits of tree growth and also on the distribution of vegetation that requires large quantities of moisture. In Australia, for example, remnants of rainforest are almost always found on east-facing slopes, which are protected from dry westerly winds.
Similarly, in the northern hemisphere a south-facing slope (more open to sunlight and warm winds) will therefore generally be warmer and drier due to higher levels of evapotranspiration than a north-facing slope. This can be seen in the Swiss Alps, where farming is much more extensive on south-facing than on north-facing slopes. In the Himalayas, this effect can be seen to an extreme degree, with south-facing slopes being warm, wet and forested, and north-facing slopes cold, dry but much more heavily glaciated.
Soil aspects
In some locales there are patterns of soil differences related to differences in aspect. Steep slopes with equatorward aspects tend to have soil organic matter levels and seasonal influences similar to those of level slopes at lower elevations, whereas poleward aspects show soil development similar to level soils at higher elevations. Soils with a prevailing windward aspect will typically be shallower, and often have more developed subsoil characteristics, than adjacent soils on the leeward side, where decelerating winds tend to deposit more airborne particulate material. Outside of the tropics, soils with an aspect directed toward the early-afternoon solar position will typically have the lowest soil moisture content and lowest soil organic matter content relative to other aspects in a locale. Aspect similarly influences seasonal soil biological processes that are temperature dependent. Particulate-laden winds often blow from a prevailing direction near the early-afternoon solar position; the effects combine in a pattern common to both hemispheres.
Coastal aspects
These are usually of importance only in the tropics, but there they produce many unexpected climatic effects:
The dryness of the Dahomey Gap, due to the rain-bearing winds moving parallel to the coast.
The summer dryness of the Coromandel Coast due to the southerly monsoon flowing parallel to the coast. Its wetness during the northeast monsoon is similarly explained.
The anomalous late autumn rainy seasons of central Vietnam and the coastal zone of northeastern Brazil for the same reason as above.
The unusual dryness of Port Moresby compared to the rest of New Guinea is because the National Capital District lies parallel to the trade winds, which have a drying effect. In Gulf Province and Lae, which receive their full force, rainfall during the southern winter is exceedingly heavy, with heavy rain and thunderstorms during the rainy season.
The relative dryness of the Queensland coast has the same cause as with Port Moresby.
See also
Microclimate
Pedology
References
Physical geography | 0.762774 | 0.976603 | 0.744928 |
Interdisciplinary arts | Interdisciplinary arts are a combination of arts that use an interdisciplinary approach involving more than one artistic discipline.
Examples of different arts include visual arts, performing arts, musical arts, digital arts, conceptual arts, etc. Interdisciplinary artists apply at least two different approaches to the arts in their artworks. Often a combination of art and technology, typically digital in nature, is involved.
See also
Electronic Visualisation and the Arts
Interdisciplinary Arts Department, Columbia College Chicago
Museums and Digital Culture
Universidad Nacional de las Artes, leading Interdisciplinary Arts programs in Argentina.
PhD Program in Interdisciplinary Arts, National Dong Hwa University
The School of Interdisciplinary Arts, Ohio University
References
Bibliography
The arts
Academic discipline interactions | 0.769705 | 0.967759 | 0.744889 |
Allocentrism | Allocentrism is a collectivistic personality attribute whereby people center their attention and actions on other people rather than themselves. It is a psychological dimension which corresponds to the general cultural dimension of collectivism. In fact, allocentrics "believe, feel, and act very much like collectivists do around the world." Allocentric people tend to be interdependent, define themselves in terms of the group that they are part of, and behave according to that group's cultural norms. They tend to have a sense of duty and share beliefs with other allocentrics among their in-group. Allocentric people appear to see themselves as an extension of their in-group and allow their own goals to be subsumed by the in-group's goals. Additionally, allocentrism has been defined as giving priority to the collective self over the private self, particularly if these two selves happen to come into conflict.
History
Allocentrism is closely related to collectivism; it is the psychological manifestation of collectivism. Scholars have discussed collectivism since at least the 1930s. Collectivism has been used to describe cultural-level tendencies and has been described as a "broad cultural syndrome." It was not until much later (1985) that Triandis, Leung, Villareal, and Clack proposed that the term allocentrism be used to describe collectivistic tendencies on the individual level. They proposed this because of the confusion that arises when talking about cultural-level collectivism versus individual-level collectivism. Allocentrism has therefore since been used by some scholars to describe personal collectivism, "the individual level analog of cultural collectivism," a very broad construct.
Allocentrism versus Idiocentrism
Allocentrism is contrasted with idiocentrism, the psychological manifestation of individualism. As stated earlier, allocentrism includes holding values and preferences of placing higher importance on in-group needs and goals over one's own, defining oneself in terms of the in-group, and seeing oneself as an extension of the in-group. Idiocentrism, however, is an orientation whereby individuals hold quite different values and preferences from those with an allocentric orientation. Idiocentric people tend to focus more on their own goals and needs rather than in-group ones. They prefer self-reliance, to make their own decisions without worrying about what others think, and enjoy competition. It seems people can be both allocentric and idiocentric, but how much they are either is dependent on the situation and how the individual defines that situation. Certain situations encourage more allocentric behavior. These are found more in some cultures than others. These situations include when people are rewarded by the social context for being group orientated, when cultural norms encourage conformity which leads to success, when goals are easier achieved through group action, and when there are not many options for acting independently.
Measuring Allocentrism
When researchers measure collectivism, they tend to use large-scale studies that look at the cultural level. They add up many people's responses within different cultures, with the unit of analysis being the whole culture; the N of these studies is the number of cultures. This can be confusing when trying to measure collectivism on an individual level, which is why the term allocentrism has been suggested. One way to measure allocentrism is to look at what it is correlated with, which includes high affiliation with others, low need to be unique, and high sensitivity to rejection. Allocentrism includes a sense of self that is interdependent, which can be measured by statements about the self starting with "I am" or by using interdependence scales. If individuals answer the "I am" statements with statements about others and common fate with others, they are deemed to be more allocentric. This method was highly recommended for measuring allocentrism.
Another aspect of allocentrism is the priority of in-group goals over personal goals, and this can be measured using the Collectivism Scale or scales that look at interdependence versus independence (Triandis et al., 1995). Allocentrism has been measured using the Collectivism Scale in three cultures (Korean, Japanese, and American) and found to have good concurrent and criterion validity and acceptable reliability (Cronbach's alpha .77–.88). It is a ten-item, five-point Likert scale that assesses how much an individual acts in his or her own self-interest versus his or her group's interest. The group can be defined in various ways, such as one's family, peer group, or work group. Allocentrism is a very broad construct and therefore cannot be measured using only a few items; it is therefore suggested that it be measured with multiple methods, owing to the limitations of each single method.
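The reliability figure cited above, Cronbach's alpha, is computed from item responses as α = k/(k−1) · (1 − Σ item variances / variance of totals), where k is the number of items. A minimal sketch (the Likert responses below are invented for illustration, not data from the Collectivism Scale):

```python
def cronbach_alpha(items):
    """items: one inner list per scale item, each holding one response per participant."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # each participant's scale total
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Invented responses to a 3-item, five-point Likert scale from 4 participants:
items = [[5, 4, 2, 1],
         [4, 5, 1, 2],
         [5, 5, 2, 1]]
print(round(cronbach_alpha(items), 2))  # 0.96 -> these toy items hang together tightly
```

Alpha approaches 1 as the items covary strongly relative to their individual spread, which is what the .77–.88 range reported for the Collectivism Scale expresses.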
Culture
Allocentrism tends to be found more in collectivistic cultures (about 60%) but can also be found in all cultures, and in every culture there is a "full distribution of both types." While individualism and collectivism are used on the broad cultural level, Triandis et al. (1985) suggested the use of idiocentrism and allocentrism respectively for conducting analyses on the individual level (within-culture analyses). All humans have both collectivist and individualist cognitive structures; people in collectivist cultures are exposed more to collectivist cognitive structures and hence tend to be more allocentric than those in individualistic cultures. The amount of allocentrism an individual portrays depends in a large part on their culture; there is a possibility that an individual could be high (or low) on both allocentrism and idiocentrism. Different people in the same culture can have different levels of allocentrism and it is both group and setting specific. Minorities in the US such as Hispanics and Asians tend to be highly allocentric.
There are certain personality dimensions that all allocentrics share despite whether they are from an individualistic (American) or collectivistic (Japanese and Korean) culture. These dimensions include high affiliation with others, being sensitive to rejection from others, and less of a need for individual uniqueness.
Situation
For allocentrics, the situation is of paramount importance and they tend to define themselves relative to the context. Priming people to think about commonalities that they have with family and friends gets them to be more allocentric. Allocentrics tend to be more cooperative in a collectivistic situation and less in an individualistic situation. There are, however, also transituational aspects to allocentrism. Even when allocentrics are living in a more individualistic culture, they still will put more emphasis on relationships than idiocentrics through joining groups such as gangs, churches, and collectives.
Sociability
Allocentrics tend to receive higher quality and more social support than idiocentrics; they tend to be more social, interdependent with others, and pay a lot of attention to their in-group and family. This could possibly be because some of the important values of allocentrics are cooperation, honesty, and equality. Allocentrics usually perceive that they have more and better social support than idiocentrics. The amount of social support allocentrics receive seems to be related to their well-being with higher support indicative of higher levels of well-being and lower support with lower well-being. Allocentrics tend to be less lonely, receive more social support (and are more satisfied with it), and are more cooperative than idiocentrics.
Subjective well-being (SWB) is a term psychological researchers have used when studying happiness. The amount of social support allocentrics in collectivist countries received seemed to be related to their well-being, with higher support indicative of higher levels of well-being and lower support of lower well-being. North Americans whose lifestyles are more allocentric tend to have higher subjective well-being than those whose lifestyles are idiocentric. In addition, allocentrics’ evaluation of their in-group, in addition to how they perceive others view their group, is positively related to higher subjective well-being. Allocentrism had a greater effect on the SWB of African Americans than on Euro-Americans. Furthermore, idiocentrism was more negatively related to SWB for Euro-Americans than for African Americans.
Big Five
Allocentrism is related to the Big Five personality traits. It is negatively correlated to Openness to experience and positively correlated with Agreeableness and Conscientiousness (Triandis, 2001).
Ethnocentrism
Allocentrics tend to be more ethnocentric in terms of showing more negative attitudes towards people who are not in their group and more positive attitudes toward those in their own group. People who are in allocentrics’ in-group are considered much closer than out-group members, who are kept at a much larger social distance. Allocentrics tend to minimize within-group differences while preferring equal outcomes in social dilemmas.
Consumer ethnocentrism
Allocentric people tend to be more consumer ethnocentric (the tendency to prefer the products of one's own country when shopping). Huang et al. (2008) looked at consumer ethnocentrism (CET) and allocentrism among a group of Taiwanese participants in relation to Korean products sold in Taiwan versus national products. This study found that allocentrism with parents was positively correlated with higher CET. However, allocentrism with friends was negatively correlated with CET.
Tourism and travel
The term allocentrism has also been used in the travel field, with a different meaning from its use in psychological research. Here the term allocentric traveler refers to a traveler who is an extroverted venturer. This is contrasted with the psychocentric traveler, who is dependable, less adventurous, and cautious. Allocentric travelers tend to be curious, confident, and novelty-seeking, and prefer to travel by plane and alone. They often visit locations that the average traveler would not consider visiting. An estimated 4% of tourists are true allocentric travelers, with the majority being midcentrics (halfway between psychocentric and allocentric). Female allocentric travelers were found to be more neurotic and less extroverted than psychocentric travelers. American and Japanese college students who reported being more horizontally individualistic tended to prefer more allocentric destinations. A gap was shown between the vacations individuals would ideally like to take and the ones they actually take: it was predicted that about 17% of individuals would go to allocentric destinations, but only 3% actually did. It seems people tend to compromise the ideal for practicality, although some people will choose allocentric locations at some point in their lives.
Notes
References
Bettencourt, B. A., & Dorr, N. (1997). Collective self-esteem as a mediator of the relationship between allocentrism and subjective well-being. Personality and Social Psychology Bulletin, 23(9), 955–964.
Hoxter, A. L., & Lester, D. (1988). Tourist behavior and personality. Personality and Individual Differences, 9(1), 177–178.
Huang, Y. (2008). Allocentrism and consumer ethnocentrism: The effects of social identity on purchase intention. Social Behavior and Personality, 36(8), 1097-1097.
Hulbert, L. G., Corrêa, d. S., & Adegboyega, G. (2001). Cooperation in social dilemmas and allocentrism: A social values approach. European Journal of Social Psychology, 31(6), 641–657.
Kernahan, C., Bettencourt, B. A., & Dorr, N. (2000). Benefits of allocentrism for the subjective well-being of African Americans. Journal of Black Psychology, 26(2), 181–193.
Litvin, S. W. (2006). Revisiting Plog's model of allocentricity and psychocentricity . . . one more time. Ithaca, NY: Sage Publications, Inc.
McKercher, B. (2005). Are psychographics predictors of destination life cycles? Journal of Travel & Tourism Marketing, 19(1), 49.
Muller, H. M. (1935). Democratic collectivism. New York: Wilson.
Sakakida, Y. (2004). A cross-cultural study of college students' travel preferences. Journal of Travel & Tourism Marketing, 16(1), 35–41.
Seligman, M. E. P. (2002). Authentic happiness: Using the new positive psychology to realize your potential for lasting fulfillment. New York: The Free Press.
Sinha, J. B. P., & Verma, J. (1994). Social support as a moderator of the relationship between allocentrism and psychological well-being. In U. Kim, H. C. Triandis, Ç. Kâğitçibaşi, S. Choi & G. Yoon (Eds.), (pp. 267–292). Thousand Oaks, CA, US: Sage Publications, Inc.
Trafimow, D., Triandis, H. C., & Goto, S. G. (1991). Some tests of the distinction between the private self and the collective self. Journal of Personality and Social Psychology, 60(5), 649–655.
Triandis, H. C. (2000). Allocentrism-idiocentrism. In A. E. Kazdin (Ed.). Washington, DC: American Psychological Association; New York, NY: Oxford University Press.
Triandis, H. C., Leung, K., Villareal, M. J., & Clack, F. I. (1985). Allocentric versus idiocentric tendencies: Convergent and discriminant validation. Journal of Research in Personality, 19(4), 395–415.
Triandis, H. C. (1995). Individualism & collectivism. Boulder: Westview Press.
Triandis, H. C., & University Publications of America (Firm). (1983). Allocentric vs. idiocentric social behavior : A major cultural difference between Hispanics and the mainstream. [Urbana-Champaign, IL]: University of Illinois.
Triandis, H. (1995). Multimethod probes of allocentrism and idiocentrism. International Journal of Psychology, 30(4), 461–480.
Cross-cultural psychology | 0.764418 | 0.974433 | 0.744874 |
Postmodernism in political science | Postmodernism in political science refers to the use of postmodern ideas in political science.
Postmodernists believe that many situations which are considered political in nature cannot be adequately discussed in traditional realist and liberal approaches to political science. Postmodernists cite examples such as the situation of a “draft-age youth whose identity is claimed in national narratives of ‘national security’ and the universalizing narratives of the ‘rights of man,’” or of “the woman whose very womb is claimed by the irresolvable contesting narratives of ‘church,’ ‘paternity,’ ‘economy,’ and ‘liberal polity.’” In these cases, postmodernists argue that there are no fixed categories, stable sets of values, or common-sense meanings to be understood in their scholarly exploration.
In these margins, postmodernists believe that people resist realist concepts of power which is repressive, in order to maintain a claim on their own identity. What makes this resistance significant is that among the aspects of power resisted is that which forces individuals to take a single identity or to be subject to a particular interpretation. Meaning and interpretation in these types of situations is always uncertain; arbitrary in fact. The power in effect here is not that of oppression, but that of the cultural and social implications around them, which creates the framework within which they see themselves, which creates the boundaries of their possible courses of action.
Postmodern political scientists, such as Richard Ashley, claim that in these marginal sites it is impossible to construct a coherent narrative, or story, about what is really taking place without including contesting and contradicting narratives, and still have a “true” story from the perspective of a “sovereign subject,” who can dictate the values pertinent to the “meaning” of the situation. In fact, it is possible here to deconstruct the idea of meaning. Ashley attempts to reveal the ambiguity of texts, especially Western texts, how the texts themselves can be seen as "sites of conflict" within a given culture or worldview. By regarding them in this way, deconstructive readings attempt to uncover evidence of ancient cultural biases, conflicts, lies, tyrannies, and power structures, such as the tensions and ambiguity between peace and war, lord and subject, male and female, which serve as further examples of Derrida's binary oppositions in which the first element is privileged, or considered prior to and more authentic, in relation to the second. Examples of postmodern political scientists include post-colonial writers such as Frantz Fanon, feminist writers such as Cynthia Enloe, and postpositive theorists such as Ashley and James Der Derian.
See also
Postmodernism (international relations)
References
Political science theories
Postmodernism | 0.773572 | 0.962893 | 0.744867 |
Family Environment Scale | The Family Environment Scale (FES) was developed and is used to measure social and environmental characteristics of families. It can be used in several ways, in family counseling and psychotherapy, to teach program evaluators about family systems, and in program evaluation.
Scale inventory
The scale is a 90-item inventory with 10 subscales measuring three dimensions: interpersonal Relationships, Personal Growth, and System Maintenance. The Relationship dimension includes measurements of cohesion, expressiveness, and conflict. Cohesion is the degree of commitment and support family members provide for one another, expressiveness is the extent to which family members are encouraged to express their feelings directly, and conflict is the amount of openly expressed anger and conflict among family members.
Five subscales refer to Personal Growth: independence, achievement orientation, intellectual-cultural orientation, active-recreational orientation, and moral-religious emphasis. Independence assesses the extent to which family members are assertive, self-sufficient and make their own decisions. Achievement Orientation reflects how much activities are cast into an achievement oriented or competitive framework. Intellectual-cultural orientation measures the level of interest in political, intellectual, and cultural activities. Active-recreational orientation measures the amount of participation in social and recreational activities. Moral-religious emphasis assesses the emphasis on ethical and religious issues and values.
The final two subscales, organization and control, are for System Maintenance. These measure how much planning is put into family activities and responsibilities and how much set rules and procedures are used to run family life.
Family Environment Scale
There are three equivalent forms of the FES that are used to measure different aspects of the family. The Real Form (Form R) measures people's attitudes about their current family environment, the Ideal Form (Form I) measures a person's perception of an ideal family, and the Expectations Form (Form E) assesses the family's ability to withstand change.
The test takes approximately 20 minutes to complete, and ten scores are derived from the subscales to create an overall profile of the family environment. Based on these scores, families are then grouped into one of three family environment typologies based on their most salient characteristics. The Family Environment Scale gives counselors and researchers a way of examining each family member's perceptions of the family in those three categories. This also helps capture the perception of the family's functioning from one of its own members.
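Deriving subscale scores of this kind reduces to summing each respondent's item endorsements within each subscale. A minimal sketch (the item-to-subscale key, the three-subscale subset, and the responses are all invented for illustration; the actual FES scoring key is published in the manual):

```python
# Hypothetical key mapping each subscale to the (0-based) indices of its items;
# only three of the ten subscales are shown, with three items each.
KEY = {
    "cohesion":       [0, 1, 2],
    "expressiveness": [3, 4, 5],
    "conflict":       [6, 7, 8],
}

def score_profile(responses, key=KEY):
    """responses: list of 0/1 item endorsements for one respondent."""
    return {scale: sum(responses[i] for i in idx) for scale, idx in key.items()}

# One respondent endorsing mostly cohesion items:
print(score_profile([1, 1, 1, 0, 1, 0, 0, 0, 1]))
# {'cohesion': 3, 'expressiveness': 1, 'conflict': 1}
```

The resulting dictionary is the raw profile; in practice raw scores are then converted to standard scores before a family is assigned to a typology.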
Reliability and validity of FES
Reliability estimates for the subscale measurements are consistent, test–retest reliability is significant, and the validity of the scale is supported by evidence. There is, however, no proof that the ten subscales of the FES are independent, because no analysis of this has been completed.
Notes
References
Moos, R. & Moos, B. (1994). Family Environment Scale Manual: Development, Applications, Research - Third Edition. Palo Alto, CA: Consulting Psychologist Press.
Sherman, Robert, and Norman Fredman. Handbook of Measurements for Marriage and Family Therapy. Routledge, 2014.
External links
The Family Environment Scale
Family | 0.780972 | 0.953762 | 0.744861 |
Cultural divide | A cultural divide is "a boundary in society that separates communities whose social economic structures, opportunities for success, conventions, styles, are so different that they have substantially different psychologies". A cultural divide is the virtual barrier caused by cultural differences, that hinder interactions, and harmonious exchange between people of different cultures. For example, avoiding eye contact with a superior shows deference and respect in East Asian cultures, but can be interpreted as suspicious behavior in Western cultures. Studies on cultural divide usually focus on identifying and bridging the cultural divide at different levels of society.
Significance
A cultural divide can have significant impact on international operations on global organizations that require communication between people from different cultures. Commonly, ignorance of the cultural differences such as social norms and taboos may lead to communication failure within the organization.
Sufficiently large cultural divides may also discourage groups from seeking to understand the other party's point of view, as differences between the groups are seen as immutable. Such gaps may in turn inhibit efforts made to reach a consensus between these groups.
Factors and causes
Internal
Internal causes of cultural divide refer to causes based on innate or personal characteristics of an individual, such as a personal way of thinking or an internal mental structure or habit that influences how a person acts.
Ideological differences
Rules, norms, and ways of thinking are often inculcated from a young age, and these help to shape a person's mindset and thinking style, which explains how two different cultural groups can view the same thing very differently. For example, Western cultures, with their history of Judeo-Christian belief in the individual soul and focus on the pursuit of individual rights, tend to adopt an individualistic mindset, whereas East Asian cultures, with a history of teachings based on Confucianism, tend to view the individual in relation to the larger community and hence develop a more collectivist mindset. Hence, it is more common for people in collectivist cultures to make an external attribution, while people in individualistic cultures make an internal attribution. Thus, these differences can cause the same people, situations, or objects to be perceived differently.
Stereotypes
Perceptions of an out-group or of a different culture tend to be perpetuated and reinforced by the media or by long-standing stereotypes. As a result of using schemas to simplify the world as we look at it, we rely on a set of well-established stereotypes available in our own culture to define and view the out-group. The risk is that a stereotype may be inaccurate and blind us to key understandings of a certain class of people; and because stereotypes tend to persist even in the face of new information, the problem of cultural divide can be perpetuated.
Social identity theory
Social identity theory implies an inherent favoritism towards people of one's own social group or people who share similar characteristics, also known as in-group favoritism. The desire to achieve and maintain a positive self-image motivates people to place their own group in a superior position compared to the out-group.
External
Cultural divide can also be caused by external influences that shape the way an individual thinks about people from other cultures. For example, the cultural disconnect and misunderstandings between the USA and Arab countries have been attributed to the spread of superficial information that "serve to promote self-interests and perpetuate reckless acts of individuals, misguided official policies and irresponsible public narratives, all colored by self-righteousness and hypocrisy". An individual's experience of foreign cultures can be largely shaped by the information available to that individual, and the cultural divide arises from the difference between a culture and how it is perceived by people foreign to it.
Some examples of external sources that influence views on other cultures include:
Official government policies
This also includes any official source of information from the government, such as speeches by government officials. Government attitudes towards foreign governments often shape the information released to citizens, influencing the way they think about foreign governments and foreign peoples. One extreme example of this is propaganda.
News and media reports
Media bias can cause misunderstandings and cultural divide by controlling the information and perceptions of other cultures. For example, media bias in the United States can exacerbate the political divide between the liberals and the conservatives.
Social pressure
Due to a fundamental need for social companionship and a desire to be accepted and liked by others, people often conform to social norms and adopt the group's beliefs and values. Hence, groups that are already culturally divided will tend to remain that way as the effect of normative social influence is self-perpetuating.
Bridging the cultural divide
When a cultural divide can be bridged, it can be beneficial for all parties. However, when cultures are vastly different, or if people are opposed to such exchange, the cultural divide may prove difficult to cross.
Understanding cultural boundaries
Being aware of cultural boundaries when dealing with others is important to avoid accidentally offending the other party and turning the difference into a divide. Educating both parties in the reasons behind these boundaries would also help foster trust and cooperation between them. This also has a side effect of creating a virtuous cycle, where the improved understanding between both parties grants them an advantage when dealing with members of the opposite culture, encouraging future communication and reducing the impact of a cultural divide.
Cultural intelligence
Developing high cultural intelligence increases one's openness and hardiness when dealing with major differences in culture. Improving one's openness requires both humility when learning from others and inquisitiveness in actively pursuing opportunities to develop one's cultural awareness. Strong hardiness allows one to deal better with stress, cultural shocks and tension when interacting with others in a foreign context.
Increased interaction
Increasing interaction between two groups of people will help increase mutual understanding and fill in any gaps in knowledge of another group's culture. However, even entrenched views can be changed.
See also
Intercultural communication
Cultural assimilation
Parallel society
Cultural conflict
References
Divide
Multiculturalism
Cultural bias
Cultural bias is the interpretation and judgment of phenomena by the standards of one's own culture. It is sometimes considered a problem central to social and human sciences, such as economics, psychology, anthropology, and sociology. Some practitioners of these fields have attempted to develop methods and theories to compensate for or eliminate cultural bias.
Cultural bias occurs when people of a culture make assumptions about conventions, including conventions of language, notation, proof and evidence. They are then accused of mistaking these assumptions for laws of logic or nature. Numerous such biases exist, concerning cultural norms for color, mate selection, concepts of justice, linguistic and logical validity, the acceptability of evidence, and taboos.
Psychology
Cultural bias has no a priori definition. Instead, its presence is inferred from differential performance of socioracial (e.g., Blacks, Whites), ethnic (e.g., Latinos/Latinas, Anglos), or national groups (e.g., Americans, Japanese) on measures of psychological constructs such as cognitive abilities, knowledge or skills (CAKS), or symptoms of psychopathology (e.g., depression). Historically, the term grew out of efforts to explain between group score differences on CAKS tests primarily of African American and Latino/Latina American test takers relative to their White American counterparts and concerns that test scores should not be interpreted in the same manner across those groups in the name of fairness and equality (see also Cognitive dissonance). Although the concept of cultural bias in testing and assessment also pertains to score differences and potential misdiagnoses with respect to a broader range of psychological concepts, particularly in applied psychology and other social and behavioral sciences, this aspect of cultural bias has received less attention in the relevant literature.
Cultural bias in psychological testing refers to standardized psychological tests conducted to determine the level of intelligence of test-takers. Limitations of such verbal or non-verbal intelligence tests have been observed since their introduction. Many tests have been objected to because they produced poorer results for ethnic or racial minority students than for students from racial majorities. There is minimal evidence supporting claims of cultural bias, and cross-cultural examination is both possible and done frequently. As discussed above, the learning environment and the questions or situations posed in the test may be familiar and strange at the same time to students from different backgrounds: the type of ambiguity in which intellectual differences become apparent in individual capacities to resolve the strange-yet-familiar entity.
Economics
Cultural bias in economic exchange is often overlooked. A study done at Northwestern University suggests that the cultural perception two countries have of each other plays a large role in the economic activity between them. The study suggests that low bilateral trust between two countries results in less trade, less portfolio investment, and less direct investment. The effect is amplified for goods that are more trust-intensive.
Anthropology
Culture theory in anthropology holds that cultural bias is a critical element of human group formation.
Sociology
Societies with conflicting beliefs are thought to be more likely to exhibit cultural bias, as bias depends on a group's standing in the society whose social constructions shape how a problem is framed. One example of cultural bias within sociology can be seen in a study done at the University of California by Jane R. Mercer on how test "validity", "bias", and "fairness" in different cultural belief systems affect one's future in a pluralistic society. Cultural bias was defined as "the extent that the test contains cultural content that is generally peculiar to the members of one group but not to the members of another group", which leads to a belief that "the internal structure of the test will differ for different cultural groups". In addition, the types of errors made on culture-biased tests depend on the cultural group. The idea progressed to the conclusion that a non-culture-biased test represents the ability of a population as intended, rather than the abilities of a group that is not represented.
History
Cultural bias may also arise in historical scholarship when the standards, assumptions and conventions of the historian's own era are anachronistically used to report and to assess events of the past. The tendency is sometimes known as presentism, and is regarded by many historians as a fault to be avoided. Arthur Marwick has argued that "a grasp of the fact that past societies are very different from our own, and... very difficult to get to know" is an essential and fundamental skill of the professional historian; and that "anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history."
See also
Cognitive bias
Confirmation bias
Cultural pluralism
Determinism
Embodied philosophy
Environmental racism
Ethnocentrism
Framing (social sciences)
Goodness and value theory
Observer-expectancy effect
Out-group homogeneity
Social Darwinism
Social learning theory
Theory-ladenness
Ultimate attribution error
Xenocentrism
References
Bibliography
External links
Cognitive biases
Cultural anthropology
Value (ethics)
Ethnocentrism
Bias
Resonance (sociology)
Resonance is a quality of human relationships with the world proposed by Hartmut Rosa. Rosa, professor of sociology at the University of Jena, conceptualised resonance theory in Resonanz (2016) to explain social phenomena through a fundamental human impulse towards "resonant" relationships.
Background
Rosa outlined the cause of several crises of modernity in his monograph on social acceleration and dynamic stabilisation. In this monograph, Rosa put forward social acceleration as the cultural logic of modernity and the cause of the modern burnout crisis, environmental issues, and mass alienation. Resonance sets out to provide a solution to the alienation caused by social acceleration in late modernity. He theorises that resonance, a normative experience in which an individual experiences a transformational, responsive, and affectual relationship to the world, is the solution to the extremes of alienation caused by modernity.
Definition
With the aim of theorising a sociology of 'the good' as the dialectical opposite of alienation, Rosa outlines the following definition of resonance:
"...a kind of relationship to the world, formed through affect and emotion, intrinsic interest, and perceived self-efficacy, in which subject and world are mutually affected and transformed."
Resonance is not an echo, but a responsive relationship, requiring that both sides speak with their own voice. This is only possible where strong evaluations are affected. Resonance implies an aspect of constitutive inaccessibility.
Resonant relationships require that both subject and world be sufficiently “closed” or self-consistent so as to each speak in their own voice, while also remaining open enough to be affected or reached by each other.
Resonance is not an emotional state, but a mode of relation that is neutral with respect to emotional content. This is why we can love sad stories.
Resonance term
The acoustic term resonance describes a subject–object relationship as a vibrating system in which both sides mutually stimulate each other. However, as with a tuning fork, they do not merely return the received sound, but speak "with their own voice". According to Rosa, the subjects' relational abilities and sense of their places in the world are influenced and reformed by such resonant experiences. Negative or alienated experiences, then, are those which lack resonance, and provide what Rahel Jaeggi terms 'a relation of relationlessness'. Resonance is therefore a way of approaching the question of successful relations between subject and world in the sense of "good life", which marks a significant departure from a critical theory primarily focused on relations of alienation.

The possible points of reference of such resonances are ubiquitous and are described in four axes:
Horizontal resonances take place between two (or more) people, in love and family relationships, friendships or political space.
Diagonal resonance axes are relationships to the world of things and regular activities, such as school or sports practice.
Vertical resonance axes are relationships to abstract, ontological categories, such as nature, art, history or religion.
A recent fourth addition to resonance theory by Rosa is the axis of the self, the extent to which one feels a non-alienated sense of relation with their own body and psyche.
For instance, a horizontal relationship of resonance is evidenced in the relationship between the newborn and primary caregivers, by whose reception or rejection of interactions the fundamental attachment patterns develop. Diagonal resonances are conveyed by Rosa through Rainer Maria Rilke's conception of 'The singing of things', which conveys a feeling of being called by material things, such as mountains, artworks or household possessions. Feeling part of nature, an epoch of history, or a moment of worship is accounted for in the vertical axis of resonance.
In all these contexts, resonant experiences are juxtaposed with silent or instrumental world relations, determined by an orientation towards domination and attaining resources, which are primarily concerned with the achievement of a useful goal. For example, a mountain tour aimed at tourists can either be a resonant experience (as an opportunity to confront the beautiful but challenging walk), or a more purpose-oriented, instrumental, and therefore "mute" experience.

Relations that are controlling, hostile or anxious result in "silent", non-resonant experiences. Rosa argues that much of consumer culture promises resonance, commenting "Buy yourself resonance! is the implicit siren song of nearly all advertising campaigns and sales pitches." However, the attempt to control the experience of resonance ends up inhibiting the experience by instrumentalising it. Rosa argues that mediopassivity, a stance in which the subject is neither entirely active nor passive in an experience, allows enough uncontrollability for resonance to take place. Another prerequisite for the establishment of resonances are the strong evaluations of the subject, which give the object a significance that goes beyond desire or attractiveness.
If an attempt is made to outline as resonance what people seek and long for in their innermost being, it is by no means conceived as a permanent state that can be established, but always as a selective, momentary success or self-transformation that stands out against the background of a world that is predominantly silent, instrumental. Resonance in this sense is therefore essentially characterized by the fact that it cannot be produced systematically and intentionally, but is ultimately unavailable. Nevertheless, Rosa calls for institutional reforms which are geared towards resonance and avoid the kinds of extractive instrumental activity which cause alienation.
Social theory
As a sociological theory, resonance theory deals with the social conditions that promote or hinder successful world relationships. The conditions of modernity have created what he terms social acceleration, an approach to time which is geared towards increasing resources and innovations in as short a time as possible. This results in a logic of increase, which requires a constant continuation of improvement and multiplication of resources. This is accompanied by an increasing pressure to accelerate: in order to maintain the status quo within a modern society, societies must continually increase the number of services, innovations and material production opportunities. Rosa sees this mode of dynamic stabilization as the defining characteristic of modernity. While pre-modern societies transform themselves adaptively, i.e. in response to changed conditions, modern society is virtually defined by its compulsion for continuous economic, social and technological transformation.
While the current phase of late modernism is characterized by a high resonance sensitivity and expectation of its subjects, the mode of dynamic stabilization results in a loss of resonance. Rosa notes three essential manifestations of the current crisis of modernity:
the ecological crisis due to the rapid extraction of nature's finite resources feeding an unlimited expectation of increase
the political crisis, arising from democratic negotiation processes being too slow to keep up with accelerated technological change, resulting in social changes which are therefore regarded as ineffective or obsolete
the psychological crisis of the subjects, overwhelmed by acceleration and therefore experiencing burnout
Resonance theory is thus in the tradition of critical theory from Marx to Adorno and Horkheimer to Habermas and Honneth. It shares the central finding of alienation as an obstacle to a successful life, but attempts to contrast this with a positive counter-concept, the concept of resonance. Honneth, for example, has already made this attempt with the concept of recognition. Despite critiques of the vagueness of the concept of resonance, Rosa sees this as a universal concept that includes concepts such as recognition, justice or self-efficacy.
Policies
Rosa says that “this sociology of human relationships to the world does not pursue its own political agenda”. However, he claims that resonance can serve as a driver in political debates, providing a standard for action. He lists, for instance:
Preserving resonances in how we interact with the natural world. For instance, problematising practices such as animal testing and mass factory farming
Recognition of labor as valuable as a sphere of resonance, rather than mere economic output
Schools being made into resonance spaces which seek to reduce alienation
A democratic arrangement that sees democracy as an instrument for “adaptively transforming public institutions, formative structures, and the shared lifeworld, and thus creates opportunities for the experience of genuine collective self-efficacy”
Rosa finds some consonance between resonance and Habermas's concept of communicative action. However, he criticises Habermas for mainly considering 'intersubjective resonant relationships', and ignoring relationships to the world, to things, and to non-humans. Moreover, he suggests that Habermas overlooks the aesthetic and emotional relationships evidenced in Fromm, Marcuse and Adorno. He finds more alignment in Honneth's theory of recognition, in which:
"...understanding is geared toward inner accordance with and the communicative accommodation of Others … recognition, in the three forms of love/friendship, legal recognition, and social esteem, establishes three kinds of resonant axes to the social world that allow individuals to experience self-confidence, self-respect, and self-esteem."

In advocating for a system which emphasises a plurality of voices and is critical of escalatory logics, resonance is critical of both excessive bureaucracy and social acceleration. Rosa advocates for policies which help achieve a 'paradigm shift from the logic of escalation to sensitivity to resonance', prefacing that "This does not mean that there be no space for competition in markets", but instead more regulation against "blind escalation".
Reception
Rosa has been credited for his search for a far-reaching framework for addressing social issues in a manner quite contrasted with a critical theory which looks at the world in negative terms, often summarised with Adorno's "There is no right life in the wrong one". Such an appreciation of resonance theory as a positive continuation of critical theory can be found with Anna Henkel. Micha Brumlik sees in the comprehensive combination of interdisciplinary strands the completion, but with it also the end, of critical theory, which thereby loses its "theoretically informed irreconcilability looking coldly at society". On the other hand, Brumlik states that this comprehensive derivation of the concept of resonance from a multitude of perspectives and contexts is flawed, since "resonance" has an almost arbitrary effect that lacks conceptual precision. Brumlik concludes that it is therefore ultimately unsuitable as a basic concept of social philosophy.
Other critics refer to Rosa's alleged recourse to the intellectual world of Romanticism. Rosa does indeed frequently refer to the sensitivity to resonance implicit in Romanticism, even in conscious contradiction of rationalist concepts, but at the same time sees the danger of Romanticism's championing of purely subjective emotion instead of resonance. Thus he rather describes the continuing effect of the resonance concepts of Romanticism in modernity, without propagating a return to it.
Rosa's book argues that the socio-political outlook on concrete solutions is poor, and he has publicly shared his dissatisfaction with the paths to post-growth suggested in his original monograph. Despite reference to political reform proposals such as that of a universal basic income and emerging pilot projects of post-growth economies, Rosa does not necessarily provide a direct route to this:
"It is akin to the question of how humanity was able to move out of the social formations of the “Middle Ages” into modernity. Both cases involve a fundamental transformation of humanity’s relationship to the world that sets subjective and institutional, cultural and structural, cognitive, affective and habitual levels in motion all at once, without either a clear starting point or a unilinear direction of propagation. The theory of resonance articulated here, however, attempts to provide a small building block by at least making it possible to again perceive a different form of existence."

Literary theorist Rita Felski, one of the originators of postcritique, which advocates for a literary theory which moves beyond the hermeneutics of suspicion, has celebrated resonance theory as an educational approach which is inclusive of both critical theory and aesthetic appreciation. Felski argues that resonance provides an alternative to parametric and instrumental approaches to education, and makes the case for educational experiences that "speak[s] to the force of intellectual engagement for its own sake", based on attachment, enchantment and affect.
There is a particular interest in Rosa's ideas in education research. Several studies investigating the possibility of resonant school structures and pedagogies have been published.
References
Sociological theories
Gender inequality in curricula
Gender inequality in curricula refers to evidence that female and male learners are not treated equally in various types of curriculum. There are two types of curricula: formal and informal. Formal curricula are introduced by a government or an educational institution and are defined as sets of objectives, content, resources and assessment. Informal curricula, also described as hidden or unofficial, refer to attitudes, values, beliefs, assumptions, behaviours and undeclared agendas underlying the learning process. These are formulated by individuals, families, societies, religions, cultures and traditions.
More particularly, gender inequality is apparent in the curricula of both schools and Teacher Education Institutes (TEIs). Physical education (PE) is particularly sensitive, as gender equality issues arising from preconceived stereotyping of boys and girls are frequent. It is often believed that boys are better at physical exercise than girls, and that girls are better at 'home' activities such as sewing and cooking. This belief prevails in many cultures around the world and is not bound to any one culture.
Curriculum language and gender
Some curricular objectives show that the language used is gender biased. Indeed, it can happen that the language itself can communicate the status of being male or female, and the status of being assertive or submissive. In many cultures, 'being male' is expressed in language as being confident. In Japan, according to Pavlenko, female Japanese learners are led 'to see English as a language of empowerment. The students state that ... the pronoun system of English allow[s] them to position and express themselves differently as more independent individuals than when speaking Japanese.' This example clearly shows how languages, reflecting cultures, are the basis for introducing gender inequalities highlighting the curricula.
Curriculum structure and gender
Many Teacher Education Institutes (TEIs) around the world, which set the curricula for teaching diplomas, show a worrying shortcoming regarding issues of gender equality. For instance, students preparing to become schoolteachers are taught education theories, the psychology of learning, teaching methodologies and class management, among others, plus one or two practical courses. There is no emphasis on gender equality-related issues in their training. Even courses on curriculum design ignore these issues. This omission is highly problematic and should be addressed by curriculum designers at TEIs. It is important that gender equality issues are part of the curriculum in order to help future teachers become more sensitive to them, so that, once they become teachers, they can act as agents of change in their schools.
Content of instructional materials and gender
Several studies have shown that textbooks reinforce traditional views of masculinity and femininity and encourage children to accept a traditional gender order. For example, a recent study conducted by Kostas (2019) found that female characters in primary education textbooks were portrayed mainly as mothers and housewives, whilst male characters were identified as breadwinners. Additionally, teachers often use materials, including texts, images or examples, that reinforce stereotyped roles. Typical examples include the roles of the father (reading the newspaper) and the mother (serving dinner); the doctor (male) and the nurse (female); playing ball (boy) and combing a doll's hair (girl). By doing this, teachers also promote gender bias, which can favor girls as well: for instance, tolerating bullying and noise-making from boys while expecting politeness and gentleness from girls. Gender bias does not only favor males over females; it can also go the other way around. Both are negative when considering a healthy relationship between the teacher and the learner.
A gender equal curriculum
A gender-equal curriculum reflects the diversity of society by increasing the number of examples that highlight successful female characters, in texts as well as in the examples used during classes. Instructional materials, including textbooks, handouts and workbooks, should be studied to determine whether they are gender-biased, gender-neutral or gender-sensitive/responsive. In Teacher Education Institutes (TEIs), curricula need to include elements that recognize gender equality-related issues in learning materials, and how those issues can be addressed by teachers once they take up the profession and start to use these materials in their classes.
Quality curriculum should include gender equality as a result of teaching and learning in TEIs, as well as in schools. Educational systems that adopt gender equality aspects are able to:
Revise its curriculum framework to explicitly state commitment to gender equality.
Emphasize attitudes and values that promote gender equality.
Ensure that the content of the course syllabus includes values and attitudes of gender equality. Revise textbooks and learning materials to become gender-sensitive.
Remove gender-based stereotypes that contribute towards perpetuating gender inequalities.
Approaches in preventing gender inequality and school-related gender-based violence
It is possible to integrate school-related gender-based violence (SRGBV) prevention into the curriculum for children of all school-going ages. Topics include comprehensive sexuality education (CSE), life skills education, civics education and targeted approaches on managing aggression, developing bystander skills, forming healthy relationships and protection from bullying – these elements are often combined.
Examples
The World Starts with Me, Uganda
In 2002, two Dutch NGOs, the World Population Foundation and Butterfly Works, created The World Starts with Me. Aimed at students aged 12–19 years old, it is a low-tech, online, interactive sex education programme. The programme uses David and Rose, two virtual peer educators who guide students through fourteen lessons around self-esteem, healthy relationships, sexual development, safer sex, gender equality and sexual rights. Each lesson includes an assignment-type task, for instance creating a storyboard, making an artwork or conducting a role play on the topic of that lesson. Evaluation of the programme (using a quasi-experimental design) showed significant positive effects on non-coercive sex among students in intervention groups. Indeed, they reported having more confidence in their ability to deal with situations where sexual pressure or force would be used.
Programs H and M, Brazil
Named after the Spanish and Portuguese words for men and women (hombres in Spanish, homens in Portuguese; mujeres in Spanish, mulheres in Portuguese), Programs H and M used an evidence-based curriculum and included a group of educational activities designed to be carried out in same-sex group settings, led by same-sex facilitators who can serve as gender-equitable role models.
The manuals used by these programmes include activities on fatherhood/motherhood and caregiving, violence prevention, and sexual and reproductive health, including HIV/AIDS and other related issues. The activities included brainstorming, role-playing and other exercises which led students to reflect on how boys and girls are socialized, the pros and cons of this socialization, and the benefits of changing certain behaviours.
The programme was evaluated in several locations, mostly through quasi-experimental studies. It showed evidence of positive changes in participants' gender-equitable attitudes and behaviours, as well as reduced gender-based violence.
Fourth R, Canada
This programme is based on the belief that relationship knowledge and skills can and should be taught in the same way as reading, writing and arithmetic, which gives the program its name. The programme is taught with children of grades 8 - 12.
A five-year randomized controlled trial of the classes with Grade 9 students (aged between 14 and 15) found that, compared with students who received standard health classes, students (especially boys) who received the Fourth R committed significantly fewer acts of violence towards a dating partner by the end of Grade 11.
Second Step, United States
The Second Step program teaches skills such as communication, coping and decision-making, with the objective of helping young people navigate peer pressure, substance abuse, and in-person and online bullying. The programme has been used with more than 8 million students in over 32,000 American schools.
A two-year cluster-randomized clinical trial of Second Step was carried out with over 3,600 students in Grades 6 and 7 (aged 11–13) at 36 middle schools in Illinois and Kansas. At the end of the programme, the study found that students in Illinois intervention schools were 39 per cent less likely to report sexual violence perpetration and 56 per cent less likely to self-report homophobic name-calling victimization than students in control schools. There was, however, no significant difference in the Kansas schools.
The Gender Equity Movement in Schools (GEMS), India
The GEMS project used extracurricular activities, role-playing and games. It began in the sixth grade and worked for two years with boys and girls aged 12–14 in public schools in Goa, Kota and Mumbai. In Goa and Kota, it was layered onto the ongoing school curriculum. In Mumbai, it was run as an independent pilot project in 45 schools.
The pilot was evaluated using a quasi-experimental design to assess the programme's effects on students. Over the course of the programme, the study found that participating students were more supportive of girls pursuing higher education and marrying later in life, and of boys and men contributing to household tasks. However, results were mixed for an important component of GEMS: students' behaviours and attitudes around reducing violence. Following the success of the first pilot, the GEMS approach will now be carried out in up to 250 schools in Mumbai. It is also being rolled out in 20 schools in Viet Nam.
See also
Gender equity in education
Gender mainstreaming in teacher education policy
Educational inequality
Education sector responses to LGBT violence
Sexual harassment in education
School-related gender-based violence (SRGBV)
References
Free content from UNESCO
Gender and education
Curricula
Gender equality
Contextual learning
Contextual learning is based on a constructivist theory of teaching and learning. Learning takes place when teachers are able to present information in such a way that students are able to construct meaning based on their own experiences. Contextual learning experiences include internships, service learning and study abroad programs.
Contextual learning has the following characteristics:
emphasizing problem solving
recognizing that teaching and learning need to occur in multiple contexts
assisting students in learning how to monitor their learning and thereby become self-regulated learners
anchoring teaching in the diverse life context of students
encouraging students to learn from each other
employing authentic assessment
Key elements
Current perspectives on what it means for learning to be contextualized include:
situated cognition – all learning is applied knowledge
social cognition – intrapersonal constructs
distributed cognition – constructs that are continually shaped by other people and things outside the individual
Constructivist learning theory maintains that learning is a process of constructing meaning from experience.
Benefits
Both direct instruction and constructivist activities can be compatible and effective in the achievement of learning goals.
Increasing one's effort results in greater ability. This theory opposes the notion that one's aptitude is unchangeable. Striving for learning goals motivates an individual to engage in activities with a commitment to learning.
Children learn the standards, values, and knowledge of society by raising questions and accepting challenges to find solutions that are not immediately apparent. Other learning processes involve explaining concepts, justifying reasoning and seeking information. Learning is therefore a social process, which requires social and cultural factors to be considered during instructional planning. This social nature of learning also drives the determination of learning goals.
Knowledge and learning are situated in particular physical and social contexts. A range of settings may be used, such as the home, the community, and the workplace, depending on the purpose of instruction and the intended learning goals.
Knowledge may be viewed as distributed or stretched over the individual, other persons, and various artifacts such as physical and symbolic tools and not solely as a property of individuals. Thus, people, as an integral part of the learning process, must share knowledge and tasks.
Assessment
One of the main goals of contextual learning is to develop an authentic task to assess performance. Creating an assessment in a context can help to guide the teacher to replicate real-world experiences and make necessary inclusive design decisions. Contextual learning can be used as a form of formative assessment and can help give educators a stronger profile on how the intended learning goals, standards and benchmarks fit the curriculum. It is essential to establish and align the intended learning goals of the contextual task at the beginning to create a shared understanding of what success looks like. Self-directed theory states that humans by nature seek purpose and the desire to make a contribution and to be part of a cause greater and more enduring than oneself. Contextual learning can help bring relevance and meaning to the learning, helping students relate to the world they live in.
Questions to address when defining and developing a contextual task
Does the task fulfill the intended learning goals?
Does the task involve problems that require the students to use their knowledge creatively to find a solution?
Is the task an engaging learning experience?
Is the audience as authentic as possible?
Does the task require students to use processes, products and procedures that simulate those used by people working in a similar field?
Is the task inclusive?
Are there clear criteria for students on how the product, performance or service will be evaluated?
Are there models of excellence which demonstrate standards?
Are the students involved in the assessment process?
Is there a provision made for continuous formative feedback, from oneself, from teachers and peers to help the students improve?
Is there an opportunity for student choice and ownership to the extent appropriate for the task?
GRASPS Concept Wheel
See also
Context-based learning
Experiential learning
References
Alternative education
Hierarchical proportion
Hierarchical proportion is a technique used in art, mostly in sculpture and painting, in which the artist uses unnatural proportion or scale to depict the relative importance of the figures in the artwork.
For example, in ancient Egyptian art, people of higher status were sometimes drawn or sculpted larger than those of lower status.
During the Dark Ages, people of higher status were depicted with larger proportions than serfs. During the Renaissance, images of the human body began to change, as proportion was used to depict the reality an artist interpreted.
Gallery
See also
Art movement
Creativity techniques
List of art media
List of artistic media
List of art movements
List of most expensive paintings
List of most expensive sculptures
List of art techniques
List of sculptors
References
Citations
Bibliography
Artforms by Preble, Preble, Frank; Prentice Hall 2004
External links
Gifts for the Gods: Images from Egyptian Temples, a fully digitized exhibition catalog from The Metropolitan Museum of Art Libraries, which contains material on hierarchical proportion
Artistic techniques
Expression problem
The expression problem is a challenging problem in programming languages that concerns the extensibility and modularity of statically typed data abstractions. The goal is to define a data abstraction that is extensible both in its representations and its behaviors, where one can add new representations and new behaviors to the data abstraction, without recompiling existing code, and while retaining static type safety (e.g., no casts). The statement of the problem exposes deficiencies in programming paradigms and programming languages, and is still considered unsolved, although there are many proposed solutions.
History
Philip Wadler formulated the challenge and named it "The Expression Problem" in response to a discussion with Rice University's Programming Languages Team (PLT).
He also cited three sources that defined the context for his challenge:
The problem was first observed by John Reynolds in 1975. Reynolds discussed two forms of Data Abstraction: User-defined Types, which are now known as Abstract Data Types (ADTs) (not to be confused with Algebraic Data Types), and Procedural Data Structures, which are now understood as a primitive form of Objects with only one method. He argued that they are complementary, in that User-defined Types could be extended with new behaviors, and Procedural Data Structures could be extended with new representations. He also discussed related work going back to 1967.
Fifteen years later in 1990, William Cook applied Reynolds's idea in the context of Objects and Abstract Data Types, which had both grown extensively. Cook identified the matrix of representations and behaviors that is implicit in a Data Abstraction, and discussed how ADTs are based on the behavioral axis, while Objects are based on the representation axis. He provided extensive discussion of work on ADTs and Objects relevant to the problem, reviewed implementations in both styles, discussed extensibility in both directions, and identified the importance of static typing. Most importantly, he discussed situations in which there was more flexibility than Reynolds considered, including internalization and optimization of methods.
At ECOOP '98, Shriram Krishnamurthi et al. presented a design pattern solution to the problem of simultaneously extending an expression-oriented programming language and its tool-set. They dubbed it the "expressivity problem" because they thought programming language designers could use the problem to demonstrate the expressive power of their creations. For PLT, the problem had shown up in the construction of DrScheme, now DrRacket, and they solved it via a rediscovery of mixins. To avoid using a programming language problem in a paper about programming languages, Krishnamurthi et al. used an old geometry programming problem to explain their pattern-oriented solution. In conversations with Felleisen and Krishnamurthi after the ECOOP presentation, Wadler understood the PL-centric nature of the problem and he pointed out that Krishnamurthi's solution used a cast to circumvent Java's type system. The discussion continued on the types mailing list, where Corky Cartwright (Rice) and Kim Bruce (Williams) showed how type systems for OO languages might eliminate this cast. In response Wadler formulated his essay and stated the challenge, "whether a language can solve the expression problem is a salient indicator of its capacity for expression." The label "expression problem" puns on expression = "how much can your language express" and expression = "the terms you are trying to represent are language expressions".
Others co-discovered variants of the expression problem around the same time as Rice University's PLT, in particular Thomas Kühne in his dissertation, and Smaragdakis and Batory in a parallel ECOOP 98 article.
Some follow-up work used the expression problem to showcase the power of programming language designs.
The expression problem is also a fundamental problem in multi-dimensional Software Product Line design and in particular as an application or special case of FOSD Program Cubes.
Solutions
There are various solutions to the expression problem. Each solution varies in the amount of code a user must write to implement them, and the language features they require.
Multiple dispatch
Coproducts of functors
Type classes
Tagless-final / Object algebras
Polymorphic Variants
Example
Problem description
We can imagine we do not have the source code for the following library, written in C#, which we wish to extend:
public interface IEvalExp
{
int Eval();
}
public class Lit : IEvalExp
{
public Lit(int n)
{
N = n;
}
public int N { get; }
public int Eval()
{
return N;
}
}
public class Add : IEvalExp
{
public Add(IEvalExp left, IEvalExp right)
{
Left = left;
Right = right;
}
public IEvalExp Left { get; }
public IEvalExp Right { get; }
public int Eval()
{
return Left.Eval() + Right.Eval();
}
}
public static class ExampleOne
{
public static IEvalExp AddOneAndTwo() => new Add(new Lit(1), new Lit(2));
public static int EvaluateTheSumOfOneAndTwo() => AddOneAndTwo().Eval();
}
Using this library we can express the arithmetic expression 1 + 2 as we did in AddOneAndTwo() and can evaluate the expression by calling EvaluateTheSumOfOneAndTwo(). Now imagine that we wish to extend this library. Adding a new type is easy because we are working with an object-oriented programming language; for example, we might create the following class:
public class Mult : IEvalExp
{
public Mult(IEvalExp left, IEvalExp right)
{
Left = left;
Right = right;
}
public IEvalExp Left { get; }
public IEvalExp Right { get; }
public int Eval()
{
return Left.Eval() * Right.Eval();
}
}
However, if we wish to add a new function over the IEvalExp type (a new method, in C# terminology), we have to change the IEvalExp interface and then modify all the classes that implement it. Another possibility is to create a new interface that extends IEvalExp and then create sub-types for the Lit, Add and Mult classes, but the expression returned in AddOneAndTwo() has already been compiled, so we will not be able to use the new function over the old type. The problem is reversed in functional programming languages like F#, where it is easy to add a function over a given type, but extending or adding types is difficult.
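That reversed trade-off can be sketched in C# itself, using a closed set of record cases and pattern matching (a hypothetical ExpData hierarchy, separate from the library above, with illustrative names): adding a new function is just one more switch expression, but adding a new case would force every existing function to be edited.

```csharp
// Hypothetical ADT-style encoding, illustrating the functional side of the trade-off.
public abstract record ExpData;
public sealed record LitData(int N) : ExpData;
public sealed record AddData(ExpData Left, ExpData Right) : ExpData;

public static class ExpFunctions
{
    // Adding a new function over ExpData is easy: one new method, no case touched.
    public static int Eval(ExpData e) => e switch
    {
        LitData l => l.N,
        AddData a => Eval(a.Left) + Eval(a.Right),
        _ => throw new System.ArgumentException("unknown case")
    };

    public static string Print(ExpData e) => e switch
    {
        LitData l => l.N.ToString(),
        AddData a => Print(a.Left) + " + " + Print(a.Right),
        _ => throw new System.ArgumentException("unknown case")
    };

    // But adding a new case such as MultData would require editing Eval, Print,
    // and every other function over ExpData: the mirror image of the
    // object-oriented situation described above.
}
```

Here ExpFunctions.Eval(new AddData(new LitData(1), new LitData(2))) evaluates 1 + 2; the hierarchy and names are assumptions for illustration, not part of the original library.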
Problem Solution using Object Algebra
Let us redesign the original library with extensibility in mind using the ideas from the paper Extensibility for the Masses.
public interface ExpAlgebra<T>
{
T Lit(int n);
T Add(T left, T right);
}
public class ExpFactory : ExpAlgebra<IEvalExp>
{
public IEvalExp Lit(int n)
{
return new Lit(n);
}
public IEvalExp Add(IEvalExp left, IEvalExp right)
{
return new Add(left, right);
}
}
public static class ExampleTwo<T>
{
public static T AddOneToTwo(ExpAlgebra<T> ae) => ae.Add(ae.Lit(1), ae.Lit(2));
}
We use the same implementation as in the first code example but now add a new interface, ExpAlgebra<T>, containing the functions over the type, as well as a factory for the algebra. Notice that we now generate the expression in ExampleTwo using the ExpAlgebra<T> interface instead of directly from the types. We can now add a function by extending the interface; we will add functionality to print the expression:
public interface IPrintExp : IEvalExp
{
string Print();
}
public class PrintableLit : Lit, IPrintExp
{
public PrintableLit(int n) : base(n)
{
N = n;
}
public int N { get; }
public string Print()
{
return N.ToString();
}
}
public class PrintableAdd : Add, IPrintExp
{
public PrintableAdd(IPrintExp left, IPrintExp right) : base(left, right)
{
Left = left;
Right = right;
}
public new IPrintExp Left { get; }
public new IPrintExp Right { get; }
public string Print()
{
return Left.Print() + " + " + Right.Print();
}
}
public class PrintFactory : ExpFactory, ExpAlgebra<IPrintExp>
{
public IPrintExp Add(IPrintExp left, IPrintExp right)
{
return new PrintableAdd(left, right);
}
public new IPrintExp Lit(int n)
{
return new PrintableLit(n);
}
}
public static class ExampleThree
{
public static int Evaluate() => ExampleTwo<IPrintExp>.AddOneToTwo(new PrintFactory()).Eval();
public static string Print() => ExampleTwo<IPrintExp>.AddOneToTwo(new PrintFactory()).Print();
}
Notice that in ExampleThree we are printing an expression that was already compiled in ExampleTwo; we did not need to modify any existing code. Notice also that this is still strongly typed: we do not need reflection or casting. If we replaced the PrintFactory with the ExpFactory in ExampleThree we would get a compilation error, since the Print() method does not exist in that context.
See also
Applications of FOSD program cubes
Generic programming
POPLmark challenge
References
External links
The Expression Problem by Philip Wadler.
Lecture: The Expression Problem by Ralf Lämmell.
C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - The Expression Problem at Channel 9.
Independently Extensible Solutions to the Expression Problem, Matthias Zenger and Martin Odersky, EPFL Lausanne.
Programming language design
Unsolved problems in computer science
Articulation (architecture)
Articulation, in art and architecture, is a method of styling the joints in the formal elements of architectural design. Through degrees of articulation, each part is united with the whole work by means of a joint in such a way that the joined parts are put together in styles ranging from exceptionally distinct jointing to the opposite of high articulation—fluidity and continuity of joining. In highly articulated works, each part is defined precisely and stands out clearly. The articulation of a building reveals how the parts fit into the whole by emphasizing each part separately.
Continuity and fusion
The opposite of distinct articulation is continuity and fusion which reduces the separateness of the parts. Distinct articulation emphasizes the "strategic break" while the articulation of continuity concentrates on smooth transitions. Continuity (or fusion) reduces the independence of the elements and focuses on the largest element of the whole, while reducing focus on the other independent elements.
Articulation and space
Architecture is said to be the art of the articulation of spaces. Geometry is the architect's basic tool, but it is not the architect's system of communication; that system is the defining of objects in the surrounding space. Articulation is the geometry of form and space.
Examples
Romanesque architecture
Vertical wall articulation set Romanesque churches apart from their predecessors. Dividing the church height into bays using pilasters gave the interior space a new vertical unity. It also added a new three-dimensional vision by using the horizontal line of the arcade and clerestory. The use of the compound pier allowed the wall columns to rise, together with other shafts supporting walls, such as those supporting arches and aisle vaults, into three or more levels.
Sydney Opera House
This structure is a combination of both articulation and fusion styles. Although the "wings" of the opera house stand articulated from the whole, within the wings the ribs of the structure have been fused, or made continuous, by covering the structure with a smooth surface. The smooth covering creates in the process other, larger symbolic forms in rhythmic succession on the roof. (See photo of completed structure in the gallery below.)
The result here is sensuous, related to both earth and sky, as the fused forms are more natural in form than are sharp angles with strong definition. The sharper forms, the result of the smooth surface fusion, intrude with sharp articulation into the sky.
Casa da Música
The design of Casa da Música in Portugal produced a building in which the formal intellectual underpinning is equalled by the continuity, the fusion of forms, in its attempt to achieve sensual beauty. Its emotionality comes through in its exuberant external design where articulation in structure has been overwhelmed by continuity and fusion.
Guggenheim Museum Bilbao
In this structure, fusion and continuity dominate over articulation. The organically shaped curves on the building have been designed to appear random. According to the architect, "the randomness of the curves are designed to catch the light". Thus there is an interaction between space (environment) and form.
Articulation vs. continuity
The articulated form emphasizes the building's distinct parts. Articulation accentuates the visible aspect of the different parts of a building. Sometimes the effect completely obscures the sense of the whole, breaking it down into too many pieces, but in most cases the articulation expresses a balance between the two.
The result is often a potential sensuality, as the fused forms are closer to the form of the human body than are sharp angles with strong definition.
A highly articulated art form expresses its culture's sense of its place in the world. In architecture, spatial organization or articulation serves the following uses:
Uses of articulation
Movement and circulation
Uses and accessibility
Sequence and succession
Symbolism and meaning
Highly articulated buildings
Seagram Building
Centre Georges Pompidou
The Fuji Television building in Tokyo (architectural style: Structural Expressionism), designed by architect Kenzo Tange, is a highly articulated structure; its distinct elements combine to form a rigid-looking whole.
References
External links
Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production
Rethinking Architecture: A Reader in Cultural Theory
Architecture Theory Since 1968
"articulated form"
Architectural elements
Architectural design
Digital media in education
Digital media in education refers to an individual's ability to access, analyze, evaluate, and create media content and communication in various forms. This includes the use of multiple digital software tools, devices, and platforms for learning. The integration of digital media in education has increased over time, rivaling books as a primary means of communication and gradually transforming traditional educational practices.
History
20th century
Technological advances, including the invention of the Internet in the late 20th century, introduced the possibility of incorporating technology into education. In the early 1900s, the overhead projector was used as an educational tool, along with on-air classes available via radio. The first use of computers in classrooms occurred in 1950, when a flight simulation program was used to train pilots at the Massachusetts Institute of Technology. However, access to computers remained extremely limited. In 1964, researchers John Kemeny and Thomas Kurtz developed a new computer language called BASIC, which was easier to learn and popularized time-sharing, enabling multiple students to use a computer simultaneously. By the 1980s, schools began to show more interest in computers as companies released mass-market devices to the public. Networking further facilitated the connection of computers into a single communication system, which was both more efficient and cost-effective than previous stand-alone machines, prompting widespread adoption in schools.
By 1999, 99% of public school teachers in the United States reported access to at least one computer in their schools, and 84% had access to a computer in their classroom. The World Wide Web, invented in the early 1990s, simplified internet navigation and sparked further interest in educational settings. Computers were initially integrated into school curricula for tasks such as word processing, spreadsheet creation, and data organization. By the late 1990s, the Internet had become a research tool, functioning as a vast library resource.
The World Wide Web also led to the development of learning management systems, which allowed educators to create online teaching environments for content storage, student activities, discussions, and assignments. Advances in digital compression and high-speed Internet made video creation and distribution more affordable, contributing to the rise of systems designed for recording lectures. These systems were often incorporated into learning management platforms, supporting the growth of fully online courses.
21st century
By 2002, the Massachusetts Institute of Technology began offering recorded lectures to the public, marking a significant step toward accessible online education. The creation of YouTube in 2005 further revolutionized educational content distribution. Many educators started uploading lectures and instructional videos, with platforms like Khan Academy, which began posting on YouTube in 2006, helping to establish the site as a valuable educational tool. In 2007, Apple launched iTunesU, another platform for sharing educational resources and videos. Meanwhile, learning management systems gained popularity, with Blackboard and Canvas becoming two of the most widely used platforms after Canvas's release in 2008. That same year saw the introduction of the first Massive Open Online Course (MOOC), which offered webinars and expert posts accessible to anyone.
As technology evolved, traditional projectors were gradually replaced by interactive whiteboards, which enabled teachers to integrate digital tools more effectively in their classrooms. By 2009, 97% of U.S. classrooms had at least one computer, and 93% had Internet access.
The COVID-19 pandemic, which forced schools across the world to close, significantly impacted education with schools shifting to distance education. Students attended classes remotely using devices such as laptops, phones, and tablets, utilizing digital platforms as tools for creating at-home learning environments.
Some schools faced challenges in adapting assessments and exams to the new learning environment. In a study by Eddie M. Mulenga and José M. Marbán on Zambian students during the pandemic, students struggled to adapt to online learning in subjects like mathematics, as they were unprepared for the unfamiliar digital platforms. Similar issues were observed among students in Romania, where the transition to virtual learning presented significant obstacles in engagement and adaptation.
References
Digital media
Educational technology
Critical criminology
Critical criminology applies critical theory to criminology. Critical criminology examines the genesis of crime and the nature of justice in relation to factors such as class and status. Law and the penal system are viewed as founded on social inequality and meant to perpetuate such inequality. Critical criminology also looks for possible biases in criminological research.
Critical criminology sees crime as a product of oppression of workers – in particular, those in greatest poverty – and less-advantaged groups within society, such as women and ethnic minorities, are seen to be the most likely to suffer oppressive social relations based upon class division, sexism and racism. More simply, critical criminology may be defined as any criminological topic area that takes into account the contextual factors of crime or critiques topics covered in mainstream criminology.
Convict criminology
Convict criminology, which is one type of critical criminology, emerged in the United States during the late 1990s. It offers an alternative epistemology on crime, criminality and punishment. Scholarship is conducted by PhD-trained former prisoners, prison workers and others who share a belief that in order to be a fully rounded discipline, mainstream criminology needs to be informed by input from those with personal experience of life in correctional institutions. Contributions from academics who are aware of the day-to-day realities of incarceration, the hidden politics that infuse prison administration, and the details and the nuances of prison language and culture, have the potential significantly to enrich scholarly understanding of the corrections system. In addition, convict criminologists have been active in various aspects of correctional reform advocacy, particularly where prisoner education is concerned.
Socially contingent definitions of crime
It can also rest upon the fundamental assertion that definitions of what constitute crimes are socially and historically contingent, that is, what constitutes a crime varies in different social situations and different periods of history.
For example, homosexuality was illegal in the United Kingdom up to 1967 when it was legalized for men over 21. If the act itself remained the same, how could its 'criminal qualities' change such that it became legal? What this question points out to us is that acts do not, in themselves, possess 'criminal qualities', that is, there is nothing inherent that makes any act a crime other than that it has been designated a crime in the law that has jurisdiction in that time and place.
Whilst there are many variations on the critical theme in criminology, the term critical criminology has become a guiding principle for perspectives that take to be fundamental the understanding that certain acts are crimes because certain people have the power to make them so. The reliance on what has been seen as the oppositional paradigm, administrational criminology, which tends to focus on the criminological categories that governments wish to highlight (mugging and other street crime, violence, burglary, and, as many critical criminologists would contend, predominantly the crimes of the poor) can be questioned.
The gap between what these two paradigms suggest is of legitimate criminological interest, is shown admirably by Stephen Box in his book Power, Crime, and Mystification where he asserts that one is seven times more likely (or was in 1983) to be killed as a result of negligence by one's employer, than one was to be murdered in the conventional sense (when all demographic weighting had been taken into account).
Yet, to this day, no one has ever been prosecuted for corporate manslaughter in the UK. The effect of this, critical criminologists tend to claim, is that conventional criminologies fail to 'lay bare the structural inequalities which underpin the processes through which laws are created and enforced' and that 'deviancy and criminality' are 'shaped by society's larger structure of power and institutions'. Further failing to note that power represents the capacity 'to enforce one's moral claims' permitting the powerful to 'conventionalize their moral defaults' legitimizing the processes of 'normalized repression'. Thus, fundamentally, critical criminologists are critical of state definitions of crime, choosing instead to focus upon notions of social harm or human rights.
Conflict theories
According to criminologists working in the conflict tradition, crime is the result of conflict within societies that is brought about through the inevitable processes of capitalism. Dispute exists between those who espouse a 'pluralist' view of society and those who do not. Pluralists, following writers like Mills (1956, 1969), believe that power is exercised in societies by groups of interested individuals (businesses, faith groups, and government organizations, for example) vying for influence and power to further their own interests. Criminologists like Vold have been called 'conservative conflict theorists'. They hold that crime may emerge from economic differences, differences of culture, or from struggles concerning status, ideology, morality, religion, race or ethnicity. These writers believe that such groups, by claiming allegiance to mainstream culture, gain control of key resources permitting them to criminalize those who do not conform to their moral codes and cultural values (Sellin 1938; Vold 1979 [1958]; Quinney 1970, inter alia). These theorists therefore see crime as having roots in symbolic or instrumental conflict occurring at multiple sites within a fragmented society.
Others are of the belief that such 'interests', particularly symbolic dimensions such as status, are epiphenomenal by-products of more fundamental economic conflict (Taylor, Walton & Young 1973; Quinney 1974, for example). For these theorists, the societal conflict from which crime emerges is founded on the fundamental economic inequalities that are inherent in the processes of capitalism (see, for example, Wikipedia article on Rusche and Kirchheimer's Punishment and Social Structure, a book that provides a seminal exposition of Marxian analysis applied to the problem of crime and punishment). Drawing on the work of Marx (1990 [1868]); Engels (1984 [1845]); and Bonger (1969 [1916]), among others, such critical theorists suggest that the conditions in which crime emerges are caused by the appropriation of the benefits of others' labor through the generation of what is known as surplus value, concentrating disproportionate wealth and power in the hands of the few owners of the means of production.
There are two main strands of critical criminological theory following from Marx, divided by differing conceptions of the role of the state in the maintenance of capitalist inequalities. On the one hand, instrumental Marxists hold that the state is manipulated by the ruling classes to act in their interests. On the other, structuralist Marxists believe that the state plays a more dominant, semi-autonomous role in subjugating those in the (relatively) powerless classes (Sheley 1985; Lynch & Groves 1986). Instrumental Marxists such as Quinney (1975), Chambliss (1975), or Krisberg (1975) believe that capitalist societies are monolithic edifices of inequality, utterly dominated by powerful economic interests. Power and wealth are divided inequitably between the owners of the means of production and those who have only their labor to sell. The wealthy use the state's coercive powers to criminalize those who threaten to undermine that economic order and their position in it. Structural Marxist theory (Spitzer 1975; Greenberg 1993 [1981]), on the other hand, holds that capitalist societies exhibit a dual power structure in which the state is more autonomous. Through its mediating effect it ameliorates the worst aspects of capitalist inequalities; however, it works to preserve the overall capitalist system of wealth appropriation, criminalizing those who threaten the operation of the system as a whole. This means that the state can criminalize not only the powerless who protest the system's injustices but also those excessive capitalists whose conduct threatens to expose the veneer of legitimacy of capitalist endeavor.
Whereas Marxists have conventionally believed in the replacement of capitalism with socialism in a process that will eventually lead to communism, anarchists take the view that any hierarchical system is inevitably flawed. Such theorists (Pepinsky 1978; Tifft & Sullivan 1980; Ferrell 1994 inter alia) espouse an agenda of defiance of existing hierarchies, encouraging the establishment of systems of decentralised, negotiated community justice in which all members of the local community participate. Recent anarchist theorists like Ferrell attempt to locate crime as resistance both to its social construction through symbolic systems of normative censure and to its more structural construction as a threat to the state and to capitalist production.
In a move diametrically opposed to that of anarchist theorists, Left Realists wish to distance themselves from any conception of the criminal as heroic social warrior. Instead they are keen to privilege the experience of the victim and the real effects of criminal behaviour. In texts such as Young 1979 & 1986, Young and Matthews 1991, Lea and Young 1984 or Lowman & MacLean 1992, the victim, the state, the public, and the offender are all considered as a nexus of parameters within which talk about the nature of specific criminal acts may be located. Whilst left realists tend to accept that crime is a socially and historically contingent category that is defined by those with the power to do so, they are at pains to emphasise the real harms that crime does to victims who are frequently no less disadvantaged than the offenders.
All of the above conflict perspectives see individuals as inequitably constrained by powerful and largely immutable structures, although they accord humans varying degrees of agency. Ultimately, however, the relatively powerless are seen as repressed by societal structures of governance or economics. Even left realists, who have been criticised for being 'conservative' (not least by Cohen 1990), see the victim and the offender as subject to systems of injustice and deprivation from which victimising behaviour emerges.
It is important to keep in mind that conflict theory, while derived from Marxism, is distinct from it. Marxism is an ideology and accordingly is not empirically tested. Conflict theory, conversely, is empirically falsifiable, and in this respect the two are distinct.
Criticism
Conflict criminologies have come under sustained attack from several quarters, not least from those – left realists – who claim to be within its ranks. Early criminologies, pejoratively referred to as 'left idealist' by Jock Young (1979), were never really popular in the United States, where critical criminology departments at some universities were closed for political reasons (Rock 1997). These early criminologies were called into question by the introduction of large-scale victimisation surveys (Hough & Mayhew 1983), which showed that victimisation was intra-class rather than inter-class. Thus notions that crimes like robbery were somehow primitive forms of wealth redistribution were shown to be false. Further attacks emanated from feminists, who maintained that the victimisation of women was no trivial matter, and that the left idealists' concentration on the crimes of the working classes – which could be seen as politically motivated – ignored crimes such as rape, domestic violence, and child abuse (Smart 1977). Furthermore, it was claimed, left idealists neglected the comparative aspect of the study of crime: they ignored the significant quantities of crime in socialist societies and the low crime levels in capitalist societies like Switzerland and Japan (Inciardi 1980).
Feminist theories
Feminism in criminology is more than the mere insertion of women into masculine perspectives of crime and criminal justice, for this would suggest that conventional criminology was positively gendered in favour of the masculine. Feminists contend that previous perspectives are un-gendered and as such ignore the gendered experiences of women. Feminist theorists are engaged in a project to bring a gendered dimension to criminological theory. They are also engaged in a project to bring to criminological theory insights to be gained from an understanding of taking a particular standpoint, that is, the use of knowledge gained through methods designed to reveal the experience of the real lives of women.
The primary claim of feminists is that social science in general, and criminology in particular, represents a male perspective upon the world, in that it focuses largely upon the crimes of men against men. Moreover, arguably the most significant criminological fact of all, namely that women commit significantly less crime than men, is hardly engaged with, either descriptively or in explanatory terms, in the literature. In other words, explanatory models developed to explain male crime are assumed to be generalizable to women in the face of extraordinary evidence to the contrary. The conclusion that must be drawn is not only that those theories cannot be generalized to women, but that this failure might suggest they do not adequately explain male crime either (Caulfield and Wonders 1994).
A second aspect of the feminist critique centers upon the notion that even where women have become criminologists, they have adopted 'malestream' modes of research and understanding; that is, they have joined and been assimilated into the modes of working of the masculine paradigm, rendering it simultaneously gender blind and biased (Menzies & Chunn 1991). However, as Menzies and Chunn argue, it is not adequate merely to 'insert' women into 'malestream' criminology; it is necessary to develop a criminology from the standpoint of women. At first glance this may appear to be gender biased against the needs and views of men. However, this claim is based on a position developed by Nancy Hartsock known as standpoint feminism. Drawing on the work of Marx, Hartsock suggests that the view of the world from womanhood is a 'truer' vision than that from the viewpoint of man. According to Marx (Marx 1964, Lukács 1971), privilege blinds people to the realities of the world, meaning that the powerless have a clearer view of it: the poor see the wealth of the rich and their own poverty, whilst the rich are inured to, shielded from, or in denial about the sufferings of the poor. Hartsock (1983 & 1999) argues that women are in precisely the same position as Marx's poor. From their position of powerlessness they are more capable of revealing the truth about the world than any 'malestream' paradigm ever can be. Thus there are two key strands in feminist criminological thought: that criminology can be made gender aware and thus gender neutral, or that criminology must be gender positive and adopt standpoint feminism.
Cutting across these two distinctions, feminists can be placed largely into four main groupings: liberal, radical, Marxist, and socialist (Jaggar 1983). Liberal feminists are concerned with discrimination on the grounds of gender and its prevalence in society and seek to end such discrimination. Such ends are sought through engagement with existing structures such as governments and legal frameworks, rather than by challenging modes of gender construction or hegemonic patriarchy (Hoffman Bustamante 1973, Adler 1975, Simon 1975, Edwards 1990). Thus liberal feminists are more or less content to work within the system to change it from within using its existing structures.
Critical feminists – radical feminists, Marxists, and socialists – are keen to stress the need to dispense with masculine systems and structures. Radical feminists see the roots of female oppression in patriarchy, perceiving its perpetrators as primarily aggressive in both private and public spheres, violently dominating women by control of their sexuality through pornography, rape (Brownmiller 1975), and other forms of sexual violence, thus imposing upon them masculine definitions of womanhood and women's roles, particularly in the family. Marxist feminists (Rafter & Natalizia 1981, MacKinnon 1982 & 1983), however, hold that such patriarchal structures emerge from the class inequalities inherent in capitalist means of production. The production of surplus value requires that the man who works in the capitalist's factory, pit, or office be supported by a secondary, unpaid worker – the woman – who keeps him fit for his labours by providing the benefits of a home: food, keeping house, raising his children, and other comforts of family. Thus, merely in order to be fit to sell his labour, the proletarian man needs to 'keep' a support worker with the already meagre proceeds of his labour. Hence women are left with virtually no economic resources and are thus seen to exist within an economic trap that is an inevitable outcome of capitalist production. Socialist feminists attempt to steer a path between the radical and the Marxist views, identifying capitalist patriarchy as the source of women's oppression (Danner 1991). Such theorists (Eisenstein 1979, Hartmann 1979 & 1981, Messerschmidt 1986, Currie 1989) accept that a patriarchal society constrains women's roles and their view of themselves, but hold that this patriarchy is the result not of male aggression but of the mode of capitalist production. Thus neither capitalist production nor patriarchy is privileged in the production of women's oppression, powerlessness, and economic marginalization.
Socialist feminists believe that gender based oppression can only be overcome by creating a non-patriarchal, non-capitalist society, and that attempting merely to modify the status quo from within perpetuates the very system that generates inequalities.
Of significant importance in understanding the positions of most of the feminists above is that gender is taken to be a social construct. That is, the differences between men and women are by and large not biological (as essentialism would hold) but are socialized from an early age and defined by existing patriarchal categories of womanhood. In the face of this pacifying or passive image of women, feminist criminologists wish to generate a discursive and real (extended) space within which expressions of women's own views of their identity and womanhood may emerge.
There are many forms of criticism leveled at feminist criminology, some 'facile' (Gelsthorpe 1997), such as those of Bottomley & Pease (1986) or Walker (1987), who suggest that feminist thinking is irrelevant to criminology. A major strand of criticism targets its alleged ethnocentrism (Rice 1990, Mama 1989, Ahluwalia 1991): in its silence on the experience of black women, it is as biased as male criminology in its ignorance of the experience of women. Criminology, claim these writers, is sexist and racist, and both errors need to be corrected. A significant number of criticisms are leveled at feminist criminology by Pat Carlen in an important paper from 1992 (Carlen 1992). Among Carlen's criticisms is an apparent inability of feminist criminology to reconcile theoretical insight with political reality, exhibiting a "theoreticist, libertarian, separatist and gender-centric tendenc[y]". She suggests that this libertarianism reflects itself in a belief that crime reduction policies can be achieved without some form of 'social engineering'. Further criticizing feminism's libertarian streak, Carlen suggests that the feminists' injunction to allow women to speak for themselves reveals a separatist tendency, arguing that what feminists call for is merely good social science and should be extended to let all classes of humans speak for themselves. This separatism, claims Carlen, further manifests itself in a refusal to accept developments in mainstream criminology, branding them 'malestream' or in other pejorative terms. Perhaps the most damning criticism of feminism, and of certain stripes of radical feminism in particular, is that in some aspects of western societies it has itself become the dominant interest group, with powers to criminalize masculinity (see Nathanson & Young 2001).
Postmodern theories
In criminology, the postmodernist school applies postmodernism to the study of crime and criminals, and understands "criminality" as a product of the power to limit the behaviour of those individuals excluded from power, but who try to overcome social inequality and behave in ways which the power structure prohibits. It focuses on the identity of the human subject, multiculturalism, feminism, and human relationships to deal with the concepts of "difference" and "otherness" without essentialism or reductionism, but its contributions are not always appreciated (Carrington 1998). Postmodernists shift attention from Marxist concerns of economic and social oppression to linguistic production, arguing that criminal law is a language to create dominance relationships. For example, the language of courts (the so-called "legalese") expresses and institutionalises the domination of the individual, whether accused or accuser, criminal or victim, by social institutions. According to postmodernist criminology, the discourse of criminal law is dominant, exclusive and rejecting, less diverse, and culturally not pluralistic, exaggerating narrowly defined rules for the exclusion of others.
References
External links
Critical Criminology Division - American Society of Criminology
Critical Criminology web site
Criminology
Point of difference

A point of difference is a factor of products or services that establishes differentiation. Differentiation is the way in which the goods or services of a company differ from its competitors. Indicators of the point of difference's success would be increased customer benefit and brand loyalty. However, an excessive degree of differentiation could cause the goods or services to lose their standard within a given industry, leading to a subsequent loss of consumers. Hence, a balance of differentiation and association is required, and a point of parity has to be adopted in order to allow a business to remain or further enhance its competitiveness.
Significance of differentiation
Standing out from the competitors
By differentiating itself from competitors, a business might have greater potential income, because differentiated goods or services limit the choices of consumers, driving them to purchase from a particular company. In addition, the threat posed by competitors is lowered significantly, which means that by adopting a differentiation strategy a business can become more competitive and secure a greater source of income.
Effects on brand loyalty
As the choices of consumers are reduced significantly by a differentiation strategy, consumers are unlikely to accept other variations without being convinced. This drives consumers to lean towards a particular company and to establish a better relationship with it. Thus, businesses are able to benefit from brand loyalty and further enhance their competitiveness.
Point of difference vs. point of parity
Points of difference and points of parity are both utilized in the positioning of a brand for competitive advantage via brand/product.
Points-of-difference (PODs) – Attributes or benefits consumers strongly associate with a brand, positively evaluate and believe they could not find to the same extent with a competing brand, i.e. points where you are claiming superiority or exclusiveness over other products in the category.
Points-of-parity (POPs) – Associations that are not necessarily unique to the brand but may be shared by other brands, i.e. where you can at least match the competitors' claimed best. While POPs may not usually be the reason to choose a brand, their absence can certainly be a reason to drop a brand.
While it is important to establish a POD, it is equally important to nullify the competition by matching them on the POP. As a late entrant into the market, many brands look at making the competitor's POD into a POP for the category and thereby create a leadership position by introducing a new POD.
POP refers to the way in which a company's product offers similarity with its competitors within an industry. POPs can also be understood as the elements that are considered mandatory for a brand to be recognized as a legitimate competitor within a given industry.
An excessive degree of differentiation can cause goods or services to lose their standard within the industry, but having no differentiation does not benefit businesses either. Therefore, in order to avoid excessive differentiation, adopting points of parity is the solution. In terms of offering similarities, businesses should look at the benefits and positive features of competitors' products and draw on them. At the same time, businesses can work on the negative aspects, or further enhance the positive features, of a particular product in order to achieve differentiation. Therefore, finding a balance between points of difference and points of parity is a critical factor for businesses to succeed.
Kevin Keller and Alice Tybout note there are three types of difference: brand performance associations; brand imagery associations; and consumer insight associations. The last only comes into play when the others are at parity. Insight alone is a weak point of difference, easily copied. Putting these together, check their desirability, deliverability and eliminate contradictions.
Traditionally, the people responsible for positioning brands have concentrated on the differences that set each brand apart from the competition. But emphasizing differences isn't enough to sustain a brand against competitors. Managers should also consider the frame of reference within which the brand works and the features the brand shares with other products.
Three questions about a brand can help:
Have we established a frame? A frame of reference signals to consumers the goal they can expect to achieve by using a brand.
Are we leveraging our points of parity? Certain points of parity must be met if consumers are to perceive your product as a legitimate player within its frame of reference.
Are the points of difference compelling? A distinguishing characteristic that consumers find both relevant and believable can become a strong, favourable, unique brand association, capable of distinguishing the brand from others in the same frame of reference.
Assessment
The assessment of consumer desirability criteria for PODs should be against:
Relevance
Distinctiveness
Deliverability
Whilst when assessing the deliverability criteria for PODs look at their:
Feasibility
Communicability
Sustainability
Differentiation types
Product differentiation
In order to achieve product differentiation, a product needs to have unique features and stand out from its competing products, or it must become the only product offering certain features to consumers by entering a new industry. Achieving product differentiation is one way for a business to become the market leader. However, if the product differentiation is too radical, it can lead to acceptance problems on the consumer side, because the product might not meet expected standards, or it could quickly become obsolete. Products can be differentiated in form, features, performance quality, conformance quality, durability, reliability, repairability, style and customization.
Price differentiation
Price differentiation is where a business offers a different price (lower or higher) from the industry's standard or its competitors. Offering a lower price attracts consumers to purchase, as demand is likely to be higher when the price is lower. Offering a higher price can also draw consumers' attention, as consumers wonder at the reason behind it, and higher-priced products tend to be more appealing to upmarket buyers. However, in order to benefit from offering a higher price, the quality of the product has to match the price; otherwise, consumers lose interest because they are not getting what they pay for.
Differentiation focus
The principles of differentiation focus are similar to those of the other differentiation strategies, in that it differentiates some features from competitors. However, differentiation focus targets a particular segment within a market, allowing a business to concentrate on its strengths. The user experience of that particular segment is thus better, as all the marketing and pre-production work for the goods or services is focused on a specific segment.
Case studies
Apple's Mac OS System The Mac OS System is an example of a successful differentiation strategy. Mac OS offers a variety of features and advantages that Windows PCs do not have. For example, MacBooks have faster SSD and flash storage through the use of PCIe connections, as opposed to the majority of PCs, where SATA is used instead. Additionally, the Mac OS System has better security against virus attacks. MacBooks also come with software included, for example iMovie, GarageBand and FaceTime, for a better user experience. Apple is able to stand out from its competitors with its unique features through a product differentiation strategy, as well as being able to benefit from a premium-pricing strategy as its price differentiation.
Snapchat Snapchat, founded in 2011, gained recognition by 2014, when 9% of US smartphone owners used Snapchat. Developers Bobby Murphy and Evan Spiegel took advantage of the growing trend of social media and targeted people aged 13 to 34, as expressing feelings and thoughts through social media is one of the biggest phenomena among teenagers. This was a successful differentiation focus strategy, as Snapchat focused on a particular segment instead of targeting people from every age group. Implementing a differentiation focus strategy allowed Snapchat to concentrate on the features that people aged 13 to 34 prefer, and as a result user experience was enhanced, as the app was tailored for them. Thus, in 2014, Snapchat became one of the top three leading social media apps.
Impact on businesses
Advantages and disadvantages of differentiation
Despite the significance of implementing a differentiation strategy, differentiation itself has advantages and disadvantages. One advantage is that it enhances innovation: new goods and services draw the public's attention and allow businesses to survive in the globally competitive market, as differentiated products stand out from competitors and avoid the threats posed by substitute products. In addition, differentiation allows businesses to benefit from brand loyalty, because differentiated goods and services are likely to offer a better user experience, and hence the chance of having returning customers is significantly increased.
However, differentiation also has disadvantages. One is high cost and time consumption, especially in the introduction stage of the product life cycle, because R&D (research and development) and marketing require a certain amount of money and time to be accomplished properly. Beyond cost and time, another disadvantage is that successful differentiation attracts competitors to replicate it, and once competitors implement a price differentiation strategy, customers are likely to leave for a better offer in terms of price. Therefore, in order to avoid these threats, businesses have to ensure that marketing and branding are well executed so that they can benefit from brand loyalty.
Short- and long-term effects
From a short-term point of view, a differentiation strategy is less likely to favor businesses, as R&D (research and development) and marketing require a certain amount of money and time. It also takes time for differentiated products to gain recognition and be accepted by the public. Arguably, though, differentiated products are able to draw attention and spark people's interest precisely because they are newly introduced.
On the other hand, from a long-term point of view, a differentiation strategy is more likely to favor businesses. Once R&D and marketing have been done, businesses have a clearer picture of what people prefer, and a differentiation focus strategy can also be implemented. Furthermore, differentiated products are likely to be recognized, as people discover and examine them, which means the risk of not gaining recognition is reduced, and businesses are able to benefit from brand loyalty if the user experience is exceptional. On the flip side, it is also conceivable that successful differentiation invites imitation, i.e. competition in a particular industry could become fiercer. Additionally, innovation is required in order to survive in the competitive global market, which means it could be challenging for businesses to ensure their differentiated products remain innovative and contemporary over a long period of time. Thus, even though the upside of a differentiation strategy is likely to outweigh the downside in the long term, businesses also have to ensure its sustainability.
References
Product management
Uniqueness
Community art

Community art, also known as social art, community-engaged art, community-based art, and, rarely, dialogical art, is the practice of art based in—and generated in—a community setting. It is closely related to social practice and social turn. Works in this form can be of any media and are characterized by interaction or dialogue with the community. Professional artists may collaborate with communities which may not normally engage in the arts. The term was defined in the late 1960s as the practice grew in the United States, Canada, the Netherlands, the United Kingdom, Ireland, and Australia. In Scandinavia, the term "community art" more often refers to contemporary art projects.
Community art is a community-oriented, grassroots approach, often useful in economically depressed areas. When local community members come together to express concerns or issues through this artistic practice, professional artists or actors may be involved. This artistic practice can act as a catalyst to trigger events or changes within a community or at a national or international level.
In English-speaking countries, community art is often seen as the work of community arts centers, where visual arts (fine art, video, new media art), music, and theater are common media. Many arts organizations in the United Kingdom do community-based work, which typically involves developing participation by non-professional members of local communities.
Public art
The term "community art" may also apply to public art efforts when, in addition to the collaborative community artistic process, the resulting product is intended as public art and installed in public space. Popular community art approaches to public art can include environmental sustainability themes associated with urban revitalization projects.
Forms of collaborative practices
Models of community-engaged art vary, with three forms of collaborative practice emerging from among the common practices. In the artist-driven model, artists are seen as the catalysts for social change through the social commentary addressed in their works. A muralist whose work elicits and sustains political dialogue would be a practitioner of this model. In the second model, artists engage with community groups to facilitate specialized forms of art creation, often with the goal of presenting the work in a public forum to promote awareness and to further discourse within a larger community. In the process-driven or dialogic model, artists may engage with a group to facilitate an artistic process that addresses particular concerns specific to the group. The use of an artistic process (such as dance or social circus) for problem-solving, therapeutic, group-empowerment or strategic planning purposes may result in artistic works that are not intended for public presentation. In the second and third models, the individuals who collaborate on the artistic creation may not define themselves as artists but are considered practitioners of an art-making process that produces social change.
Due to its roots in social justice and collaborative, community-based nature, art for social change may be considered a form of cultural democracy. Often, the processes (or the works produced by these processes) intend to create or promote spaces for participatory public dialogue.
In Canada, the field of community-engaged arts has recently seen broader use of art for social change practices by non-arts change organizations. The resultant partnerships have enabled these collaborative communities to address systemic issues in health, education, as well as empowerment for indigenous, immigrant, LGBT and youth communities. A similar social innovation trend has appeared where business development associations have engaged with artists/artistic organizations to co-produce cultural festivals or events that address social concerns.
As the field diversifies and practices are adopted by various organizations from multiple disciplines, ethics and safety have become a concern to practitioners. As a result, opportunities for cross-disciplinary training in art for social change practices have grown within the related field of arts education.
Online community art
A community can be seen in many ways, and it can refer to different kinds of groups, including virtual or online communities. Internet art takes many different forms, but often a community is created for a project, or emerges as an effect of an art project. One example of online community art is the so-called image worm, whereby artists on a forum build upon a shared canvas, each smoothly transitioning from the previous artist's piece into their own using image stitching, and the next artist then builds upon that, and so on. Such pieces eventually take on the form of a panorama, stretching on as long as the community decides to continue building upon the piece.
Community theatre
Community theatre includes theatre made by, with, and for a community—it may refer to theatre made almost entirely by a community with no outside help, to a collaboration between community members and professional theatre artists, or to performance made entirely by professionals that is addressed to a particular community. Community theatres range in size from small groups led by single individuals that perform in borrowed spaces to large permanent companies with well-equipped facilities of their own. Many community theatres are successful, non-profit businesses with a large active membership and, often, a full-time professional staff. Community theatre is often devised and may draw on popular theatrical forms, such as carnival, circus, and parades, as well as performance modes from commercial theatre. Community theatre is understood to contribute to the social capital of a community, insofar as it develops the skills, community spirit, and artistic sensibilities of those who participate, whether as producers or audience-members.
Community-engaged dance
Community-engaged dance includes dance made by, with, and for a community. There are several models for creating community-engaged dance, primarily concerned with participatory art practices and cooperative values. Community-engaged dance generally focuses on exploration, creation and relationship building rather than technical skills development. Like community theatre, community-engaged dance is understood to contribute to the social capital of a community, insofar as it develops the skills, community spirit, and artistic sensibilities of those who participate, whether as producers or audience-members.
Benefits
Many communities have some form of art institution that furthers their community by providing access to activities and programs the government or other institutions cannot provide. These community-based art centers or nonprofit organizations are at the forefront of bringing emotional and physical wellness to the communities they reside in. Each art community nonprofit has different programs; these "programs can focus on building community, increasing awareness...developing creativity, or addressing common issues." Many art institutions provide programs and services, such as art classes in painting or drawing, for all ages. It is vital to the continuation of these organizations to keep the love of art alive in younger generations.
Having an art institution or nonprofit can provide an outlet for individuals to create, showcase their artistic talents, and express themselves, and many businesses benefit economically from having nonprofits in their towns. One of the most important aspects of a program offered at an art institution or nonprofit organization is that it provides the participant with a stress-free and fun experience. Art is a tool that helps reduce stress and anxiety and supports movement towards healing. The creative and relaxed environment of these programs may serve as a way for individuals to express themselves without fear of repercussions.
One non-profit organization aimed at helping underprivileged communities and families is "Free Arts for Abused Children" out of Los Angeles. This organization focuses on bringing families together through art, and allowing children and families to express their artistic abilities and feelings in a safe environment.
Notable artists
Jerri Allyn
Judith F. Baca
Joseph Beuys
Cheryl Capezzuti
Helen Crummy
Harrell Fletcher
Robert Hooks
Ruth Howard
Karen Jamieson
künstlerinnenkollektiv marsie (Simone Etter)
JR (artist)
Paul Kuniholm
Suzanne Lacy
Alan Lyddiard
Royston Maldoom
Celeste Miller
Adrian Piper
Mierle Laderman Ukeles
Community arts center
Self Help Graphics & Art
See also
Artivism
Citizen media
Community media
Community radio
Environmental sculpture
Installation art
Not-for-profit arts organization
Participatory art
Public art
Site-specific art
Social center
Street art
References
Further reading
Cleveland, William. Art and Upheaval: Artists on the World's Frontlines. Oakland, CA: New Village Press, 2008.
Elizabeth, Lynne and Suzanne Young. Works of Heart: Building Village Through the Arts. Oakland, CA: New Village Press, 2006.
Fox, John. Eyes on Stalks. London: Methuen, 2002.
Goldbard, Arlene. New Creative Community: The Art of Cultural Development. Oakland, CA: New Village Press, 2006.
Hirschkop, Ken. Mikhail Bakhtin: An Aesthetic for Democracy. New York: Oxford University Press, 1999.
Kester, Grant. Conversation Pieces: Community + Communication in Modern Art. Berkeley: University of California Press, 2004.
Knight, Keith and Mat Schwarzman. Beginner's Guide to Community-Based Arts. Oakland, CA: New Village Press, 2006.
Kwon, Miwon. One Place after Another: Site-Specific Art and Locational Identity. Cambridge, MA: MIT Press, 2004.
Lacy, Suzanne. Mapping the Terrain: New Genre Public Art. Seattle: Bay Press, 1995.
Pete Moser and George McKay, eds. (2005) Community Music: A Handbook. Russell House Publishing.
Helen Crummy (1992) Let The People Sing. Craigmillar Communiversity
"An Outburst of Frankness: Community Arts in Ireland – A Reader" edited by Sandy Fitzgerald. Tasc at New Island, 2004.
Sloman, Annie (2011). Using Participatory Theatre in International Community Development, Community Development Journal.
De Bruyne, Paul and Gielen, Pascal (2011), Community Art. The Politics of Trespassing. Valiz: Amsterdam.
The arts
Visual arts genres
Circumscription (taxonomy)
In biological taxonomy, circumscription is the content of a taxon, that is, the delimitation of which subordinate taxa are parts of that taxon. For example, if we determine that species X, Y, and Z belong in genus A, and species T, U, V, and W belong in genus B, those are our circumscriptions of those two genera. Another systematist might determine that T, U, V, W, X, Y, and Z all belong in genus A. Agreement on circumscriptions is not governed by the Codes of Zoological or Botanical Nomenclature, and must be reached by scientific consensus.
A goal of biological taxonomy is to achieve a stable circumscription for every taxon. This goal conflicts, at times, with the goal of achieving a natural classification that reflects the evolutionary history of divergence of groups of organisms. Balancing these two goals is a work in progress, and the circumscriptions of many taxa that had been regarded as stable for decades are in upheaval in the light of rapid developments in molecular phylogenetics. New evidence may suggest that a traditional circumscription should be revised, particularly if the old circumscription is shown to be paraphyletic (a group containing some but not all of the descendants of the common ancestor).
For example, the family Pongidae contained orangutans (Pongo), chimpanzees (Pan) and gorillas (Gorilla), but not humans (Homo), which are placed in Hominidae. Once molecular phylogenetic data showed that chimpanzees were more closely related to humans than to gorillas or orangutans, it became clear that Pongidae is a paraphyletic group, and the circumscription of Hominidae was changed to include all four extant genera of great apes.
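The great-ape example lends itself to a small worked illustration. The following sketch is not part of the original article (the internal node names are invented for illustration): it treats a circumscription as a set of leaf taxa on a rooted tree and tests it for monophyly, where a group passes only if it contains every leaf descended from its most recent common ancestor.

```python
# Illustrative sketch: a circumscription as a set of leaf taxa on a rooted
# tree, plus a monophyly test. The tree encodes the great-ape relationships
# described in the text: (((Homo, Pan), Gorilla), Pongo).
# Internal node names ("HomoPan", "African", "GreatApes") are made up.

parent = {
    "Homo": "HomoPan", "Pan": "HomoPan",
    "HomoPan": "African", "Gorilla": "African",
    "African": "GreatApes", "Pongo": "GreatApes",
    "GreatApes": None,
}

# Invert the parent links into a children map.
children = {}
for node, par in parent.items():
    if par is not None:
        children.setdefault(par, set()).add(node)

def leaves_under(node):
    """All leaf taxa in the subtree rooted at `node`."""
    if node not in children:          # no children: `node` is a leaf
        return {node}
    out = set()
    for child in children[node]:
        out |= leaves_under(child)
    return out

def mrca(taxa):
    """Most recent common ancestor of a set of leaf taxa."""
    def path_to_root(leaf):
        path = []
        while leaf is not None:
            path.append(leaf)
            leaf = parent[leaf]
        return path
    paths = [path_to_root(t) for t in taxa]
    shared = set(paths[0]).intersection(*map(set, paths[1:]))
    # Walking leaf-to-root, the first shared node is the MRCA.
    return next(node for node in paths[0] if node in shared)

def is_monophyletic(circumscription):
    """True iff the group contains every leaf under its own MRCA."""
    return leaves_under(mrca(circumscription)) == set(circumscription)

# The old Pongidae circumscription excludes Homo and fails the test;
# the recircumscribed Hominidae (all four genera) passes.
print(is_monophyletic({"Pongo", "Pan", "Gorilla"}))          # False
print(is_monophyletic({"Pongo", "Pan", "Gorilla", "Homo"}))  # True
```

On this toy tree the old Pongidae circumscription fails the test because the MRCA of Pongo, Pan, and Gorilla also has Homo among its descendants, mirroring the recircumscription described above.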
Sometimes, systematists propose novel circumscriptions that do not address paraphyly. For example, the broadly circumscribed monophyletic moth superfamily Pyraloidea can be split into two families, Pyralidae and Crambidae, which are reciprocally monophyletic sister taxa.
An example of a botanical group with unstable circumscription is Anacardiaceae, a family of flowering plants. Some experts favor a circumscription in which this family includes the Blepharocaryaceae, Julianaceae, and Podoaceae, which are sometimes considered to be separate families.
See also
Glossary of scientific naming
Circumscriptional name
References
Logic
Taxonomy (biology)
Archaic globalization
Archaic globalization is a phase in the history of globalization, and conventionally refers to globalizing events and developments from the time of the earliest civilizations until roughly 1600 (the following period is known as early modern globalization). Archaic globalization describes the relationships between communities and states and how they were created by the geographical spread of ideas and social norms at both local and regional levels.
States began to interact and trade with others within close proximity as a way to acquire coveted goods that were considered a luxury. This trade led to the spread of ideas such as religion, economic structures and political ideals. Merchants became connected and aware of others in ways that had not been apparent before. Archaic globalization is comparable to present-day globalization on a much smaller scale. It not only allowed the spread of goods and commodities to other regions, but it also allowed people to experience other cultures. Cities that partook in trading were bound together by sea lanes, rivers, and great overland trade routes, some of which had been in use since antiquity. Trading was broken up according to geographic location, with centers between flanking places serving as "break-in-bulk" and exchange points for goods destined for more distant markets. During this time period the subsystems were more self-sufficient than they are today and therefore less vitally dependent upon one another for everyday survival. Even so, a great deal of long-distance trading went on despite its many trials and tribulations. Linking the trade together involved eight interlinked subsystems that were grouped into three large circuits, which encompassed the western European, the Middle Eastern, and the Far Eastern circuits. This interaction during trading was early civilization's way to communicate and spread the many ideas that caused modern globalization to emerge, adding a new aspect to present-day society.
Defining globalization
Globalization is the process of increasing interconnectedness between regions and individuals. Steps toward globalization include economic, political, technological, social, and cultural connections around the world. The term "archaic" describes early ideals and functions that were once historically apparent in society but may have disintegrated over time.
There are three main prerequisites for globalization to occur. The first is the idea of Eastern Origins, which shows how Western states have adapted and implemented learned principles from the East. Without the traditional ideas from the East, Western globalization would not have emerged the way it did. The second is distance. The interactions amongst states were not on a global scale and most often were confined to Asia, North Africa, the Middle East and certain parts of Europe. With early globalization it was difficult for states to interact with others that were not within close proximity. Eventually, technological advances allowed states to learn of others' existence, and another phase of globalization was able to occur. The third has to do with interdependency, stability and regularity. If a state is not dependent on another, then there is no way for the two to be mutually affected by one another. This is one of the driving forces behind global connections and trade; without either, globalization would not have emerged the way it did and states would still be dependent on their own production and resources to function. This is one of the arguments surrounding the idea of early globalization. It is argued that archaic globalization did not function in a similar manner to modern globalization because states were not as interdependent on others as they are today.
Emergence of a world system
Historians argue that a world system was in place before the rise of capitalism between the sixteenth and nineteenth centuries. This is referred to as the early age of capitalism, when long-distance trade, market exchange and capital accumulation existed amongst states. In 800 AD Greek, Roman and Muslim empires emerged covering areas known today as China and the Middle East. Major religions such as Christianity, Islam and Buddhism spread to distant lands where many are still intact today. One of the most popular examples of distant trade routes can be seen with the silk route between China and the Mediterranean, and the movement and trade of art and luxury goods between Arab regions, South Asia and Africa. These relationships through trade mainly formed in the east and eventually led to the development of capitalism. It was at this time that power and land shifted from the nobility and church to the bourgeoisie, and division of labor in production emerged. During the later part of the twelfth century and the beginning of the thirteenth century an international trade system was developed between states ranging from northwestern Europe to China.
During the 1500s other Asian empires emerged, which included trading over longer distances than before. During the early exchanges between states, Europe had little to offer with the exception of slaves, metals, wood and furs. The push for selling of items in the east drove European production and helped integrate them into the exchange. The European expansion and growth of opportunities for trade made possible by the Crusades increased the renaissance of agriculture, mining, and manufacturing. Rapid urbanization throughout Europe allowed a connection from the North Sea to Venice. Advances in industrialization, coupled with the rise in population growth and the growing demands of the eastern trade, led to the growth of true trading emporia with outlets to the sea.
There is a 'multi-polar' nature to archaic globalization, which involved the active participation of non-Europeans. Because it predated the Great Divergence of the nineteenth century, in which Western Europe pulled ahead of the rest of the world in terms of industrial production and economic output, archaic globalization was a phenomenon that was driven not only by Europe but also by other economically developed Old World centers such as Gujarat, Bengal, coastal China and Japan.
These pre-capitalist movements were regional rather than global and for the most part temporary. This idea of early globalization was proposed by the historian A.G. Hopkins in 2001. Hopkins's main points on archaic globalization can be seen in trade, and in the diasporas that developed from it, as well as in the religious ideas and empires that spread throughout the region. This new interaction amongst states led to interconnections between parts of the world, which led to the eventual interdependency amongst these state actors. The main actors that partook in the spreading of goods and ideas were kings, warriors, priests and traders. Hopkins also notes that during this time period mini-globalizations were prominent, and that some collapsed or became more insular. These mini-globalizations are referred to as episodic and ruptured, with empires sometimes overreaching and having to retract. The mini-globalizations left remnants that allowed the West to adopt these new ideals, leading to the idea of Western capitalism. The adopted ideals can be seen in the Western monetary system and are central to systems like capitalism that define modernity and modern globalization.
The three principles of archaic globalization
Archaic globalization consists of three principles: universalizing kingship, expansion of religious movements, and medicinal understanding.
The universalizing of kingship led soldiers and monarchs far distances to find honor and prestige. However, the crossing over foreign lands also gave the traveling men opportunity to exchange prized goods. This expanded trade between distant lands, which consequently increased the amount of social and economic relations.
Despite the vast distances covered by monarchs and their companies, pilgrimages remain one of the greatest global movements of people.
Finally, the desire for better health was the remaining push behind archaic globalization. While the trading of spices, precious stones, animals, and weapons remained of major importance, people began to seek medicine from faraway lands. This opened more trade routes, especially to China for its tea.
Economic exchange
With the increase in trade and state linkage, economic exchange extended throughout the region and caused actors to form new relationships. This early economic development can be seen in the Champagne fairs, which were outdoor markets where traveling merchants came to sell their products and make purchases. Traditionally, market fairs used barter as opposed to money; once larger itinerant merchants began to frequent them, the need for currency became greater and a money changer needed to be established. Some historical scholars argue that this was the beginning of the role of banker and the institution of credit. An example can be seen with one individual in need of an item the urban merchant does not ordinarily stock. The product seeker orders the item, which the merchant promises to bring him next time. The product seeker either gives credit to the merchant by paying them in advance, gets credit from the merchant by promising to pay them once the item is in stock, or some type of concession is made through a down payment. If the product seeker does not have the amount required by the merchant, he may borrow from the capital stored by the money changer or he may mortgage part of his expected harvest, either to the money changer or to the merchant he is seeking goods from. This lengthy transaction eventually resulted in a complex economic system once the weekly market began to expand from barter to the monetized system required by long-distance trading.
A higher circuit of trade developed once urban traders from outside city limits travelled from distant directions to the market center in the quest to buy or sell goods. Merchants would then begin to meet at the same spot on a weekly basis, allowing them to arrange with other merchants to bring special items for exchange that were not demanded by the local agriculturalists but were destined for markets in their home towns. Once the local individuals placed advance orders, customers from the towns of different traders might begin to place orders for items in a distant town that their trader could order from their counterpart. This central meeting point became the focus of long-distance trade and helps explain how it began to increase.
Expansion of long distance trade
In order for trade to be able to expand during this early time period, it required some basic functions of the market as well as the merchants. The first was security. Goods that were being transported began to have more value, and the merchants needed to protect their coveted goods, especially since they were often traveling through poor areas where the risk of theft was high. To overcome this problem merchants began to travel in caravans as a way to ensure their personal safety as well as the safety of their goods. The second prerequisite to early long-distance trade was an agreement on a rate of exchange. Since many of the merchants came from distant lands with different monetary systems, a system had to be put into place as a way to enforce repayment of previous goods, repay previous debt and to ensure contracts were upheld. Expansion was also able to thrive so long as there was a motive for exchange as a way to promote trade amongst foreign lands. Also, outside merchants' access to trading sites was a critical factor in trade route growth.
The spread of goods and ideas
The most popular goods produced were spices, which were traded over short distances, while manufactured goods were central to the system, which could not have been sustained without them. The invention of money in the form of gold coins in Europe and the Middle East and paper money in China around the thirteenth century allowed trade to move more easily between the different actors. The main actors involved in this system viewed gold, silver, and copper as valuable on different levels. Nevertheless, goods were transferred, prices set, exchange rates agreed upon, contracts entered into, credit extended, partnerships formed, and agreements that were made were kept on record and honored. During this time of globalization, credit was also used as a means for trading. The use of credit began in the form of blood ties but later led to the emergence of the "banker" as a profession.
During this period the Republic of Genoa and the Republic of Venice emerged as prominent commercial and maritime powers in Europe and in the Mediterranean area as a whole. Genoa, strategically located in the Mediterranean, controlled crucial trade routes connecting Western Europe with the Middle East, North Africa and the Black Sea. This positioning solidified its role as a vital commercial hub, facilitating the exchange of goods and ideas across continents. Meanwhile, Venice dominated trade in the Adriatic, establishing and maintaining an extensive network of routes known as the Venetian maritime empire. These routes reached into the Byzantine and Ottoman Empires, allowing Venice to wield significant economic and political influence in the region. Both republics fiercely competed for control of territories and trade routes, shaping the economic landscape and cultural exchange of their time.
With the spread of people came new ideas, religion and goods throughout the land, which had never been apparent in most societies before the movement. Also, this globalization lessened the degree of feudal life by transitioning from self-sufficient society to a money economy. Most of the trade connecting North Africa and Europe was controlled by the Middle East, China and India around 1400. Because of the danger and great cost of long-distance travel in the pre-modern period, archaic globalization grew out of the trade in high-value commodities which took up a small amount of space. Most of the goods that were produced and traded were considered a luxury and many considered those with these coveted items to have a higher place on the societal scale.
Examples of such luxury goods would include Chinese silks, exotic herbs, coffee, cotton, iron, Indian calicoes, Arabian horses, gems and spices or drugs such as nutmeg, cloves, pepper, ambergris and opium. The thirteenth century as well as present day favor luxury items due to the fact that small high-value goods can have high transport costs but still have a high value attached to them, whereas low-value heavy goods are not worth carrying very far. Purchases of luxury items such as these are described as archaic consumption since trade was largely popular for these items as opposed to everyday needs. The distinction between food, drugs and materia medica is often quite blurred in regards to these substances, which were valued not only for their rarity but because they appealed to humoral theories of health and the body that were prevalent throughout premodern Eurasia.
Major trade routes
During the time of archaic globalization there were three major trade routes which connected Europe, China and the Middle East. The northernmost route went mostly through the Mongol Empire and was nearly 5000 miles long. Even though the route consisted of mostly vast stretches of desert with little to no resources, merchants still traveled it. The route remained in use because during the 13th century Kubilai Khan united the Mongol Empire and charged only a small protective rent to travelers. Before the unification, merchants from the Middle East used the path but were stopped and taxed at nearly every village. The middle route went from the coast of Syria to Baghdad; from there the traveler could follow the land route through Persia to India, or sail to India via the Persian Gulf. Between the 8th and 10th centuries, Baghdad was a world city, but in the 11th century it began to decline due to natural disasters including floods, earthquakes, and fires. In 1258, Baghdad was taken over by the Mongols. The Mongols forced high taxes on the citizens of Baghdad, which led to a decrease in production, causing merchants to bypass the city. The third, southernmost route went through Mamluk-controlled Egypt. After the fall of Baghdad, Cairo became the Islamic capital.
Some major cities along these trading routes were wealthy and provided services for merchants and the international markets. Palmyra and Petra, which are located on the fringes of the Syrian Desert, flourished mainly as power centers of trading. They would police the trade routes and be the source of supplies for the merchants' caravans. They also became places where people of different ethnic and cultural backgrounds could meet and interact. These trading routes were the communication highways for the ancient civilizations and their societies. New inventions, religious beliefs, artistic styles, languages, and social customs, as well as goods and raw materials, were transmitted by people moving from one place to another to conduct business.
Proto-globalization
Proto-globalization is the period following archaic globalization which occurred from the 17th through the 19th centuries. The global routes established within the period of archaic globalization gave way to more distinguished expanding routes and more complex systems of trade within the period of proto-globalization. Familiar trading arrangements such as the East India Company appeared within this period, making larger-scale exchanges possible. Slave trading was especially extensive and the associated mass-production of commodities on plantations is characteristic of this time.
As these higher-frequency trade routes produced a measurable number of polyethnic regions, war became prominent. Such wars include the French and Indian War, the American Revolutionary War, and the Anglo-Dutch War between England and the Dutch Republic.
Modern globalization
The modern form of globalization began to take form during the 19th century. The evolving beginnings of this period were largely responsible for the expansion of the West, capitalism and imperialism backed up by the nation-state and industrial technology. This began to emerge during the 1500s, continuing to expand exponentially over time as industrialization developed in the 18th century. The conquests of the British Empire and the Opium Wars added to the industrialization and formation of the growing global society because they created vast consumer regions.
World War I is when the first phase of modern globalization began to take force. It is said by VM Yeates that the economic forces of globalization were part of the cause of the war. Since World War I, globalization has expanded greatly. The evolving improvements of multinational corporations, technology, science, and mass media have all been results of extensive worldwide exchanges. In addition, institutions such as the World Bank, the World Trade Organization and many international telecommunication companies have also shaped modern globalization. The World Wide Web has also played a large role in modern globalization. The Internet provides connectivity across national and international borders, aiding in the enlargement of a global network.
See also
History of globalization
Military globalization
References
History of globalization
Yutori education
Yutori education is a Japanese education policy which reduces the hours and the content of the curriculum in primary education. In 2016, the mass media in Japan used this phrase to criticize drops in scholastic ability.
Background
In education in Japan, primary education is prescribed by the Japanese curriculum guidelines (学習指導要領 gakushū shidō yōryō). Since the 1970s, the Japanese government has gradually reduced the amount of class time and the content specified in the guidelines, and this tendency is called yutori education. However, in recent years, notably after the 2011 earthquake, this has been a controversial issue.
Yutori education may be translated as "relaxed education" or "education free from pressure", stemming from the Japanese word yutori, meaning "room" or "latitude".
History
In the 1970s, school violence and the collapse of classroom discipline became a major problem in junior high schools. In response, the government revised the teaching guidelines in 1977. The main purpose was to reduce educational stress and to introduce more relaxed classes.
In 1984, during the time of Prime Minister Yasuhiro Nakasone, a consultative council on education reform was established. The council recommended that education regard the individual personalities of each student as paramount. Two major revisions of the teaching guidelines, in 1989 and 1998, were implemented following this announcement.
In 1987, four basic core principles were declared to improve education in kindergartens, elementary schools, and junior and senior high schools.
To form people with strength, confidence, and open minds.
To create self-motivated students able to deal with changes in society.
To teach the fundamental knowledge needed by Japanese people and to enrich education to ensure it considers individuality as very important.
To form people who fully understand international society while still respecting Japanese culture and traditions.
Under these principles, the teaching guidelines were revised in 1989. In the lower grades of elementary schools, science and social studies classes were abolished and "environmental studies" was introduced. In junior high school, the number of elective classes was increased to further motivate students.
From 1992, schools closed on the second Saturday of every month to increase student spare time in accordance with the teaching guidelines. From 1995, schools closed on the fourth Saturday also.
In 1996, when the council was asked what Japanese education of the 21st century should be like, it submitted a report suggesting "the ability to survive" should be the basic principle of education. "The ability to survive" is defined as a principle that tries to keep the balance of intellectual, moral, and physical education.
In 1998, the teaching guidelines were revised to reflect the council's report. 30% of the curriculum was cut and "time for integrated study" in elementary and junior high school was established. It was a drastic change.
The School Curriculum Council stated its goals in a report.
To enrich humanity, sociability, and the awareness of living as a Japanese within international society.
To develop the ability to think and learn independently.
To inculcate fundamental concepts in children at an appropriate pace while developing their individuality.
To let every school form its own ethos.
Around 1999, a decline in the academic abilities of university students became a serious concern. Elementary and secondary education started to be reconsidered. This trend focused criticism on the new teaching guidelines and aroused controversy.
In 2002, school attendance on Saturdays was no longer compulsory.
See also
Education in Japan
History of education in Japan
Juku
Ministry of Education, Culture, Sports, Science and Technology (Japan)
Pi is 3
References
Further reading
Yamanouchi, Kenshi (山内乾史) Hara, Kiyoharu (原清治). 2006. Gakuryoku mondai・Yutori kyouiku (学力問題・ゆとり教育). Nihontoshosentā.
Ken Terawaki (寺脇研). 2007. Soredemo, Yutori kyouiku wa machigatteinai(それでも、ゆとり教育は間違っていない) Fusousha.
Hideo Iwaki (岩木秀夫). 2004. Yutori kyouiku kara Kosei rouhi shakai e (ゆとり教育から個性浪費社会へ), Chikumashinsho.
Asahi Shimbun kyouiku shuzaihan(朝日新聞教育取材班). 2003. Tenki no Kyouiku (転機の教育, Education at a turning point) Asahibunko.
Academic pressure in East Asian culture
Education laws and guidelines in Japan
Heritage studies
Heritage studies looks at the relationship between people and tangible and intangible heritage through the use of social science research methods. The publication of the book by David Lowenthal, The Past is a Foreign Country, in 1985 is credited with creating the field (Carman & Sørensen 2009). While related to the disciplines of history, heritage conservation, and historic preservation, heritage studies is not necessarily concerned with the objective representation of the past. History is "the raw facts of the past" (Aitchison, MacLeod, & Shaw 2000, p. 96) while heritage is "history processed through mythology, ideology, nationalism, local pride, romantic ideas or just plain marketing" (Schouten 1995, p. 21). The meanings of heritage are therefore subjective and rooted in the present; these meanings are defined by social, cultural, and individual processes. In other words, the meanings of heritage can be understood through contemporary sociocultural and experiential values. Lowenthal (1985, p. 410) argues that in the realm of human experience, we create heritage; to most people, heritage is therefore more important than history and is a product of human invention and creativity.
See also
Heritage science
Bibliography
Aitchison, C., MacLeod, N.E., Shaw, S.J. (2000). Leisure and tourism landscapes: Social and cultural geographies. London: Routledge.
Carman, J., & Sørensen, M. L. S. (2009). Heritage studies: An outline. In M. L. S. Sørensen & J. Carman (Eds.), Heritage studies: Methods and approaches (pp. 11–28). Routledge.
Lowenthal, D. (1985). The past is a foreign country. Cambridge, UK: Cambridge University Press.
Schouten, F.F.J. (1995). Heritage as historical reality. In D. T. Herbert (ed.), Heritage, tourism and society. London: Mansell.
External links
Conserving the Human Environment
Association of Critical Heritage Studies
International Journal of Heritage Studies
Cultural heritage
Cultural studies
Phenomenon-based learning
Phenomenon-based learning is a constructivist form of learning or pedagogy, where students study a topic or concept in a holistic approach instead of in a subject-based approach. Phenomenon-based learning includes both topical learning (also known as topic-based learning or instruction), where the phenomenon studied is a specific topic, event, or fact, and thematic learning (also known as theme-based learning or instruction), where the phenomenon studied is a concept or idea. Phenomenon-based learning emerged as a response to the idea that traditional, subject-based learning is outdated, removed from the real world, and does not offer the optimum approach to developing 21st-century skills. It has been used in a wide variety of higher educational institutions and more recently in grade schools.
Features
PhBL forges connections across content and subject areas within the limits of the particular focus. It can be used as part of teacher-centered passive learning, although in practice it is used more in student-centered active learning environments, including inquiry-based learning, problem-based learning, or project-based learning. An example of topical learning might be studying a phenomenon or topic (such as a geographical feature, historical event, or notable person) instead of isolated subjects (such as geography, history, or literature). In the traditional subject-based approach of most Western learning environments, the learner would spend a set amount of time studying each subject; with topical learning, the trend is to spend a greater amount of time focused on the broader topic. During this topical study, specific knowledge or information from the individual subjects would normally be introduced in a relevant context instead of in isolation or the abstract.
Topical learning is most frequently applied as a learner-centered approach, where the student, not the teacher, selects the topic or phenomenon to be studied. This is thought to be more successful at engaging students and providing deeper learning, as it is more likely to align with their own interests and goals. This aspect has also been recognized as facilitating the integration of education and as a method of enabling students to obtain core knowledge and skills across a range of subjects; it has been considered effective in promoting enthusiasm and greater organization, communication, and evaluation.
Similar to project-based learning, it also provides opportunities to explore a topic or concept in detail. With deeper knowledge students develop their own ideas, awareness, and emotions about the topic.
While not absolute, PhBL has several main features:
Inquiry-based
The PhBL approach supports learning in accordance with inquiry learning, problem-based learning, and project and portfolio learning in formal educational as well as in the workplace. It begins with studying and developing an understanding of the phenomenon through inquiry. A problem-based learning approach can then be used to discover answers and develop conclusions about the topic.
Anchored in the real world
The phenomenon-based approach is a form of anchored learning, although it is not necessarily linked to technology. The questions asked and items studied are anchored in real-world phenomena, and the skills that are developed and information learned can be applied across disciplines and beyond the learning environments in real-world situations. Real-world phenomena can also be based on fictional narratives, for example a story, book or fictional character, but these are elements drawn from the real world.
Contextual
PhBL provides a process where new information is applied to the phenomenon or problem. This context demonstrates to the learner the immediate utility value of the concepts and information being studied. Application and use of this information during the learning situation is very important for retention. Information that is absorbed only through listening or reading, or in the abstract (such as formulas and theories) without clear and obvious application to the learning at hand or to real-world situations, often remains in short-term memory and is not internalized.
Authenticity
PhBL can demonstrate the authenticity of learning, a key requirement for deeper learning. In a PhBL environment, cognitive processes correspond to those in the actual/real-world situations where the learned subject matter or skills are used. Manowaluilou et al. (2022) found that Project-Based Learning (PBL) can improve children's learning outcomes when authentic, real-world case-based phenomena are employed. This method promotes greater engagement and a deeper understanding of concepts among students. The intent is to bring genuine practices and processes into learning situations to allow participation in the "expert culture" of the area and practices being studied.
Constructivism
PhBL is a constructivist form of learning, in which learners are seen as active knowledge builders and information is seen as being constructed as a result of problem-solving. Information and skills are constructed out of ‘little pieces’ into a whole relevant to the situation at the time. When phenomenon based learning occurs in a collaborative setting (the learners work in teams, for example), it supports the socio-constructivist and sociocultural learning theories, in which information is not seen only as an internal element of an individual; instead, information is seen as being formed in a social context. Central issues in the sociocultural learning theories include cultural artifacts (e.g. systems of symbols such as language, mathematical calculation rules and different kinds of thinking tools) – not every learner needs to reinvent the wheel, they can use the information and tools transmitted by cultures.
Topical learning
Topical learning (TL) has been used for decades to study a specific topic such as a geographical feature, historical event, legal case, medical condition, or notable person, each of which may cover more than one academic subject such as geography, history, law, or medicine. TL forges connections across content areas within the limits of the particular topic. As a cross-disciplinary application, it has been used as a means of assisting foreign language learners to use the topic as a means to learn the foreign language. There are several benefits of topic-based learning. When students focus on learning a topic, the specific subject, such as a foreign language, becomes an important tool or medium to understand the topic, thus providing a meaningful way for learners to use and learn the subject (or language).
Thematic learning
Thematic learning is used to study a macro theme, such as a broad concept or large and integrated system (political system, ecosystem, growth, etc.). In the United States, it is used to study concepts identified in the Core Curriculum Content Standards. As with topical learning, it forges connections across content areas within the limits of the particular topic. Proponents state that by studying the broad concepts that connect what would otherwise be isolated subject areas, learners can develop skills and insights for future learning and the workplace.
Finland
Commencing in the 2016–2017 academic year, Finland implemented an educational reform mandating that topical learning (phenomenon-based learning) be introduced alongside traditional subject-based instruction. As part of a new National Curriculum Framework, it applies to all basic schools for students aged 7–16 years old. Finnish schools have used PhBL for several decades, but it was not previously mandatory. It is anticipated that educators around the world will study this development, as Finland's educational system is considered a model of success by many. This shift coincides with other changes encouraging the development of 21st-century skills such as collaboration, communication, creativity, and critical thinking.
References
Further reading
External links
Phenomenal Education
How is Finland building schools of the future?, Enterprise Innovation
Next Generation Science Standards – Using Phenomena in NGSS-Designed Lessons and Units
FAO – Agroecology Knowledge Hub
Pedagogy
Learning methods
Learning programs
Learning programs in Europe
Dialogic learning
Dialogic learning is learning that takes place through dialogue. It is typically the result of egalitarian dialogue; in other words, the consequence of a dialogue in which different people provide arguments based on validity claims and not on power claims.
The concept of dialogic learning is not a new one. Within the Western tradition, it is frequently linked to the Socratic dialogues. It is also found in many other traditions; for example, the book The Argumentative Indian, written by economics Nobel laureate Amartya Sen, situates dialogic learning within the Indian tradition and observes that an emphasis on discussion and dialogue spread across Asia with the rise of Buddhism.
In recent times, the concept of dialogic learning has been linked to contributions from various perspectives and disciplines, such as the theory of dialogic action, the dialogic inquiry approach, the theory of communicative action, the notion of dialogic imagination and the dialogical self. In addition, the work of an important range of contemporary authors is based on dialogic conceptions. Among those, it is worth mentioning transformative learning theory; Michael Fielding, who sees students as radical agents of change; Timothy Koschmann, who highlights the potential advantages of adopting dialogicality as the basis of education; and Anne Hargrave, who demonstrates that children in dialogic-learning conditions make significantly larger gains in vocabulary, than do children in a less dialogic reading environment.
Specifically, the concept of dialogic learning (Flecha) evolved from the investigation and observation of how people learn both outside and inside of schools, when acting and learning freely is allowed. At this point, it is important to mention the "Learning Communities", an educational project which seeks the social and cultural transformation of educational centers and their surroundings through dialogic learning, emphasizing egalitarian dialogue among all community members, including teaching staff, students, families, entities, and volunteers. In the learning communities, the involvement of all members of the community is fundamental because, as research shows, learning processes, regardless of the learners' ages and including the teaching staff, depend more on the coordination among all the interactions and activities that take place in the different spaces of the learners' lives, like school, home, and workplace, than only on interactions and activities developed in spaces of formal learning, such as classrooms. Along these lines, the "Learning Communities" project aims at multiplying learning contexts and interactions with the objective of all students reaching higher levels of development.
Classroom education
Dialogic education is an educational philosophy and pedagogical approach that draws on many authors and traditions and applies dialogic learning. In effect, dialogic education takes place through dialogue by opening up dialogic spaces for the co-construction of new meaning to take place within a gap of differing perspectives. In a dialogic classroom, students are encouraged to build on their own and others' ideas, resulting not only in education through dialogue but also in education for dialogue. Teachers and students are in an equitable relationship and listen to multiple points of view. The pedagogy aims at the goal of students knowing for and through themselves, thereby "casting the teacher as a guide rather than a director".
Dialogic approaches to education typically involve dialogue in the form of face-to-face talk including questioning and exploring ideas within a ‘dialogic space’ but can also encompass other instances where 'signs' are exchanged between people, for instance via computer-mediated communication. In this way, dialogic approaches need not be limited only to classroom-based talk or "external talk".
In teaching through the opening of a shared dialogic space, dialogic education draws students into the co-construction of shared knowledge by questioning and building on dialogue rather than simply learning a set of facts. As argued by Mikhail Bakhtin, children learn through persuasive dialogue rather than an authoritative transmission of facts, which enables them to understand by seeing from different points of view. Merleau-Ponty writes that when dialogue works it should no longer be possible to determine who is thinking because learners will find themselves thinking together. It has been suggested by Robin Alexander that in dialogic education, teachers should frame questions carefully in order to encourage reflection and take different students' contributions and present them as a whole. In addition, answers should be considered as leading to further questions in dialogue rather than an end goal.
Definitions of dialogic
There is a lack of clarity around what is meant by the term ‘dialogic’ when used to refer to educational approaches. The term ‘dialogue’ itself is derived from two words in classical Greek, ‘dia’ meaning ‘through’ and ‘logos’ meaning ‘word’ or 'discourse'. Dialogic is defined by the Oxford English Dictionary as an adjective applied to describe anything ‘relating to or in the form of dialogue’. Dialogic can also be used in contrast to ‘monologic’, which is the idea that there is only one true perspective and so that everything has one final correct meaning or truth. Dialogic, however, contends that there is always more than one voice in play behind any kind of explicit claim to knowledge. If knowledge is a product of dialogue it follows that knowledge is never final since the questions we ask and so the answers that we receive, will continue to change.
Dialogic education has been defined as engaging students in an ongoing process of shared inquiry taking the form of a dialogue and as Robin Alexander outlines in his work on dialogic teaching, it involves drawing students into a process of co-constructing knowledge. Rupert Wegerif sums this up by claiming that 'Dialogic Education is education for dialogue as well as education through dialogue'.
Formats
There are a number of formats of instruction, that have been recognized as "dialogic" (as opposed to "monologic").
Interactional: Dialogue involves a high student-teacher talk ratio, short utterances/turns, and interactive exchanges.
Question-answer: Dialogue involves either a teacher asking students questions and eliciting answers from the students or students asking questions and eliciting answers from the teacher and/or one another.
Conversational: Instructional dialogue is modeled after natural mundane everyday conversations.
Without authority: Dialogic guidance occurs among equal peers as authority distorts dialogic processes. Jean Piaget was the first scholar who articulated this position.
Types
There are a number of types of dialogic pedagogy, that is, where the form and the content is recognized as "dialogic".
Paideia: Learning through asking thought-provoking questions, challenging assumptions, beliefs, and ideas, that involve argumentation and disagreements. This notion comes from Socratic dialogues described and developed by Plato.
Exploratory talk for learning: Collective mindstorming and probing ideas, enabling "the speaker to try out ideas, to hear how they sound, to see what others make of them, to arrange information and ideas into different patterns" (p. 4).
Internally persuasive discourse: Bakhtin's notion of "internally persuasive discourse" (IPD) has become influential in helping conceptualize learning. There are at least three approaches to how this notion is currently used in the literature on education:
IPD is understood as appropriation when somebody else's words, ideas, approaches, knowledge, feelings, become one's own. In this approach, "internal" in IPD is understood as an individual's psychological and personal deep conviction.
IPD understood as a student's authorship recognized and accepted by a community of practice, in which the student generates self-assignments and long-term projects within the practice.
IPD is understood as a dialogic regime of the participants' testing ideas and searching for the boundaries of personally-vested truths. In this approach, "internal" is interpreted as internal to the dialogue itself in which everything is "dialogically tested and forever testable" (p. 319).
Instrumental
Instrumental dialogic pedagogy uses dialogue for achieving non-dialogic purposes, usually making students arrive at certain preset learning outcomes. For example, Nicholas Burbules defines dialogue in teaching instrumentally as facilitating new understanding: "Dialogue is an activity directed toward discovery and new understanding, which stands to improve the knowledge, insight, or sensitivity of its participants".
The teacher presents the endpoint of the lesson, for example, "At the end of the lesson, the students will be able to understand/master the following knowledge and skills." However, the teacher's method of leading students to the endpoint can be individualized both in instruction techniques and in time taken. Different students are "closer" or "further" from the endpoint and require different strategies to get them there. Thus, for Socrates to manipulate Meno to the preset endpoint (that what virtue is remains unknown and problematic) is not the same as manipulating Anytus to the same endpoint. It takes different and individualized instructional strategies.
Socrates, Paulo Freire, and Vivian Paley all strongly critique the idea of preset endpoints; in practice, however, they often set endpoints themselves.
Instrumental dialogic pedagogy remains influential and important for scholars and practitioners of dialogic pedagogy field. Some appreciate its focus on asking good questions, attendance to subjectivity, use of provocations and contradictions, and the way it disrupts familiar and unreflected relations. However, others are concerned about the teacher's manipulation of the student's consciousness and its intellectualism.
Non-instrumental
In contrast to instrumental approaches to dialogic pedagogy, non-instrumental approaches to dialogic pedagogy view dialogue not as a pathway or strategy for achieving meaning or knowledge but as the medium in which they live. Following Bakhtin, meaning is understood as living in the relationship between a genuine question seeking for information and a sincere answer aiming at addressing this question. Non-instrumental dialogic pedagogy focuses on "eternal damn final questions". It is interested in the mundane only because it can give it the material and opportunity to move to the sublime. This is seen, for example, in the work of Christopher Phillips.
The non-instrumental "epistemological dialogue", a term introduced by Alexander Sidorkin, is a dialogue purified so as to abstract a single main theme, develop a main concept, and unfold its logic. According to Sidorkin, ontological dialogic pedagogy prioritizes human ontology in pedagogical dialogue.
Sociolinguist Per Linell and educational philosopher Alexander Sidorkin exemplify a non-instrumental, ecological approach to dialogic pedagogy that focuses on the dialogicity of mundane everyday social interaction and its non-constrained nature, in which participants have the freedom to move in and out of the interaction, and in which pedagogical violence is absent or minimal. Using the metaphor of "free-range kids", Lenore Skenazy defines the participants in this ecological dialogue as free-range dialogic participants.
Theories
Wells: dialogic inquiry
Gordon Wells (1999) defines "inquiry" not as a method but as a predisposition for questioning, trying to understand situations collaborating with others with the objective of finding answers. "Dialogic inquiry" is an educational approach that acknowledges the dialectic relationship between the individual and the society, and an attitude for acquiring knowledge through communicative interactions. Wells points out that the predisposition for dialogic inquiry depends on the characteristics of the learning environments, and that is why it is important to reorganize them into contexts for collaborative action and interaction. According to Wells, dialogic inquiry not only enriches individuals' knowledge but also transforms it, ensuring the survival of different cultures and their capacity to transform themselves according to the requirements of every social moment.
Freire: the theory of dialogic action
Paulo Freire (1970) states that human nature is dialogic, and believes that communication has a leading role in our life. We are continuously in dialogue with others, and it is in that process that we create and recreate ourselves. According to Freire, dialogue is a claim in favor of the democratic choice of educators. Educators, in order to promote free and critical learning should create the conditions for dialogue that encourages the epistemological curiosity of the learner. The goal of the dialogic action is always to reveal the truth by interacting with others and the world. In his dialogic action theory, Freire distinguishes between dialogical actions, the ones that promote understanding, cultural creation, and liberation; and non-dialogic actions, which deny dialogue, distort communication, and reproduce power.
Habermas: the theory of communicative action
Rationality, for Jürgen Habermas (1984), has less to do with knowledge and its acquisition than with the use of knowledge that individuals who are capable of speech and action make. In instrumental rationality, social agents make an instrumental use of knowledge: they propose certain goals and aim to achieve them in an objective world. On the contrary, in communicative rationality, knowledge is the understanding provided by the objective world as well as by the intersubjectivity of the context where action develops. If communicative rationality means understanding, then the conditions that make reaching consensus possible have to be studied. This need brings us to the concepts of arguments and argumentation. While arguments are conclusions that consist of validity claims as well as the reasons by which they can be questioned, argumentation is the kind of speech in which participants give arguments to develop or turn down the validity claims that have become questionable. At this point, Habermas' differentiation between validity claims and power claims is important. We may attempt to have something we say to be considered good or valid by imposing it by means of force, or by being ready to enter a dialogue in which other people's arguments may lead us to rectify our initial stances. In the first case, the interactant holds power claims, while in the second case, validity claims are held. While in power claims, the argument of force is applied; in validity claims, the force of an argument prevails. Validity claims are the basis of dialogic learning.
Bakhtin: dialogic imagination
Mikhail Bakhtin established (1981) that there is a need of creating meanings in a dialogic way with other people. His concept of dialogism states a relation among language, interaction, and social transformation. Bakhtin believes that the individual does not exist outside of dialogue. The concept of dialogue, itself, establishes the existence of the "other" person. In fact, it is through dialogue that the "other" cannot be silenced or excluded. Bakhtin states that meanings are created in processes of reflection between people. And these are the same meanings that we use in later conversations with others, where those meanings get amplified and even change as we acquire new meanings. In this sense, Bakhtin states that every time that we talk about something that we have read about, seen, or felt; we are actually reflecting the dialogues we have had with others, showing the meanings that we have created in previous dialogues. This is, what is said cannot be separated from the perspectives of others: the individual speech and the collective one are deeply related. It is in this sense that Bakhtin talks about a chain of dialogues, to point out that every dialogue results from a previous one and, at the same time, every new dialogue is going to be present in future ones.
CREA: dialogic interactions and interactions of power
In their debate with John Searle (Searle & Soler 2004), the Centre of Research in Theories and Practices that Overcome Inequalities (hereafter CREA) made two critiques of Habermas. CREA's work on communicative acts points out, on the one hand, that the key concept is interaction and not claim; and, on the other hand, that both power interactions and dialogic interactions can be identified in relationships. Although a manager can hold validity claims when inviting his employee to have a coffee with him, the employee can be moved to accept because of the power claim that arises from the unequal structure of the company and of the society, which places her in a subordinate position to the employer. CREA defines power relations as those in which the power interactions involved predominate over the dialogic interactions, and dialogic relations as those in which dialogic interactions are prevalent over power interactions. Dialogic interactions are based on equality and seek understanding through speakers appreciating the arguments provided to the dialogue regardless of the speaker's position of power. In the educational institutions of democracies, we can find more dialogic interactions than in the educational centers of dictatorships. Nonetheless, even in the educational centers of democracies, when discussing curricular issues, the voice of the teaching staff prevails over the voice of the families, which is almost absent. The educational projects that have contributed to transforming some power interactions into dialogic interactions show that one learns much more through dialogic interactions than through power ones.
History
Dialogic education is argued to have historical roots in ancient oral educational traditions. The chavrusa rabbinic approach, for example, involved pairs of learners analyzing, discussing, and debating shared texts during the era of the Tannaim (approximately 10-220 CE).
Dialogue was also a defining feature of early-Indian texts, rituals, and practices that spread across Asia with the rise of Buddhism. Indeed, one of the earliest references to an idea of dialogue is in the Rigveda (c. 1700-1100 BC), where the poet asks the deities Mitra and Varuna to defend him from the one “who has no pleasure in questioning, or in repeated calling, or in dialogue”. Later, Buddhist educators such as Nichiren (1222-1282) would themselves present work in a dialogic form. It has also been linked to traditional Islamic education with Halaqat al-’Ilm, or Halaqa for short, in mosque-based education whereby small groups participate in discussion and questioning in 'circles of knowledge'. A dialogic element has similarly been found in Confucian education.
Links are often also made with the Socratic method, established by Socrates (470-399 BC), which is a form of cooperative argumentative dialogue used to stimulate critical thinking and to draw out ideas and underlying presumptions. Dialogic practices and dialogic pedagogy existed in Ancient Greece before, during, and after Socrates' time, possibly in other forms than those depicted by Plato. There is some debate over whether the Socratic method should be understood as dialectic rather than as dialogic. However it is interpreted, Socrates' approach as described by Plato has been influential in informing modern-day conceptions of dialogue, particularly in Western culture.
Although modern interest in dialogic pedagogy seems to have emerged only in the 1960s, the practice itself is very old and was probably widespread. In more recent times, Mikhail Bakhtin introduced the idea of dialogism, as opposed to "monologism", to literature. Paulo Freire's work Pedagogy of the Oppressed introduced these ideas to educational theory. Over the last five decades, robust research evidence has mounted on the impact of dialogic education. A growing body of research indicates that dialogic methods lead to improved performance in students' content knowledge, text comprehension, and reasoning capabilities. The field has not, however, been without controversy. Indeed, dialogic strategies may be challenging to realize in educational practice given limited time and other pressures. It has also been acknowledged that forms of cultural imperialism may be encouraged through the implementation of a dialogic approach.
Notable authors
Robin Alexander
Mikhail Bakhtin
Karen Barad
Jerome Bruner
Martin Buber
Jacques Derrida
John Dewey
Paulo Freire
Antonio Gramsci
Jürgen Habermas
William James
Julia Kristeva
Matthew Lipman
George Herbert Mead
Maurice Merleau-Ponty
Neil Mercer
Michael Oakeshott
Jean Piaget
Charles Sanders Peirce
Plato
Lev Vygotsky
Rupert Wegerif
See also
Dialogic
Dialectic process vs. dialogic process
Dialogical analysis
Dialogical self
Heteroglossia
Intertextuality
Learning theory (education)
Pedagogy
Relational dialectics
References
Bibliography
Aubert, A., Flecha, A., García, C., Flecha, R., y Racionero, S. (2008). Aprendizaje dialógico en la sociedad de la información. Barcelona: Hipatia Editorial.
Freire, P. (1997). Pedagogy of the Heart. New York: Continuum (O.V. 1995).
Mead, G.H. (1934). Mind, self & society. Chicago: University of Chicago Press.
Searle J., & Soler M. (2004). Lenguaje y Ciencias Sociales. Diálogo entre John Searle y CREA. Barcelona: El Roure Ciencia.
Sen, A. (2005) The argumentative Indian: Writings on Indian history, culture and identity. New York: Farrar, Straus and Giroux.
External links
Journals
Dialogic Pedagogy: An International Online Journal
International Journal for Dialogic Science
Research groups
Cambridge Educational Dialogue Research Group (CEDiR) operates out of the University of Cambridge and contributes to this field. As taken from their website, CEDiR's aim is to consolidate and extend research on dialogic education, reaching across disciplines and contexts to influence theory, policy and practice.
The Center for Research on Dialogic Instruction and the In-Class Analysis of Classroom Discourse is a joint effort housed within the Wisconsin Center for Education Research at the School of Education, University of Wisconsin-Madison.
Learning
Philosophy of psychology
Psychological theories
Theory of mind
Contemporary-Traditional Art
Contemporary-Traditional Art refers to art produced in the present period that reflects the current culture by utilizing classical techniques in drawing, painting, and sculpting. Practicing artists are mainly concerned with the preservation of time-honored skills in creating figurative and representational works of fine art as a means to express human emotions and experiences. Subjects are based on the aesthetics of balancing external reality with the intuitive, internal conscience driven by emotion, philosophical thought, or the spirit. The term is used broadly to encompass all styles and practices of representational art, such as Classicism, Impressionism, Realism, and Plein Air (En plein air) painting. Technical skills are founded in the teachings of the Renaissance, Academic Art, and American Impressionism.
Organizations
Organized groups of practicing artists and institutions dedicated to furthering classical techniques include the Grand Central Atelier, Art Renewal Center, California Art Club, Florence Academy of Art, the Imperial Academy of Arts in St. Petersburg, Russia, Laguna College of Art and Design, Los Angeles Academy of Figurative Art, New York Academy of Art, and Portrait Society of America.
Publications
Publications referencing the term "Contemporary-Traditional" or living artists working in traditional styles:
Empathy For Beauty, Carnegie Art Museum, essay by museum director Suzanne Bellah, 2014
California Light, A Century of Landscapes - Paintings of the California Art Club by Jean Stern and Molly Siple, foreword by Elaine Adams, 2011
Land of Sunlight; Contemporary Paintings of San Diego County (2007) by James Lightner
On Location in Malibu (1999, 2003, 2006, 2009, 2012, 2015) Frederick R. Weisman Museum of Art, curated by museum director Michael Zakian, PhD. Essay, Painting in Malibu, 1999
Selections From the Permanent Collection, Southern Alleghenies Museum of Art, essay by Michael Tomar, PhD., 1996
Notable artists
Peter Seitz Adams (b. 1950)
Ned Bittinger (b. 1951)
Jacob Collins (b. 1964)
Karl Dempwolf (b. 1939)
Frederick Hart (1943-1999, Sculptor)
Everett Raymond Kinstler (b. 1926)
Jeremy Lipking (b. 1974)
Richard Schmid (b. 1934)
Nelson Shanks (1937-2015)
Tim Solliday (b. 1952)
Alexey Steele (b. 1967)
Patricia Watwood (b. 1971)
Bruce Wolfe (b. 1941, Sculptor)
References
Contemporary art
Cognitive inertia
Cognitive inertia is the tendency for a particular orientation in how an individual thinks about an issue, belief, or strategy to resist change. Clinical and neuroscientific literature often defines it as a lack of motivation to generate distinct cognitive processes needed to attend to a problem or issue. The physics term inertia emphasizes the rigidity and resistance to change in the method of cognitive processing that has been used for a significant amount of time. Commonly confused with belief perseverance, cognitive inertia is the perseverance of how one interprets information, not the perseverance of the belief itself.
Cognitive inertia has been causally implicated in disregarding impending threats to one's health or environment, enduring political values and deficits in task switching. Interest in the phenomenon was primarily taken up by economic and industrial psychologists to explain resistance to change in brand loyalty, group brainstorming, and business strategies. In the clinical setting, cognitive inertia has been used as a diagnostic tool for neurodegenerative diseases, depression, and anxiety. Critics have stated that the term oversimplifies resistant thought processes and suggests a more integrative approach that involves motivation, emotion, and developmental factors.
History and methods
Early history
The idea of cognitive inertia has its roots in philosophical epistemology. Early allusions to a reduction of cognitive inertia can be found in the Socratic dialogues written by Plato. Socrates builds his argument by using the detractor's beliefs as the premises of his own argument's conclusions. In doing so, Socrates reveals the detractor's fallacy of thought, inducing the detractor to change their mind or face the reality that their thought processes are contradictory. Ways to combat persistence of cognitive style are also seen in Aristotle's syllogistic method, which employs logical consistency of the premises to convince an individual of the conclusion's validity.
At the beginning of the twentieth century, two of the earliest experimental psychologists, Müller and Pilzecker, defined perseveration of thought to be "the tendency of ideas, after once having entered consciousness, to rise freely again in consciousness". Müller described perseveration by illustrating his own inability to inhibit old cognitive strategies with a syllable-switching task, while his wife easily switched from one strategy to the next. One of the earliest personality researchers, W. Lankes, more broadly defined perseveration as "being confined to the cognitive side" and possibly "counteracted by strong will". These early ideas of perseveration were the precursor to how the term cognitive inertia would be used to study certain symptoms in patients with neurodegenerative disorders, rumination and depression.
Cognitive psychology
Originally proposed by William J. McGuire in 1960, the theory of cognitive inertia was built upon emergent theories in social psychology and cognitive psychology that centered around cognitive consistency, including Fritz Heider's balance theory and Leon Festinger's cognitive dissonance. McGuire used the term cognitive inertia to account for an initial resistance to change how an idea was processed after new information, that conflicted with the idea, had been acquired.
In McGuire's initial study involving cognitive inertia, participants gave their opinions on how probable they believed various topics to be. A week later, they returned to read messages related to the topics they had given their opinions on. The messages were presented as factual and were targeted to change the participants' beliefs about how probable the topics were. Immediately after reading the messages, and again one week later, the participants were reassessed on how probable they believed the topics to be. McGuire believed that participants, discomforted by the inconsistency between the information in the messages and their initial ratings, would be motivated to shift their probability ratings to be more consistent with the factual messages. However, the participants' opinions did not immediately shift toward the information presented in the messages. Instead, the shift toward consistency between their opinions and the messages grew stronger as time passed, often referred to as "seepage" of information. The initial lack of change was reasoned to be due to persistence in the individual's existing thought processes, which inhibited their ability to properly re-evaluate their initial opinion, or as McGuire called it, cognitive inertia.
Probabilistic model
Although cognitive inertia was related to many of the consistency theories at the time of its conception, McGuire used a unique method of probability theory and logic to support his hypotheses on change and persistence in cognition. Utilizing a syllogistic framework, McGuire proposed that if three issues (a, b and c) were so interrelated that an individual's opinions completely supported issues a and b, then it would follow that their opinion would support issue c as a logical conclusion. Furthermore, McGuire proposed that if an individual's belief in the probability (p) of the supporting issues (a or b) was changed, then not only would the explicitly stated issue (c) change, but a related implicit issue (d) could change as well. More formally:
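The formula itself is missing from this text. A standard rendering of McGuire's probabilogical model, reconstructed here (so the exact notation is an assumption rather than a quotation from McGuire), gives the believed probability of the conclusion c in terms of its premises:

```latex
% If premises a and b jointly entail conclusion c, a believer's
% probability for c decomposes into two terms: c holding because
% both premises hold, plus c holding on grounds other than a and b.
p(c) = p(a \wedge b) + p\bigl(c \mid \neg(a \wedge b)\bigr)\, p\bigl(\neg(a \wedge b)\bigr)
```

On this reading, any persuasive change to p(a) or p(b) should propagate to p(c), and likewise to a further issue d resting on the same premises, which is the consistency that McGuire's study measured.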
This formula was used by McGuire to show that the effect of a persuasive message on a related, but unmentioned, topic (d) took time to sink in. The assumption was that topic d was predicated on issues a and b, similar to issue c, so if the individual agreed with issue c then so too should they agree with issue d. However, in McGuire's initial study immediate measurement on issue d, after agreement on issues a, b and c, had only shifted half the amount that would be expected to be logically consistent. Follow-up a week later showed that shift in opinion on issue d had shifted enough to be logically consistent with issues a, b, and c, which not only supported the theory of cognitive consistency, but also the initial hurdle of cognitive inertia.
The model was based on probability to account for the idea that individuals do not necessarily assume every issue is 100% likely to happen, but instead there is a likelihood of an issue occurring and the individual's opinion on that likelihood will rest on the likelihood of other interrelated issues.
Examples
Public health
Historical
Group (cognitive) inertia, how a subset of individuals views and processes an issue, can have detrimental effects on how emergent and existing issues are handled. In an effort to describe the almost lackadaisical attitude of a large majority of U.S. citizens toward the onset of the Spanish flu in 1918, historian Tom Dicke has proposed that cognitive inertia explains why many individuals did not take the flu seriously. At the time, most U.S. citizens were familiar with the seasonal flu. They viewed it as an irritation that was often easy to treat, infected few, and passed quickly with few complications and hardly ever a death. However, this way of thinking undermined preparation for, prevention of, and treatment of the Spanish flu, which spread quickly and in far more virulent form; by the time attitudes changed it was much too late, and the outbreak became one of the deadliest pandemics in history.
Contemporary
In the more modern period, there is an emerging position that anthropogenic climate change denial is a kind of cognitive inertia. Despite the evidence provided by scientific discovery, there are still those – including nations – who deny its incidence in favor of existing patterns of development.
Geography
To better understand how individuals store and integrate new knowledge with existing knowledge, Friedman and Brown tested participants on where they believed countries and cities to be located latitudinally and then, after giving them the correct information, tested them again on different cities and countries. The majority of participants were able to use the correct information to update their cognitive understanding of geographical locations and place the new locations closer to their correct latitudes, supporting the idea that new knowledge affects not only the directly corrected information but also related information. However, there was a small effect of cognitive inertia: some areas were unaffected by the correct information, which the researchers suggested was due to a lack of linkage between the corrected knowledge and the new locations presented.
Group membership
Politics
The persistence of political group membership and ideology is suggested to be due to the inertia of how the individual has perceived the grouping of ideas over time. The individual may accept that something counter to their perspective is true, but it may not be enough to tip the balance of how they process the entirety of the subject.
Governmental organizations are often resistant, or glacially slow, to change along with social and technological transformation. Even when evidence of malfunction is clear, institutional inertia can persist. Political scientist Francis Fukuyama has asserted that humans imbue intrinsic value in the rules they enact and follow, especially in the larger societal institutions that create order and stability. Despite rapid social change and increasing institutional problems, the value placed on an institution and its rules can mask how well the institution is functioning as well as how it could be improved. The inability to change an institutional mindset is consistent with the theory of punctuated equilibrium: long periods of deleterious governmental policy punctuated by moments of civil unrest. After decades of economic decline, the United Kingdom's referendum to leave the EU was seen as an example of dramatic movement after a long period of governmental inertia.
Interpersonal roles
Unwavering views of the roles people play in our lives have been suggested to be a form of cognitive inertia. When asked how they would feel about a classmate marrying their mother or father, many students said they could not view their classmate as a step-father or step-mother. Some students went so far as to say that the hypothetical relationship felt like incest.
Role inertia has also been implicated in marriage and the likelihood of divorce. Research on couples who cohabit before marriage shows they are more likely to get divorced than those who do not. The effect is most pronounced in the subset of couples who cohabit without first being transparent about future expectations of marriage. Over time, cognitive role inertia takes over, and the couple marries without fully processing the decision, often with one or both of the partners not fully committed to the idea. The lack of deliberative processing of existing problems and levels of commitment in the relationship can lead to increased stress, arguments, dissatisfaction, and divorce.
In business
Cognitive inertia is regularly referenced in business and management to refer to consumers' continued use of products, a lack of novel ideas in group brainstorming sessions, and lack of change in competitive strategies.
Brand loyalty
Gaining and retaining new customers is essential to whether a business succeeds early on. To assess a service, a product, or the likelihood of customer retention, many companies invite their customers to complete satisfaction surveys immediately after purchasing a product or service. However, unless the survey is completed immediately after the point of purchase, the customer's response is often based on an existing mindset about the company, not the actual quality of the experience. Unless the product or service is extremely negative or positive, cognitive inertia in how the customer feels about the company will not be inhibited, even when the product or service is substandard. Such satisfaction surveys can therefore lack the information businesses need to improve a service or product and survive against the competition.
Brainstorming
Cognitive inertia plays a role in the lack of ideas generated during group brainstorming sessions. Individuals in a group will often follow an idea trajectory, in which they continue to narrow in on ideas based on the very first idea proposed in the brainstorming session. This idea trajectory inhibits the creation of the new ideas that motivated the group's formation in the first place.
In an effort to combat cognitive inertia in group brainstorming, researchers had business students use either a single-dialogue or a multiple-dialogue approach. In the single-dialogue version, the students listed all their ideas and created one dialogue around the list, whereas in the multi-dialogue version ideas were placed in subgroups that individuals could choose to enter and discuss, then freely move to another subgroup. The multi-dialogue approach combated cognitive inertia by allowing different ideas to be generated in sub-groups simultaneously; each time individuals switched to a different sub-group, they had to change how they were processing the ideas, which led to more novel, high-quality ideas.
Competitive strategies
Adapting cognitive strategies to changing business climates is often integral to whether a business succeeds or fails during economic stress. In the late 1980s in the UK, real estate agents' competitive cognitive strategies did not shift with signs of an increasingly depressed real estate market, despite their ability to acknowledge the signs of decline. Cognitive inertia at the individual and corporate levels has been proposed as a reason why companies do not adopt new strategies to combat worsening decline in the business or to take advantage of new potential. General Mills' continued operation of mills long after they were no longer necessary is an example of a company refusing to change its mindset about how it should operate.
More famously, cognitive inertia in upper management at Polaroid was proposed as one of the main contributing factors to the company's outdated competitive strategy. Management strongly held that consumers wanted high-quality physical copies of their photos, where the company would make their money. Despite Polaroid's extensive research and development into the digital market, their inability to refocus their strategy to hardware sales instead of film eventually led to their collapse.
Scenario planning has been one suggestion for combating cognitive inertia when making strategic decisions to improve business. Individuals develop different strategies and outline how each scenario could play out, considering the different ways it could go. Scenario planning allows diverse ideas to be heard and each scenario to be explored in breadth, which can help combat reliance on existing methods and the assumption that alternatives are unrealistic.
Management
In a recent review of company archetypes that lead to corporate failure, Habersang, Küberling, Reihlen, and Seckler defined "the laggard" as a company that rests on its laurels, believing past success and recognition will shield it from failure. Instead of adapting to changes in the market, "the laggard" assumes that the same strategies that won the company success in the past will do the same in the future. This lag in changing how the company is thought about can lead to rigidity in company identity, as with Polaroid, conflict in adapting when sales plummet, and resource rigidity. In the case of Kodak, instead of reallocating money to a new product or service strategy, managers cut production costs and imitated competitors, both of which led to poorer-quality products and eventually bankruptcy.
A review of 27 firms integrating big data analytics found that cognitive inertia hampered widespread implementation, with managers from sectors not focused on digital technology seeing the change as unnecessary and cost-prohibitive.
Managers with high cognitive flexibility, who can change their type of cognitive processing based on the situation at hand, are often the most successful in solving novel problems and keeping up with changing circumstances. Interestingly, shifts in mental models (disrupting cognitive inertia) during a company crisis frequently originate at the lower group level, with leaders coming to a consensus with the rest of the workforce on how to process and deal with the crisis, rather than vice versa. It is proposed that leaders can be blinded by their authority and too easily disregard those on the front line of the problem, causing them to reject remunerative ideas.
Applications
Therapy
An inability to change how one thinks about a situation has been implicated as one of the causes of depression. Rumination, or the perseverance of negative thoughts, is often correlated with the severity of depression and anxiety. Individuals with high levels of rumination test low on scales of cognitive flexibility and have trouble shifting how they think about a problem or issue even when presented with facts that counter their thinking process.
In a review paper that outlined strategies that are effective for combating depression, the Socratic method was suggested to overcome cognitive inertia. By presenting the patient's incoherent beliefs close together and evaluating with the patient their thought processes behind those beliefs, the therapist is able to help them understand things from a different perspective.
Clinical diagnostics
In nosological literature relating to the symptom or disorder of apathy, clinicians have used cognitive inertia as one of the three main criteria for diagnosis. The description of cognitive inertia here differs from its use in cognitive and industrial psychology in that lack of motivation plays a key role. As a clinical diagnostic criterion, Thant and Yager described it as "impaired abilities to elaborate and sustain goals and plans of actions, to shift mental sets, and to use working memory". This definition of apathy is frequently applied to the onset of apathy due to neurodegenerative disorders such as Alzheimer's and Parkinson's disease, but has also been applied to individuals who have gone through extreme trauma or abuse.
Neural anatomy and correlates
Cortical
Cognitive inertia has been linked to decreased use of executive function, primarily in the prefrontal cortex, which aids in the flexibility of cognitive processes when switching tasks. Delayed responses on the Implicit Association Test (IAT) and the Stroop task have been related to an inability to combat cognitive inertia, as participants struggle to switch from one cognitive rule to the next to answer correctly.
Before taking part in an electronic brainstorming session, participants were primed with pictures that motivated achievement in order to combat cognitive inertia. In the achievement-primed condition, subjects produced more novel, high-quality ideas and made greater use of right frontal cortical areas related to decision-making and creativity.
Cognitive inertia is a critical dimension of clinical apathy, described as a lack of motivation to elaborate plans for goal-directed behavior or automated processing. Parkinson's patients whose apathy was measured using the cognitive inertia dimension showed less executive function control than Parkinson's patients without apathy, possibly suggesting more damage to the frontal cortex. Additionally, greater damage to the basal ganglia in Parkinson's, Huntington's, and other neurodegenerative disorders has been found in patients exhibiting cognitive inertia in relation to apathy when compared to those who do not exhibit apathy. Patients with lesions to the dorsolateral prefrontal cortex have shown reduced motivation to change cognitive strategies and how they view situations, similar to individuals who experience apathy and cognitive inertia after severe or long-term trauma.
Functional connectivity
Nursing home patients who have dementia have been found to have larger reductions in functional brain connectivity, primarily in the corpus callosum, which is important for communication between the hemispheres. Cognitive inertia in neurodegenerative patients has also been associated with decreased connectivity of the dorsolateral prefrontal cortex and posterior parietal area with subcortical areas, including the anterior cingulate cortex and basal ganglia. Both reductions are suggested to decrease motivation to change one's thought processes or to create new goal-directed behavior.
Alternative theories
Some researchers reject the purely cognitive perspective on cognitive inertia and suggest a more holistic approach that considers the motivations, emotions, and attitudes that fortify the existing frame of reference.
Alternative paradigms
Motivated reasoning
The theory of motivated reasoning is proposed to be driven by the individual's motivation to think a certain way, often to avoid thinking negatively about oneself. The individual's own cognitive and emotional biases are commonly used to justify a thought, belief, or behavior. Unlike cognitive inertia, where an individual's orientation in processing information remains unchanged either due to new information not being fully absorbed or being blocked by a cognitive bias, motivated reasoning may change the orientation or keep it the same depending on whether that orientation benefits the individual.
In an extensive online study, participants' opinions were collected after two readings on various political issues to assess the role of cognitive inertia. Participants gave their opinions after a first reading and were then assigned a second reading containing new information that either confirmed or disconfirmed their initial opinion; the majority of participants' opinions did not change. When asked about the information in the second reading, those who did not change their opinion rated information supporting their initial opinion as stronger than information disconfirming it. The persistence in how the participants viewed the incoming information was based on their motivation to be correct in their initial opinion, not on the persistence of an existing cognitive perspective.
Socio-cognitive inflexibility
From a social psychology perspective, individuals continually shape beliefs and attitudes about the world based on interaction with others. What information an individual attends to is based on prior experience and knowledge of the world. Cognitive inertia is seen not just as a malfunction in updating how information is processed, but as a case in which assumptions about the world and how it works impede cognitive flexibility. The persistence of the idea of the nuclear family has been proposed as a form of socio-cognitive inertia. Despite changing trends in family structure, including multi-generational, single-parent, blended, and same-sex-parent families, the normative idea of a family has centered on the mid-twentieth-century nuclear family (i.e., mother, father, and children). Various social influences are proposed to maintain the inertia of this viewpoint, including media portrayals, the persistence of working-class gender roles, unchanged domestic roles despite working mothers, and familial pressure to conform.
The phenomenon of cognitive inertia in brainstorming groups has been argued to be due to other psychological effects, such as fear of disagreeing with an authority figure in the group, fear of new ideas being rejected, and most of the talking being done by a minority of group members. Internet-based brainstorming groups have been found to produce more high-quality ideas because they overcome the problem of speaking up and the fear of idea rejection.
See also
References
Cognitive psychology
Heuristics
Management
Behavioral economics
Perdurantism
Perdurantism or perdurance theory is a philosophical theory of persistence and identity. The debate over persistence currently involves three competing theories—one three-dimensionalist theory called "endurantism" and two four-dimensionalist theories called "perdurantism" and "exdurantism". For a perdurantist, all objects are four-dimensional worms occupying regions of spacetime. Each worm is a fusion of all of the perdurant's instantaneous time slices, compiled and blended into a complete mereological whole. Perdurantism posits that temporal parts alone are what ultimately change. Katherine Hawley in How Things Persist states that change is "the possession of different properties by different temporal parts of an object".
Take any perdurant and isolate a part of its spatial region. That isolated spatial part has a corresponding temporal part to match it. We can imagine an object, or four-dimensional worm: an apple. This object is not just spatially but also temporally extended. The complete view of the apple includes its coming to be from the blossom, its development, and its final decay. Each of these stages is a time slice of the apple, but by viewing the object as temporally extended, perdurantism views the object in its entirety.
The use of "endure" and "perdure" to distinguish two ways in which an object can be thought to persist can be traced to David Kellogg Lewis (1986). However, contemporary debate has demonstrated the difficulties in defining perdurantism (and also endurantism). For instance, the work of Ted Sider (2001) has suggested that even enduring objects can have temporal parts, and it is more accurate to define perdurantism as the claim that objects have a temporal part at every instant at which they exist. Currently, there is no universally acknowledged definition of perdurantism. Others argue that this problem is avoided by treating time as continuous rather than discrete.
Perdurantism is also referred to as "four-dimensionalism" (by Ted Sider, in particular), but perdurantism also applies if one believes there are temporal but non-spatial abstract entities (like immaterial souls or universals of the sort accepted by David Malet Armstrong).
Worm theorists and stage theorists
Four-dimensionalist theorists break into two distinct sub-groups: worm theorists and stage theorists.
Worm theorists believe that a persisting object is composed of the various temporal parts that it has. It can be said that objects that persist are extended through the time dimension of the block universe much as physical objects are extended in space. Thus, they believe that all persisting objects are four-dimensional "worms" that stretch across space-time, and that you are mistaken in believing that chairs, mountains, and people are simply three-dimensional.
Stage theorists take discussion of persisting objects to be talk of a particular temporal part, or stage, of an object at any given time. So, in a manner of speaking, a subject only exists for an instantaneous period of time. However, there are other temporal parts at other times which that subject is related to in a certain way (Sider talks of "modal counterpart relations", whilst Hawley talks of "non-Humean relations") such that when someone says that they were a child, or that they will be an elderly person, these things are true, because they bear a special "identity-like" relation to a temporal part that is a child (that exists in the past) or a temporal part that is an elderly person (that exists in the future). Stage theorists are sometimes called "exdurantists".
Exdurantism, like perdurantism, presumes the temporal ontology of eternalism. With this alternative four-dimensionalist persistence theory, however, ordinary objects are no longer perduring worms but, rather, are wholly present instantaneous stages. Moreover, things also do not gain or lose properties/parts because each distinct stage has all these properties/parts in their entirety from one counterpart stage to the next.
It has been argued that stage theory, unlike worm theory, should be favored because it accurately accounts for the contents of our experience. Worm theory requires that we currently experience more than a single moment of our lives, while we actually find ourselves experiencing only one instant of time, in line with the stage view. Contemporary perdurantists disagree, arguing that a person is a fusion of all the perdurant's instantaneous time slices compiled and blended into a complete mereological whole. Perdurantists do not think you are experiencing more than one time slice at a time, but rather that all those moments are part of reality and comprise you as a whole.
Recently, it has been argued that perdurantism is superior to exdurantism because exdurantism is too extravagant in counting ordinary objects in the world: since objects in their entirety are bound to their momentary stages, and there is practically an interminable number of these stages, counting objects in the ordinary world becomes unreasonable. An exdurantist claims that a continuant holds the same identity simply because one stage is similar to a subsequent stage, which is what makes the two stages temporal counterparts. Resemblance among momentary counterpart stages is insufficient to escape vagueness because similarity itself is vague: similar in what way? By noting when there is similarity among sortals and when adequate causal relations hold between stages, exdurantists avoid vagueness as best they can. Counterpart theorists track the identity of a continuant by following the relationships among stages. The problem remains that there is no clear cutoff point concerning what was and what was not a counterpart of the object, and whether we can really attribute a causal relationship between the distinct momentary counterpart object-stages.
For an exdurantist, there are as many objects as there are moments in a continuant's spacetime career, i.e., as many objects as there are stages of a continuant's existence; e.g., with a continuant like an apple, there are as many distinct objects as there are stages in the span of the apple's spacetime career, which is an enormous number. Perdurantists and endurantists both think there is only one object—one continuant—that persists, while exdurantists think that there is one continuant but a multiplicity of object-stages that exdure. However, as Stuchlik (2003) states, the stage theory will not work under the possibility of gunky time, which holds that every interval of time has a sub-interval; and according to Zimmerman (1996), there have been many self-professed perdurantists who believe that time is gunky or contains no instants. Some perdurantists think the idea of gunk means there are no instants, since they define instants as intervals of time with no subintervals.
See also
Counterpart theory
World line
Ship of Theseus
Notes
References
Bibliography
Balashov, Y. 2015. "Experiencing the Present". Epistemology & Philosophy of Science 44 (2): 61–73.
Callais, R. 2021. "Persistence as a Four-Dimensionalist: Perdurantism vs. Exdurantism". Dialogue 64 (1): 24–29.
Lewis, D. K. 1986. On the Plurality of Worlds. Oxford: Blackwell.
McKinnon, N. 2002. "The Endurance/Perdurance Distinction". Australasian Journal of Philosophy 80 (3): 288–306.
Merricks, T. 1999. "Persistence, Parts and Presentism". Noûs 33: 421–438.
Parsons, J. 2015. "A Phenomenological Argument for Stage Theory". Analysis 75 (2): 237–242.
Sider, T. 2001. Four-Dimensionalism. Oxford: Clarendon Press.
Sider, T. 1996. "All the World's a Stage". Australasian Journal of Philosophy 74 (3): 433–453.
Skow, B. 2011. "Experience and the Passage of Time". Philosophical Perspectives 25 (1): 359–387.
Stuchlik, J. M. 2003. "Not All Worlds Are Stages". Philosophical Studies 116 (3): 309–321.
Zimmerman, D. 1996. "Persistence and Presentism". Philosophical Papers 25 (2).
External links
"Time." Internet Encyclopedia of Philosophy.
"Persistence in Time." Internet Encyclopedia of Philosophy.
Temporal parts at the Stanford Encyclopedia of Philosophy
Mereology
Theories of time
Identity (philosophy) | 0.761169 | 0.977755 | 0.744237 |
Afrocentricity | Afrocentricity is an academic theory and approach to scholarship that seeks to center the experiences and peoples of Africa and the African diaspora within their own historical, cultural, and sociological contexts. First developed as a systematized methodology by Molefi Kete Asante in 1980, he drew inspiration from a number of African and African diaspora intellectuals including Cheikh Anta Diop, George James, Harold Cruse, Ida B. Wells, Langston Hughes, Malcolm X, Marcus Garvey, and W. E. B. Du Bois. The Temple Circle, also known as the Temple School of Thought, Temple Circle of Afrocentricity, or Temple School of Afrocentricity, was an early group of Africologists during the late 1980s and early 1990s that helped to further develop Afrocentricity, which is based on concepts of agency, centeredness, location, and orientation.
Definition
Afrocentricity was coined to evoke "African-centeredness", and, as a unifying paradigm, draws from the foundational scholarship of Africana studies and African studies. Those who identify as specialists in Afrocentricity, including historians, philosophers, and sociologists, call themselves "Africologists" or "Afrocentrists." Africologists seek to ground their work in the perspective and culture common to African peoples, and center African peoples and their experiences as agents and subjects.
Ama Mazama defined the paradigm of Afrocentricity as being composed of the "ontology/epistemology, cosmology, axiology, and aesthetics of African people" and as being "centered in African experiences", which then conveys the "African voice". According to her, Afrocentricity incorporates African dance, music, rituals, legends, literature, and oratures as key features of its expository approach. Axiological features of Afrocentricity that Mazama identifies include explorations of African ethics, and the aesthetic aspects incorporate African mythology, rhythm, and the performing arts. Mazama also argues that Afrocentricity can integrate aspects of African spiritualities as essential components of African worldviews. Mazama sees spirituality and other intuitive methods of acquiring knowledge and emotional responses used in the paradigm as a counterbalance to rationality, and firsthand experience of these cultural and spiritual artifacts can inform Afrocentricity. Mazama indicates that many of the terms and concepts used in Afrocentricity are meant to shift the conceptual status of Africans from being objects that are acted upon to subjects who are agents that act.
In contrast to the hegemonic ideology of Eurocentrism, the paradigm of Afrocentricity is argued by Africologists to be non-hierarchical and pluralistic and not intended to supplant "'white knowledge' with 'black knowledge'". As a holistic multidisciplinary theory with a strong focus on the location and agency of Africans, Afrocentricity is designed not to accept the role of subaltern prescribed to Africans by Eurocentrism. An important aspect of Afrocentricity is therefore a deconstruction and criticism of hegemony, racism, and prejudice.
Africologists, who produce Afrocentric academic works, identify their professional field as Afrocentricity – not Afrocentrism. Crucially, not all academic works that focus on African or African-American topics are necessarily Afrocentric, nor are works on melanist theories or those rooted in matters of skin color, biology, or biological determinism; this means that some claims to Afrocentricity are not strictly part of the paradigm, and certain critiques of supposedly Afrocentric ideas may not be critiques of Afrocentricity per se.
History
Midas Chawane outlined in his historical survey of the development of Afrocentricity how experiences of the trans-Atlantic slave trade, Middle Passage, and legal prohibition of literacy, shared by enslaved African-Americans, followed by the experience of dual cultures (e.g., Africanisms, Americanisms), resulted in some African-Americans re-exploring their African cultural heritage rather than choosing to be Americanized. Additionally, the African-American experience of ongoing racism emphasized the importance that culture and its relative nature could have on their intellectual enterprise. All of this cultivated a foundation for the development of Afrocentricity. Examples of the kinds of arguments that presaged Afrocentricity include pieces published in the Freedom's Journal (1827) that drew connections between Africans and ancient Egyptians, African-American abolitionists, such as Frederick Douglass and David Walker, who highlighted the accomplishments of the ancient Egyptians as Africans to undermine the white supremacist assertion that Africans were inferior, and the assertions of the Pan-Africanist Marcus Garvey, who argued that ancient Egypt laid the foundation for civilization in world history. These would be echoed in the contexts of Black Nationalism, Negritude, Pan-Africanism, the Black Power movement, and the Black is Beautiful movement, which served as harbingers for the formal development of Afrocentricity.
Molefi Kete Asante dates the first use of the term, "Afro-centric", to 1964, when the Institute of African Studies was being established in Ghana and its founder, Kwame Nkrumah, said to the Editorial Board of the Encyclopedia Africana: "[T]he Africana Project must be frankly Afro-centric in its interpretation of African history, and of the social and cultural institutions of the African and people of African descent everywhere." Other antecedents to Afrocentricity identified by Asante include the 1948 work of Cheikh Anta Diop when he introduced the idea of an "African Renaissance", J.A. Sofala's 1973 treatise The African Culture and Personality, and the three 1973 publications of The Afrocentric Review. Following the example of these and other preceding African intellectuals, Asante formally proposed the concept of Afrocentricity in a 1980 publication, Afrocentricity: The Theory of Social Change, and further refined the concept in The Afrocentric Idea (1987). Other influential publications that helped to develop Afrocentricity include Linda James Myers' Understanding the Afrocentric Worldview (1988), Asante's Kemet, Afrocentricity and Knowledge (1992), Ama Mazama's edited compilation The Afrocentric Paradigm (2003), and Asante's An Afrocentric Manifesto (2007).
Temple University, the institutional home of Molefi Kete Asante and site of the first PhD program in the field of Africana Studies, which at Temple is named Africology and African American Studies, is widely regarded as the leading institution for scholarship in Afrocentricity. In addition to Molefi Kete Asante, Afrocentricity developed among the "Temple Circle" (e.g., Abu Abarry, Kariamu Welsh Asante, Terry Kershaw, Tsehloane Keto, Ama Mazama, Theophile Obenga). As a result of the scholarly development of Afrocentricity, several scholarly journals and professional associations have developed throughout the United States of America and Africa. As a global intellectual enterprise, Afrocentricity is studied, taught, and exemplified at institutions and locations, such as Quilombismo (which was initiated by Abdias Nascimento) in Brazil, the Universitario del Pacifico in Buenaventura, Colombia, the programs of Africamaat in Paris, France, the Centre for African Renaissance at the University of South Africa in South Africa, a training program operated by Stanley Mkhize at the University of Witwatersrand in South Africa, and the Molefi Kete Asante Institute in Philadelphia, Pennsylvania, United States. Africological conferences also developed, some which operate by invitation, and some which occur on a yearly basis, such as the Cheikh Anta Diop Conference. The theory of Afrocentricity also had subsequent impact on other academic fields and theories, such as anthropology, education, jazz theory, linguistics, organizational theory, and physical education.
Differences between Afrocentricity and Afrocentrism
Afrocentricity and Afrocentrism are not synonymous, but, instead, are distinct from one another, and should not be mistaken for one another. Molefi Kete Asante explains:
By way of distinction, Afrocentricity should not be confused with the variant Afrocentrism. The term “Afrocentrism” was first used by the opponents of Afrocentricity who in their zeal saw it as an obverse of Eurocentrism. The adjective “Afrocentric” in the academic literature always referred to “Afrocentricity.” However, the use of “Afrocentrism” reflected a negation of the idea of Afrocentricity as a positive and progressive paradigm. The aim was to assign religious signification to the idea of African centeredness. However, it has come to refer to a broad cultural movement of the late twentieth century that has a set of philosophical, political, and artistic ideas which provides the basis for the musical, sartorial, and aesthetic dimensions of the African personality. On the other hand, Afrocentricity, as I have previously defined it, is a theory of agency, that is, the idea that African people must be viewed and view themselves as agents rather than spectators to historical revolution and change. To this end Afrocentricity seeks to examine every aspect of the subject place of Africans in historical, literary, architectural, ethical, philosophical, economic, and political life.
In addition to Molefi Kete Asante, many other academics have explained that Afrocentricity and Afrocentrism are distinct from one another, and that critics have often conflated the two when criticizing Afrocentricity. Further, Asante indicates that by conflating Afrocentricity with Afrocentrism, critics of Afrocentrism have mischaracterized Afrocentricity as being a "'religious' movement based on an essentialist paradigm." Other academics have also been critical of criticisms that seek to define Afrocentricity as a religious movement. Historian and medical anthropologist Katherine Bankole-Medina notes that rather than seeking to understand the theory of Afrocentricity or engage in constructive discourse with the scholars of the theory, many critical academics seek to critique and discredit the theory as well as engage in intellectual militarism. Consequently, many critical academics tend to overlook the key suffix distinction between Afrocentrism and Afrocentricity (i.e., -ism and -icity). Philosopher Ramose indicated that, in contrast to Afrocentricity, Afrocentrism has been characterized as a notion that negates the idea of Afrocentricity being a "positive and progressive paradigm."
Other academics have indicated that as Afrocentricity has become increasingly well known inside and outside of academia, non-academics have developed their own, less precise and less accurate forms of analysis, which have been incorporated into various forms of media (e.g., music, film). This form of popular culture, or Afrocentrism, has also been mistaken for the systematic methodology of Afrocentricity. As a result of popular misconceptions of what Afrocentricity is not, Stewart indicates that this has had a negative impact on public perception. Some academics have stated that, while Afrocentrism is popular culture, Afrocentricity is an academic theory, and that Afrocentricity has been depicted by mass media and critics as Afrocentrism in order to mischaracterize or invalidate it. Karenga indicated that distinctions exist between the public understanding of Afrocentrism conveyed through mass media, which is held by some proponents and some critics of Afrocentrism, and the academic conceptualization of Afrocentricity held by Africologists. Karenga indicates that Afrocentricity is an intellectual paradigm or methodology, whereas Afrocentrism, by virtue of the term's suffix (i.e., -ism), is an ideological and political disposition. Additionally, Karenga indicates that, in Afrocentricity, African behaviors and African culture are subject to examination through the centered lens of African ideals. M'Baye indicates that, unlike Afrocentrism, the intellectual theory of Afrocentricity adds value to the field of Black studies.
Some academics have stated that some of the more radical views of Afrocentricism have been unfairly attributed to Kete Asante.
Some academics have indicated that Afrocentricity is distinct from Afrocentrism, and that Afrocentrism is frequently confused with ethnonationalism, often simplified to black pride or romanticized black history, often misconstrued by progressive/liberal academics as being a black version of white nationalism, or mischaracterized as being a black version of Eurocentrism. They further state that Afrocentrism has been fallaciously characterized as a notion based on black supremacy and as the black equivalent of hegemonic Eurocentrism. Rasekoala states that, while Afrocentrism has been characterized as an ideology focused on cultural traits (e.g., customs, habits, traditions, values, value systems) of Africans, Afrocentricity is a methodology that focuses on the positionality, agency, and experiences of Africans.
Proponents of Afrocentricity state that it is a theoretic concept of agency. They further state that the detractors of Afrocentricity intentionally mislabel Afrocentricity as Afrocentrism in order to steer people of African descent, who are not yet aware of what composes Afrocentricity, away from it. This has been characterized as an "ongoing ideological warfare to ensure the continuation of the subjugation of African people as objects of analysis, thus discouraging them from being agents in their own history." Additionally, it has been indicated that those who charge scholars of Afrocentricity with producing political propaganda do so as well, while portraying it as scholarship, in order to deny the agency of Africans and to avoid critique. Hilliard and Alkebulan indicate that, rather than the academic work of scholars of Afrocentricity being used to define Afrocentricity, mass media has shaped the public understanding of Afrocentricity using the work of journalists and of academics who are not professionals in the field of Afrocentricity – such as Mary Lefkowitz and her work, Not Out of Africa, which also confuses Afrocentrism with Afrocentricity – as authoritative sources for criticisms of Afrocentricity. Cultural critic and postcolonial studies professor Edward Said has also been criticized for confusing Afrocentricity with Afrocentrism.
In 1991, the term Afrocentrism was created, by either the New York Times or Newsweek, in opposition to Afrocentricity, and critics of Afrocentricity advanced this effort. Zulu indicates that Afrocentrism was an imposed term, part of a deceptive grand narrative intended to derail and curtail the momentum of the paradigm of Afrocentricity being adopted and used.
Asante indicates that Afrocentrism post-dates Afrocentricity as a concept. Other scholars indicate that what has come to be known as Afrocentrism has existed among black communities for centuries as a grassroots political understanding and narrative tradition about the history of Africa and Africans, which lies in contrast to and is distinct from the theory of Afrocentricity and Africology movement that developed in the 1980s. Additionally, use of the term Afrocentric preexisted the birth of Kete Asante and it later became incorporated into the Afrocentric methodology and paradigm created by Asante. As Kete Asante further notes, while African-centeredness may suggest a limitation in geography, Afrocentricity can be performed anywhere in the world as a form of academic study.
While there are different designations (e.g., Africanity, Gloriana Afrocentricity, Proletarian Afrocentricity) for Afrocentricity, Amo‑Agyemang indicates that Afrocentricity should not be mistaken for Afrocentrism and does not seek to replace Eurocentrism. As Afrocentricity centers African identity, and privileges the concepts, traditions, and history of Africans, Amo‑Agyemang indicates that Afrocentricity clarifies, deconstructs, and undermines hegemonic epistemologies; also, that it serves as a liberatory method that "negates/repudiates exploitations, oppression, repression, domination and marginality of indigenous cultural knowledge" and seeks the "democratisation of knowledge, de‑hegemonisation of knowledge, de‑westernisation of knowledge, and de‑Europeanisation of knowledge".
Criticisms and responses to criticisms
Major critics of Afrocentricity have been Tunde Adeleke (e.g., The Case Against Afrocentrism, 2009), Clarence Walker (e.g., Why We Can't Go Home Again, 2001), Stephen Howe (e.g., Afrocentrism: Mythical Pasts and Imagined Homes, 1998), and Mary Lefkowitz (e.g., Not Out of Africa, 1997). These major critical works were characterized in Asante (2017) as being a "misunderstanding of Afrocentricity or an attempt to relaunch the Eurocentric domination in knowledge, criticism, and literature."
Esonwanne (1992) critiqued Asante's Kemet, Afrocentricity and Knowledge (1990) and characterized its discourse as "implausible", its argumentation as "disorganized", its analysis as "crude and garbled", its perceived lack of seriousness as harmful to the "serious study of African American and African cultures", as being part of a "whole project of Afrocentrism", and as being "off-handedly racist". Esonwanne (1992) indicates that the redeeming quality and "intellectual value" of Asante's earlier work is its "negative value" and that it is a prime example of what researchers in African studies and African-American studies "would do well to avoid". Esonwanne (1992) further characterizes Asante's Afrocentricity as a "post-Civil Rights individualist version of the pan-Africanist doctrine" that merits holding in abeyance the temptation to dismiss the notion of Afrocentricity completely.
Asante (1993) critiqued Esonwanne (1992) and the critical review that was given to his earlier work. Asante indicated that scholars who considered using Esonwanne (1992) as a means to comprehend his earlier work would have a limited comprehension of his earlier work. Esonwanne's characterization of Asante's work as "off-handedly racist" was characterized by Asante as "gratuitous mudslinging" that lacked specificity about what was being characterized as "off-handedly racist". Additionally, Asante indicated that, due to the lack of specific example cited from his earlier work to support the characterization of it as "off-handedly racist", it was "not only a serious breach of professionalism but a grotesque and dishonest intellectual ploy".
Esonwanne (1992) indicated that grouping Cheikh Anta Diop, Maulana Karenga, and Wade Nobles together was a "strange mix" due to each of the scholars having different methodological approaches to African studies and African-American studies. Based on this characterization of Asante's earlier work as a "strange mix", Asante (1993) viewed this as indication of Esonwanne (1992) showing a lack of comprehension and familiarity with his earlier work, with the works of Diop, Karenga, and Wade, as well as the theory of Afrocentricity. Asante (1993) went on to clarify that Cheikh Anta Diop, Maulana Karenga, and Wade Nobles, despite differences in professional backgrounds or academic interests, were all scholars in the theory of Afrocentricity.
Asante (1993) went on to clarify that, similar to the use of the term "European", the composite term "African" is used not in reference to an abstraction, but in reference to ethnic identity and cultural heritage; as such, there are modal uses of terms such as "African civilization" and "African culture", which do not deny the significance of the discrete identities and heritages of more specific African groupings (e.g., African-American, Hausa, Jamaican, Kikuyu, Kongo, Yoruba). Asante (1993) indicates that usage of such terms, in reference to Ma'at, was addressed in a chapter of his earlier work, but that the shortcomings of the critiques presented in Esonwanne (1992) show that Esonwanne may not have read as far as that chapter.
Hill-Collins (2006) characterized Afrocentrism as essentially being a civil religion (e.g., common beliefs and values; common tenets that distinguish believers from non-believers; views on the unknowns of life, on suffering, and on death; common places of gathering and rituals that establish one as a member of an institutionalized belief system). Some aspects that she defined and related to Asante's Afrocentricity were a fundamental love for black people and blackness (e.g., negritude) and common black values (e.g., Karenga's established values and principles of Kwanzaa); another aspect was black centeredness as a form of grace or relief from white racism; another was the "original sin" of the Trans-Atlantic slave trade as the major explanation for the suffering and death of black people, Africa as the promised land, and a form of salvation through self-redefinition and self-reclamation as an African people as well as rejection of what is perceived as being of white people and white culture (which are viewed as bearing evil qualities in relation to black people). Another aspect of the characterization of Afrocentrism as a civil religion involves the homophobic and sexist exclusion of black GLBTQ individuals, black women, biracial and multiracial individuals, and wealthy black individuals.
Asante (2007) characterized Hill-Collins (2006) as following a similar approach as Stephen Howe and Mary Lefkowitz of not providing a clear definition for the concept of Afrocentricity that they are attempting to critique and then, subsequently, negatively and incorrectly characterizing Afrocentricity as Afrocentrism (i.e., a black form of Eurocentrism). Asante indicates that Afrocentricity is not an enclosed system of thought or religious belief; rather, he indicated that it is an unenclosed, critical dialectic that allows for open-ended dialogue and debate on the fundamental assumptions that the theory of Afrocentricity is based on. Asante further critiqued and characterized Hill-Collins (2006) as being "not only poor scholarship", but a "form of self-hatred" that is typically "engaged in by vulgar careerists whose plan is to distance themselves from African agency". Asante highlighted Hill-Collins' intellectual work on the centeredness of women of the African diaspora to contrast with her characterized lack of understanding of the intellectual work on the centeredness of African people that Afrocentricity focuses on. As a follow-up to Hill-Collins' Black Power to Hip Hop: Racism, Nationalism, and Feminism?, she authored Ethnicity, Culture, and Black Nationalist Politics, which Asante characterizes as having vaguely defined notions of black nationalism, Afrocentrism/Afrocentricity, civil religion, and African-American ethnic identity. Asante characterized her critiques of Afrocentricity as being supportive of a manufactured intellectual agenda and predicated on the reactionary politics surrounding modern American history.
Asante (2007) highlights that Hill-Collins' perspective on black nationalism, rather than being distinct from usual approaches, derives from the same origin as these approaches (e.g., black feminist nationalism, cultural nationalism, religious nationalism, revolutionary nationalism). Within the context of racialized American national identity, Asante characterizes Hill-Collins' notion of civil religion as the reverence for American civil government and its political principles; along with this notion is the characterized view of immigrating Afro-Caribbeans choosing how to not become "black" Americans (who later join with African-Americans and partake in the UNIA movement), immigrating Europeans choosing how to become "white" Americans, the European-American social power of whiteness to erase their racial identity and become any other identity (e.g., Native American, of Irish descent, of Italian descent) except an identity of African descent, the European-American social power to operate as individuals rather than as a monolithic racial identity (e.g., Black American), and a tradition of racism operating in the modern context of color-blindness, desegregation, and the illusion of equality.
Following her characterized view of black nationalism, Asante (2007) indicates that Hill-Collins conflates black nationalism (e.g., Louis Farrakhan and the Nation of Islam) with Afrocentricity (e.g., Molefi Kete Asante and Afrocentricity). Asante indicates that black nationalism, as a political ideology, is distinct from Afrocentricity, which is a philosophical paradigm, and that both serve distinct purposes and operate in distinct spheres. Rather than being a reformulation of black cultural nationalism and a civil religion, Asante indicates that Black studies derived and developed from black nationalism and that the development of Afrocentricity post-dates the development of Black studies. Asante indicates that the correct understanding that Hill-Collins has is that "Afrocentricity is a social theory in the sense that it explains the dislocation, disorientation, and mental enslavement of African people as being a function of white racial hegemony." In relation to this view, Asante indicates that mutilating one's own people is one of the greatest forms of dislocation and that heeding the instruction of a "slave master" to intellectually attack one's own people is a form of dislocated behavior.
The centerpiece of Hill-Collins’ approach, as Asante (2007) characterized it, is that "Afrocentricity took the framework of American civil religion and stripped it of its American symbols and substituted a black value system." Asante indicates that the earliest Africologists (e.g., Nah Dove, Tsehloane Keto, Ama Mazama, Kariamu Welsh, Terry Kershaw) of the "Temple Circle" or contemporaneous scholars (e.g., Maulana Karenga, Wade Nobles, Asa Hilliard, Clenora Hudson-Weems, Linda Myers) had no conscious intention of creating a civil religion as Hill-Collins claims.
List of Africologists
Temple Circle
Abu Abarry
Kariamu Welsh Asante
Molefi Kete Asante
Aisha Blackshire-Belay
Nah Dove
Charles Fuller
Terry Kershaw
C. Tsehloane Keto
Ama Mazama
Theophile Obenga
James Ravell
Thelma Ravell
References
African diaspora
African studies
Black studies
Research
Methodology
Postmodernism
1964 neologisms | 0.764069 | 0.974044 | 0.744237 |
Critical design | Critical design uses design fiction and speculative design proposals to challenge assumptions and conceptions about the role objects play in everyday life. Critical design plays a similar role to product design, but does not emphasize an object's commercial purpose or physical utility. It is mainly used to share a critical perspective or inspire debate, while increasing awareness of social, cultural, or ethical issues in the eyes of the public. Critical design was popularized by Anthony Dunne and Fiona Raby through their firm, Dunne & Raby.
Critical design can make aspects of the future physically present to provoke a reaction. "Critical design is critical thought translated into materiality. It is about thinking through design rather than through words and using the language and structure of design to engage people."
It may be conflated with critical theory or the Frankfurt School, but the two are not related.
Definition
A critical design object challenges an audience's preconceptions, provoking new ways of thinking about the object, its use, and the surrounding culture. Its opposite is affirmative design: design that reinforces the status quo. For a project to succeed in critical design, the viewer must be mentally engaged and willing to think beyond the expected and ordinary. Humor is important, but satire is not the goal.
Many practitioners of critical design have never heard of the term itself, or would describe their work differently. Referring to such work as critical design simply draws more attention to it and emphasizes that design has applications beyond problem solving.
It is more of an attitude than a style or movement; a position rather than a method. Critical design builds on this attitude by creatively critiquing concepts and ideologies using fabricated artifacts to embody commentaries around everything from consumer culture to the #MeToo Movement. Regardless of its processes, critical design is often discussed as a unique approach in Design Research, perhaps because of its focus on critiquing widely held social, cultural, and technical beliefs. The process of designing such an object, as well as the presentation and narrative around the object itself, allows for reflection on existing cultural values, morals, and practices. In making such an object, critical designers frequently employ classic design processes—research, user experience, iteration—while working to conceptualize scenarios intended to highlight social, cultural, or political paradigms. Design as societal critique is not a new idea.
History
Italian Radical Design of the 1960s and 70s was highly critical of prevailing social values and design ideologies.
The term "critical design" was first used in Anthony Dunne's book Hertzian Tales (1999) and further developed in Design Noir: The Secret Life of Electronic Objects (2001).
According to Sanders, critical design probes are "ambiguous stimuli that designers send to people, who then respond to them, providing insights for the design process." Uta Brandes identifies critical design as a discrete Design Research method, and Bowen integrates it into human-centered design activities as a useful tool for stakeholders to think critically about possible futures.
FABRICA, a communication research center owned by Italian fashion giant Benetton Group, has been actively involved in producing provocative imagery and critical design projects. FABRICA's Visual Communication department, led by Omar Vulpinari, actively participates in critiquing social, political and environmental issues through global awareness campaigns for international magazines and organizations like UN-WHO. Several young artists who have produced critical design projects at FABRICA in recent years are Erik Ravello (Cuba), Yianni Hill (Australia), Marian Grabmayer (Austria), Priya Khatri (India), Andy Rementer (United States), and An Namyoung (South Korea).
Function
As a contribution to design practice, critical design broadens the vision of design beyond traditional practice. It is no longer limited to highlighting the physical function of a product, though this shift causes some ambiguity in discussions of critical design's function within the design field. Matt Malpass addresses Larry Ligo's classification of five types of function (structural articulation, physical function, psychological function, social function, and cultural-existential function) in his article, with a further discussion of how Modernism left a narrower understanding of function as physical utility, which leads to the ambiguity surrounding critical design's function. Because critical design focuses on the present social, cultural, and ethical implications of design objects and practice, its function is mostly a matter of social and cultural impact.
In addition, critical design objects have considerable potential to contribute to testing ideas during the development of new technology. Dunne and Raby express concern that communication between specialists and the general public rarely forms a two-way discussion of new technology; it is usually limited to a one-way flow from specialists to the public. Critical design provides a stage for presenting scenarios, completes the dialogue between specialists and the general public, and helps collect feedback from the public for further refinement before an idea has gone too far to change.
Critical play
Researcher Mary Flanagan wrote Critical Play: Radical Game Design in 2009, the same year that Lindsay Grace started the Critical Gameplay project. Grace's Critical Gameplay project is an internationally exhibited collection of video games that apply critical design. The games provoke questions about the way games are designed and played. The Critical Gameplay game Wait was awarded the Games for Change Hall of Fame award as one of the five most important games for social impact since 2003. The work has been shown at the Electronic Language International Festival, the Games, Learning & Society Conference, and the Conference on Human Factors in Computing Systems, among other notable events.
Critiques
As critical design has gained mainstream exposure, the discipline has itself been criticized by some for dramatizing so-called 'dystopian scenarios' that may, in fact, reflect real-life conditions in some parts of the world. Some see critical design as rooted in the fears of a wealthy, urban, Western population and as failing to engage with existing social problems. As an example, a project titled Republic of Salivation, by designers Michael Burton and Michiko Nitta, featured as part of MoMA's Design and Violence series, portrays a society plagued by overpopulation and food scarcity that is reliant on heavily modified, government-provided nutrient blocks. Certain media responses to the work point to the "presumed naivety of the project," which presents a scenario that "might be dystopian to some, but in some other parts of the world it has been the reality for decades."
Critical acclaim
In recognition of their formalization of the field, Anthony Dunne and Fiona Raby were presented with the inaugural MIT Media Lab Award in June 2015 with director Joichi Ito pointing out that "[Dunne and Raby's] pioneering approach to critical design and its intersection with science, technology, art, and the humanities has changed the landscape of design education and practice worldwide."
Distinction from conceptual art
Conceptual art plays a role similar to that of critical design: both present critical perspectives to the public and act as commentators on social issues, so the public may confuse the two fields. However, Matt Malpass points out that the critical designer still applies skills from training and practice as a designer, but re-orients those skills from practical ends toward design work that functions symbolically, culturally, existentially, and discursively. Critical design objects are made according to design principles and carefully follow the design and design-research process. Critical design objects also stay close to people's everyday lives: they tend to be tested on real people, and the resulting feedback informs further development. Conceptual art, by contrast, is usually associated with gallery spaces and mostly tends to employ artistic media in its process.
See also
Social fiction
Design fiction
Critical making
Critical technical practice
Science fiction prototyping
Speculative design
Talk to Me (exhibition) (MoMA), 2011
References
Critical theory
Traditionalism
Traditionalism is the adherence to traditional beliefs or practices. It may also refer to:
Religion
Traditional religion, a religion or belief associated with a particular ethnic group
Traditionalism (19th-century Catholicism), a 19th-century theological current
Traditionalist Catholicism, a movement that emphasizes beliefs, practices, customs, traditions, liturgical forms, devotions and presentations of teaching associated with the Catholic Church before the Second Vatican Council (1962–1965).
Traditionalist Christianity, also known as Conservative Christianity
Traditionalism (Islam), an early Islamic movement advocating reliance on the prophetic traditions (hadith)
Traditionalist theology (Islam), a modern movement that rejects rationalistic theology (kalam)
Traditionalism (Islam in Indonesia), an Indonesian Islamic movement upholding vernacular and syncretic traditions
Traditionalist School (perennialism), a school of religious interpretation concerned with the perceived demise of Western knowledge
Politics
Traditionalist conservatism, a school concerned about traditional values, practical knowledge and spontaneous natural order
Traditionalist conservatism in the United States, a post-World War II American political philosophy
Carlism, a 19th–20th century Spanish political movement related to Traditionalism
Traditionalism (Spain), a Spanish political doctrine
Other uses
Traditionalist School (architecture), a movement in early 20th-century Dutch architecture
Traditionalism Revisited, a 1957 album by American jazz musician Bob Brookmeyer
See also
Radical Traditionalism (disambiguation)
Tradition (disambiguation)
Trad (disambiguation)
Style of life
The term style of life was used by psychiatrist Alfred Adler as one of several constructs describing the dynamics of the personality.
Origins
Adler was influenced by the writings of Hans Vaihinger, and his concept of fictionalism, mental constructs, or working models of how to interpret the world. From them he evolved his notion of the teleological goal of an individual's personality, a fictive ideal, which he later elaborated with the means for attaining it into the whole style of life.
The Life Style
The Style of Life reflects the individual's unique, unconscious, and repetitive way of responding to (or avoiding) the main tasks of living: friendship, love, and work. This style, rooted in a childhood prototype, remains consistent throughout life, unless it is changed through depth psychotherapy.
The style of life is reflected in the unity of an individual's way of thinking, feeling, and acting. The life style was increasingly seen by Adler as a product of the individual's own creative power, as well as being rooted in early childhood situations. Clues to the nature of the life style are provided by dreams, memories (real or constructed), and childhood/adolescent activities.
Often bending an individual away from the needs of others or of common sense, in favor of a private logic, movements are made to relieve inferiority feelings or to compensate for those feelings with an unconscious fictional final goal.
At its broadest, the life style includes self-concept, the self-ideal (or ego ideal), an ethical stance and a view of the wider world.
Classical Adlerian psychotherapy attempts to dissolve the archaic style of life and stimulate a more creative approach to living, using the standpoint of social usefulness as a benchmark for change.
Types of style
Adler felt he could distinguish four primary types of style, three of which he called "mistaken styles".
These include:
the ruling type: aggressive, dominating people who don't have much social interest or cultural perception;
the getting type: dependent people who take rather than give;
the avoiding type: people who try to escape life's problems and take little part in socially constructive activity.
the socially useful type: people with a great deal of social interest and activity.
Adler warns that the heuristic nature of types should not be taken seriously, for they should only be used "as a conceptual device to make more understandable the similarities of individuals". Furthermore, he claims that each individual cannot be typified or classified because each individual has a different/unique meaning and attitude toward what constitutes success.
Religious interpretation
Adler used life style as a way of psychologising religion, seeing evil as a distortion in the style of life, driven by egocentrism, and grace as first the recognition of the faulty life style, and then its rectification by human help to rejoin the human community.
Wider influence
Wilhelm Stekel discussed the 'Life goals' (Lebensziele) set in childhood, and neurosis as their product, in what Henri Ellenberger described as "Adler's ideas expressed almost in his own words".
Strongly influenced by Adler was the idea of a life script in Transactional analysis. Discussing the script as "an ongoing life plan formed in early childhood", Eric Berne wrote that "of all those who preceded transactional analysis, Alfred Adler comes the closest to talking as a script analyst". He quoted him as saying: "'If I know the goal of a person I know in a general way what will happen...a long-prepared and long-meditated plan for which he alone is responsible'".
See also
Classical Adlerian psychology
Lifestyle (sociology)
Neo-Adlerian
Individual Psychology
References
Further reading
Shulman, Bernard H. & Mosak, Harold H. (1988). Manual for Life Style Assessment. Muncie, IN: Accelerated Development.
Powers, Robert L. & Griffith, Jane (1987). Understanding Life-Style: The Psycho-Clarity Process. Chicago, IL: Americas Institute of Adlerian Studies.
Eckstein, Daniel & Kern, Roy (2009). Psychological Fingerprints: Lifestyle Assessments and Interventions. Dubuque, IA: Kendall/Hunt.
Bishop, Malachy L. & Rule, Warren L. (2005). Adlerian Lifestyle Counseling: Practice and Research. New York: Routledge.
External links
Alfred Adler's 'Individual Psychology'
Adlerian psychology
Post-evangelicalism
Post-evangelicalism is a movement of former adherents of evangelicalism, sometimes linked with the emerging church phenomenon, but including a variety of people who have distanced themselves from mainstream evangelical Christianity for theological, political, or cultural reasons. Most who describe themselves as post-evangelical are still adherents of the Christian faith in some form.
Origin of the term
While the origin of the term post-evangelical is uncertain, it was brought into broad usage by Dave Tomlinson through his 1995 book of the same name. Tomlinson has said that he first heard the term from a friend, although he "suspect[ed] the term had entered our consciousness surreptitiously a couple of years earlier." In his usage of the term, Tomlinson argues that evangelicalism is a response to modernism, no longer appropriate in a postmodern world.
Criticisms of evangelicalism
Some post-evangelical criticisms of the evangelical church include but are not limited to:
Individualism, pursuit of tangible success as a sign of spiritual maturity, and a consequently underdeveloped ecclesiology
Politicization of Christian doctrine; "theologization" of political ideology
Ethnocentric, especially Americentric, bias in theology, often in conjunction with nationalistic or exceptionalist politics
General lack of positive engagement with the social and natural sciences, music, art, philosophy, news media, and other expressions of culture
Materialist and consumerist lifestyles, as well as the strong promotion of capitalist economics and neoconservative (in the United States, Republican) politics as quasi-religious obligations due to the influence of the Christian right
Strong opposition from Reformational traditions, particularly Calvinism, to developments in biblical theology (such as the New Perspective on Paul)
Denominationalism and resistance to ecumenical efforts
Other definitions
Christianity Today explains that post-evangelicals have become willingly disassociated with the mainstream evangelical belief system over difficulties with any combination of at least the following issues:
Questions over biblical inerrancy. Questions may relate to the biblical record of history, contradictions between scientific and scriptural explanations of the nature of the universe and humanity (e.g., the origin of the universe, homosexuality), or discrepancies in descriptions of the personality of God across the books of the Bible. Surrounding these issues are considerations of how cultural understandings and the linguistic limitations of the written word have influenced the way Scripture has been recorded and used.
Jesus versus Paul - Some post-evangelicals express concern over the role that the Apostle Paul of Tarsus played in the formation of the earliest Christian Church.
The moral failure of prominent evangelical leaders.
Many post-evangelicals have come of age during times of increasing multi-cultural awareness in Western society. They are presented with the educational lessons of the validity of all cultures and necessity for a pluralistic world-view. Tension exists between religious pluralism and the evangelical message of Christianity.
Questions of the role of women in church and society and the model of Christian marriage as taught in many evangelical churches.
Publications
Publications identifying as post-evangelical include Recovering Evangelical, an online news and opinion portal for "evangelicals, post-evangelicals and those outside the church who still like Jesus", the blog Internet Monk, and Patrol Magazine.
Dave Tomlinson's book The Post Evangelical and Graham Cray's The Post Evangelical Debate are useful texts for understanding the movement and the debate surrounding it. See also David Gushee's book, After Evangelicalism: The Path to a New Christianity
See also
Atonement in Christianity
Constructive theology
New Monasticism
Open evangelical
Paleo-orthodoxy
Postdenominationalism
Postliberal theology
External links
Post-Evangelical Collective
References
Christian terminology
Evangelical movement
Christianity in the late modern period
Attitude object
Definition
An attitude object is the concept around which an attitude is formed and can change over time. This attitude represents an evaluative integration of both cognition and affect in relation to the attitude object. An example of an attitude object is a product (e.g., a car). People can hold various beliefs about cars (cognitions, e.g., that a car is fast) as well as evaluations of those beliefs (affect, e.g., they might like or enjoy that the car is fast). Together these beliefs and affective evaluations of those beliefs represent an attitude toward the object.
Attitude objects also play a significant role in shaping and determining the functions of attitudes, which can be classified as utilitarian, social identity, or self-esteem maintenance functions. The utilitarian function involves attitudes toward objects that provide direct benefits (e.g., coffee or air conditioners) and that help maximize rewards or minimize discomfort. The social identity function relates to objects that symbolize values and social identity (e.g., wedding rings or national flags), helping individuals express who they are and their affiliations. Lastly, the self-esteem maintenance function involves objects that affect self-worth (e.g., personal attributes), helping to boost and protect one's self-esteem.
Factors
Attitudes toward an object are influenced not only by the characteristics of the object itself (the cognitive aspect) but also by the context in which the object is encountered, a concept known as "attitude-toward-situation."
Behavior is better predicted when both attitude-toward-object and attitude-toward-situation are considered. This interaction highlights the significance of the situational context, alongside the object itself, in shaping attitudes and determining behavior. For instance, one study showed that an individual's behavior is influenced by their attitude toward both a particular professor and their general view of attending classes, illustrating the combined impact of object and situational attitudes.
Role of Attitude Objects in Attitude Change Theories
Attitude objects are central to attitude change theories as well. For instance, the action-based model of dissonance describes how conflicting beliefs about an attitude object can create a state of dissonance, which leads to efforts to align one's attitudes and reduce discomfort.
See also
Attitude (psychology)
References
Attitude change
Science, technology, society and environment education
Science, technology, society and environment (STSE) education originates from the science, technology and society (STS) movement in science education. This is an outlook on science education that emphasizes the teaching of scientific and technological developments in their cultural, economic, social and political contexts. In this view of science education, students are encouraged to engage in issues pertaining to the impact of science on everyday life and to make responsible decisions about how to address such issues (Solomon, 1993; Aikenhead, 1994).
Science, technology and society (STS)
The STS movement has a long history in science education reform, and embraces a wide range of theories about the intersection between science, technology and society (Solomon and Aikenhead, 1994; Pedretti 1997). Over the last twenty years, the work of Peter Fensham, the noted Australian science educator, is considered to have heavily contributed to reforms in science education. Fensham's efforts included giving greater prominence to STS in the school science curriculum (Aikenhead, 2003). The key aim behind these efforts was to ensure the development of a broad-based science curriculum, embedded in the socio-political and cultural contexts in which it was formulated. From Fensham's point of view, this meant that students would engage with different viewpoints on issues concerning the impact of science and technology on everyday life. They would also understand the relevance of scientific discoveries, rather than just concentrate on learning scientific facts and theories that seemed distant from their realities (Fensham, 1985 & 1988).
However, although the wheels of change in science education had been set in motion during the late 1970s, it was not until the 1980s that STS perspectives began to gain a serious footing in science curricula, in largely Western contexts (Gaskell, 1982). This occurred at a time when issues such as, animal testing, environmental pollution and the growing impact of technological innovation on social infrastructure, were beginning to raise ethical, moral, economic and political dilemmas (Fensham, 1988 and Osborne, 2000). There were also concerns among communities of researchers, educators and governments pertaining to the general public's lack of understanding about the interface between science and society (Bodmer, 1985; Durant et al. 1989 and Millar 1996). In addition, alarmed by the poor state of scientific literacy among school students, science educators began to grapple with the quandary of how to prepare students to be informed and active citizens, as well as the scientists, medics and engineers of the future (e.g. Osborne, 2000 and Aikenhead, 2003). Hence, STS advocates called for reforms in science education that would equip students to understand scientific developments in their cultural, economic, political and social contexts. This was considered important in making science accessible and meaningful to all students—and, most significantly, engaging them in real world issues (Fensham, 1985; Solomon, 1993; Aikenhead, 1994 and Hodson 1998).
Goals of STS
The key goals of STS are:
An interdisciplinary approach to science education, where there is a seamless integration of economic, ethical, social and political aspects of scientific and technological developments in the science curriculum.
Engaging students in examining a variety of real world issues and grounding scientific knowledge in such realities. In today's world, such issues might include the impact on society of: global warming, genetic engineering, animal testing, deforestation practices, nuclear testing and environmental legislations, such as the EU Waste Legislation or the Kyoto Protocol.
Enabling students to formulate a critical understanding of the interface between science, society and technology.
Developing students’ capacities and confidence to make informed decisions, and to take responsible action to address issues arising from the impact of science on their daily lives.
STSE education
There is no uniform definition of STSE education. As mentioned above, STSE is a form of STS education, but it places greater emphasis on the environmental consequences of scientific and technological developments. In STSE curricula, scientific developments are explored from a variety of economic, environmental, ethical, moral, social and political perspectives (Kumar & Chubin, 2000; Pedretti, 2005).
At best, STSE education can be loosely defined as a movement that attempts to bring about an understanding of the interface between science, society, technology and the environment. A key goal of STSE is to help students realize the significance of scientific developments in their daily lives and foster a voice of active citizenship (Pedretti & Forbes, 2000).
Improving scientific literacy
Over the last two decades, STSE education has taken a prominent position in the science curricula of different parts of the world, such as Australia, Europe, the UK and USA (Kumar & Chubin, 2000). In Canada, the inclusion of STSE perspectives in science education has largely come about as a consequence of the Common Framework of science learning outcomes, Pan Canadian Protocol for collaboration on School Curriculum (1997). This document highlights a need to develop scientific literacy in conjunction with understanding the interrelationships between science, technology, and environment. According to Osborne (2000) & Hodson (2003), scientific literacy can be perceived in four different ways:
Cultural: Developing the capacity to read about and understand issues pertaining to science and technology in the media.
Utilitarian: Having the knowledge, skills and attitudes that are essential for a career as scientist, engineer or technician.
Democratic: Broadening knowledge and understanding of science to include the interface between science, technology and society.
Economic: Formulating knowledge and skills that are essential to the economic growth and effective competition within the global market place.
However, many science teachers find it difficult, and even damaging to their professional identities, to teach STSE as part of science education, because traditional science teaching focuses on established scientific facts rather than philosophical, political, and social issues; many educators feel that including such issues devalues the science curriculum.
Goals
In the context of STSE education, the goals of teaching and learning are largely directed towards engendering cultural and democratic notions of scientific literacy. Here, advocates of STSE education argue that in order to broaden students' understanding of science, and better prepare them for active and responsible citizenship in the future, the scope of science education needs to go beyond learning about scientific theories, facts and technical skills. Therefore, the fundamental aim of STSE education is to equip students to understand and situate scientific and technological developments in their cultural, environmental, economic, political and social contexts (Solomon & Aikenhead, 1994; Bingle & Gaskell, 1994; Pedretti 1997 & 2005). For example, rather than learning about the facts and theories of weather patterns, students can explore them in the context of issues such as global warming. They can also debate the environmental, social, economic and political consequences of relevant legislation, such as the Kyoto Protocol. This is thought to provide a richer, more meaningful and relevant canvas against which scientific theories and phenomena relating to weather patterns can be explored (Pedretti et al. 2005).
In essence, STSE education aims to develop the following skills and perspectives:
Social responsibility
Critical thinking and decision-making skills
The ability to formulate sound ethical and moral decisions about issues arising from the impact of science on our daily lives
Knowledge, skills and confidence to express opinions and take responsible action to address real world issues
Curriculum content
Since STSE education has multiple facets, there are a variety of ways in which it can be approached in the classroom. This offers teachers a degree of flexibility, not only in the incorporation of STSE perspectives into their science teaching, but in integrating other curricular areas such as history, geography, social studies and language arts (Richardson & Blades, 2001). The table below summarizes the different approaches to STSE education described in the literature (Ziman, 1994 & Pedretti, 2005):
Summary table: Curriculum content
Opportunities and challenges of STSE education
Although advocates of STSE education keenly emphasize its merits in science education, they also recognize inherent difficulties in its implementation. The opportunities and challenges of STSE education have been articulated by Hughes (2000) and Pedretti & Forbes, (2000), at five different levels, as described below:
Values & beliefs: The goals of STSE education may challenge the values and beliefs of students and teachers—as well as conventional, culturally entrenched views on scientific and technological developments. Students gain opportunities to engage with, and deeply examine the impact of scientific development on their lives from a critical and informed perspective. This helps to develop students' analytical and problem solving capacities, as well as their ability to make informed choices in their everyday lives.
As they plan and implement STSE education lessons, teachers need to provide a balanced view of the issues being explored. This enables students to formulate their own thoughts, independently explore other opinions and have the confidence to voice their personal viewpoints. Teachers also need to cultivate safe, non-judgmental classroom environments, and must also be careful not to impose their own values and beliefs on students.
Knowledge & understanding: The interdisciplinary nature of STSE education requires teachers to research and gather information from a variety of sources. At the same time, teachers need to develop a sound understanding of issues from various disciplines—philosophy, history, geography, social studies, politics, economics, environment and science. This is so that students’ knowledge base can be appropriately scaffolded to enable them to effectively engage in discussions, debates and decision-making processes.
This ideal raises difficulties. Most science teachers are specialized in a particular field of science. Lack of time and resources may affect how deeply teachers and students can examine issues from multiple perspectives. Nevertheless, a multi-disciplinary approach to science education enables students to gain a more rounded perspective on the dilemmas, as well as the opportunities, that science presents in our daily lives.
Pedagogic approach: Depending on teacher experience and comfort levels, a variety of pedagogic approaches based on constructivism can be used to stimulate STSE education in the classroom. As illustrated in the table below, the pedagogies used in STSE classrooms need to take students through different levels of understanding to develop their abilities and confidence to critically examine issues and take responsible action.
Teachers are often faced with the challenge of transforming classroom practices from task-oriented approaches to those which focus on developing students' understanding and transferring agency for learning to students (Hughes, 2000). The table below is a compilation of pedagogic approaches for STSE education described in the literature (e.g. Hodson, 1998; Pedretti & Forbes 2000; Richardson & Blades, 2001):
Projects in the field of STSE
Science and the City
STSE education draws on holistic ways of knowing, learning, and interacting with science. A recent movement in science education has bridged science and technology education with society and environment awareness through critical explorations of place. The project Science and the city, for example, took place during the school years 2006-2007 and 2007-2008 involving an intergenerational group of researchers: 36 elementary students (grades 6, 7 & 8) working with their teachers, 6 university-based researchers, parents and community members. The goal was to come together, learn science and technology together, and use this knowledge to provide meaningful experiences that make a difference to the lives of friends, families, communities and environments that surround the school. The collective experience allowed students, teachers and learners to foster imagination, responsibility, collaboration, learning and action. The project has led to a series of publications:
Alsop, S., & Ibrahim, S. 2008. Visual journeys in critical place based science education. In Y-J. Lee, & A-K. Tan (Eds.), Science education at the nexus of theory and practice. Rotterdam: SensePublishers 291–303.
Alsop, S., & Ibrahim, S. 2007. Searching for Science Motive: Community, Imagery and Agency. Alberta Science Education Journal (Special Edition, Shapiro, B. (Ed.) Research and writing in science education of interest to those new in the profession). 38(2), 17–24.
Science and the city: A Field Zine
One collective publication, authored by the students, teachers and researchers together is that of a community zine that offered a format to share possibilities afforded by participatory practices that connect schools with local-knowledges, people and places.
Tokyo Global Engineering Corporation, Japan (and global)
Tokyo Global Engineering Corporation is an education-services organization that provides capstone STSE education programs free of charge to engineering students and other stakeholders. These programs are intended to complement—but not to replace—STSE coursework required by academic degree programs of study. The programs are educational opportunities, so students are not paid for their participation. All correspondence among members is completed via e-mail, and all meetings are held via Skype, with English as the language of instruction and publication. Students and other stakeholders are never asked to travel or leave their geographic locations, and are encouraged to publish organizational documents in their personal, primary languages, when English is a secondary language.
See also
Citizen Science, cleanup projects that people can take part in.
Educational assessment
Learning theory (education)
Science
STEM fields
Notes
Bibliography
Aikenhead, G.S. (2003) STS Education: a rose by any other name. In A Vision for Science Education: Responding to the world of Peter J. Fensham, (ed.) Cross, R.: Routledge Press.
Aikenhead, G.S. (1994) What is STS science teaching? In Solomon, J. & G. Aikenhead (eds.), STS Education: International Perspectives in Reform. New York: Teacher's College Press.
Alsop, S. & Hicks, K. (eds.), (2001) Teaching Science. London: Kogan Page.
Bencze, J.L. (editor) (2017). Science & technology education promoting wellbeing for individuals, societies & environments. Dordrecht: Springer.
Bingle, W. & Gaskell, P. (1994) Science literacy for decision making and the social construction of scientific knowledge. Science Education, 78(2): pp. 185–201.
Bodmer, W., F.(1985) The Public Understanding of Science. London: The Royal Society
Durant, J., R., Evans, G.A., & Thomas, G.P. (1989) The public understanding of science. Nature, 340, pp. 11–14.
Fensham, P.J. (1985) Science for all. Journal of Curriculum Studies, 17: pp. 415–435.
Fensham, P.J. (1988) Familiar but different: Some dilemmas and new directions in science education. In P.J. Fensham (ed.), Developments and dilemmas in science education. New York: Falmer Press pp. 1–26.
Gaskell, J.P. (1982) Science, technology and society: Issues for science teachers. Studies in Science Education, 9, pp. 33–36.
Harrington, Maria C.R. (2009). An ethnographic comparison of real and virtual reality field trips to Trillium Trail: The salamander find as a Salient Event. In Freier, N.G. & Kahn, P.H. (Eds.), Children, Youth and Environments: Special Issue on Children in Technological Environments, 19 (1): [page-page]. http://www.colorado.edu/journals/cye.
Hodson, D. (1998) Teaching and Learning Science: Towards a Personalized Approach. Buckingham: Open University Press.
Hodson, D. (2003) Time for action: Science education for an alternative future. International Journal of Science Education, 25 (6): pp. 645–670.
Hughes, G. (2000) Marginalization of socio-scientific material in science-technology-society science curricula: some implications for gender inclusivity and curriculum reform, Journal of Research in Science Teaching, 37 (5): pp. 426–40.
Kumar, D. & Chubin, D. (2000) Science, Technology and Society: A sourcebook on research and practice. London: Kluwer Academic.
Millar, R. (1996) Towards a science curriculum for public understanding. School Science Review, 77 (280): pp. 7–18.
Osborne, J. (2000) Science for citizenship. In Good Practice in Science Teaching, (eds.) Monk, M. & Osborne, J.: Open University Press: UK.
Pedretti, E. (1996) Learning about science, technology and society (STS) through an action research project: co-constructing an issues based model for STS education. School Science and Mathematics, 96 (8), pp. 432–440.
Pedretti, E. (1997) Septic tank crisis: a case study of science, technology and society education in an elementary school. International Journal of Science Education, 19 (10): pp. 1211–30.
Pedretti, E., & Forbes (2000) From curriculum rhetoric to classroom reality, STSE education. Orbit, 31 (3): pp. 39–41.
Pedretti, E., Hewitt, J., Bencze, L., Jiwani, A. & van Oostveen, R. (2004) Contextualizing and promoting Science, Technology, Society and Environment (STSE) perspectives through multi-media case methods in science teacher education. In D.B Zandvliet (Ed.), Proceedings of the annual conference of the National Association for Research in Science Teaching, Vancouver, BC. CD ROM.
Pedretti, E. (2005) STSE education: principles and practices. In Alsop S., Bencze L., Pedretti E. (eds.), Analysing Exemplary Science Teaching: theoretical lenses and a spectrum of possibilities for practice, Open University Press, McGraw-Hill Education.
Richardson, G., & Blades, D. (2001) Social Studies and Science Education: Developing World Citizenship Through Interdisciplinary Partnerships
Solomon, J. (1993) Teaching Science, Technology & Society. Philadelphia, CA: Open University Press.
Solomon, J. & Aikenhead, G. (eds.) (1994) STS Education: International Perspectives in Reform. New York: Teacher's College Press.
Ziman, J. (1994) The rationale of STS education is in the approach. In Solomon, J. & Aikenhead, G. (eds.) (1994). STS Education: International Perspectives in Reform. New York: Teacher's College Press, pp. 21–31.
External links
Samples of science curricula
Council of Ministers of Education, Canada
The Councils of Ministers of Education, Canada, website is a useful resource for understanding the goals and position of STSE education in Canadian Curricula.
STEPWISE Site
UK Science Curriculum
USA Science Curriculum Standards
Books
These are examples of books available for information on STS/STSE education, teaching practices in science and issues that may be explored in STS/STSE lessons.
Alsop S., Bencze L., Pedretti E. (eds.), (2005). Analysing Exemplary Science Teaching. Theoretical lenses and a spectrum of possibilities for practice, Open University Press, McGraw-Hill Education.
Bencze, J.L. (editor) (2017). Science & technology education promoting wellbeing for individuals, societies & environments. Dordrecht: Springer.
Galbraith, D. (1997). Analyzing Issues: science, technology, & society. Toronto: Trifolium Books Inc.
Homer-Dixon, T. (2001). The Ingenuity Gap: Can We Solve the Problems of the Future? (pub.) Vintage Canada.
Neue Musik | Neue Musik (English new music, French nouvelle musique) is the collective term for a wealth of different currents in composed Western art music from around 1910 to the present. Its focus is on compositions of 20th century music. It is characterised in particular by – sometimes radical – expansions of tonal, harmonic, melodic and rhythmic means and forms. It is also characterised by the search for new sounds, new forms or new combinations of old styles, which is partly a continuation of existing traditions, partly a deliberate break with tradition and appears either as progress or as renewal (neo- or post-styles).
Roughly speaking, Neue Musik can be divided into the period from around 1910 to the Second World War – often referred to as "modernism" – and the reorientation after the Second World War, perceived as "radical" and usually labelled avant-garde, which extends to the present. The latter period is sometimes subdivided into the 1950s, 1960s and 1970s; the following decades have not yet been further differentiated (the summary term "postmodernism" has not become established).
In order to describe contemporary music in a narrower sense, the term Zeitgenössische Musik (English contemporary music, French musique contemporaine) is used without referring to a fixed periodisation. The term "Neue Musik" was coined by the music journalist Paul Bekker in 1919.
Representatives of Neue Musik are sometimes called "Neutöner" ("new-toners").
Compositional means and styles
The most important step in the reorientation of musical language was taken in the field of harmony, namely the gradual abandonment of tonality – towards free atonality and finally towards twelve-tone technique. Towards the end of the 19th century, the tendency to use increasingly complex chord formations led to harmonic areas that could no longer be clearly explained by the underlying major-minor tonality – a process that had already begun with Wagner and Liszt. From this, Arnold Schönberg and his students Alban Berg and Anton Webern drew the most systematic consequence, which culminated in the formulation (1924) of the method of "composition with twelve tones related only to one another" (dodecaphony). These atonal rules of composition provide composers with a toolkit that helps to avoid the principles of tonality. The designation "Second Viennese School", in analogy to the "First Viennese School" (Haydn, Mozart, Beethoven), already reveals the special position that this group of composers holds as a mediating authority.
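The permutational logic of dodecaphony can be sketched in a few lines of Python. The row below is a hypothetical example chosen purely for illustration (it is not taken from any work), and actual twelve-tone composition of course involves far more than these four transformations:

```python
# Illustrative sketch of the four classical forms of a twelve-tone row.
# Pitch classes: 0 = C, 1 = C sharp, ... 11 = B. The row itself is invented.

PRIME = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

def retrograde(row):
    """The row read backwards."""
    return row[::-1]

def inversion(row):
    """Each interval mirrored around the first pitch class (mod 12)."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def retrograde_inversion(row):
    """The inversion read backwards."""
    return retrograde(inversion(row))

def is_tone_row(row):
    """A valid row uses all twelve pitch classes exactly once."""
    return sorted(row) == list(range(12))

# Every transformation of a valid row is again a valid row:
for form in (PRIME, retrograde(PRIME), inversion(PRIME),
             retrograde_inversion(PRIME)):
    assert is_tone_row(form)
```

Schönberg's method derives the material of a piece from the prime form together with its retrograde, inversion and retrograde inversion, each of which may also be transposed to any of the twelve pitch levels – 48 row forms in all.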
The principle of using all twelve tones of the tempered scale equally, without favouring individual tones, seems to have occupied various composers in the first two decades of the 20th century, who advanced simultaneously, but independently of Schönberg, to similarly bold results. Among these experimenters, in whose works twelve-tone and serial approaches can be discerned, is first of all Josef Matthias Hauer, who publicly argued with Schönberg about the "copyright" to twelve-tone music. Also to be mentioned is Alexander Scriabin, whose atonal sound-centre technique, based on quartal layering, subsequently paved the way for remarkable experiments by a whole generation of young Russian composers. The significance for New Music of this generation, which emerged in the climate of the revolutions of 1905 and 1917, only entered general awareness in the second half of the century, since these composers had already been systematically suppressed by the Stalinist dictatorship from the late 1920s onwards. Nikolai Roslavets, Arthur Lourié, Alexander Mossolov and Ivan Wyschnegradsky should be mentioned here as representatives.
A major drawback of abandoning major-minor tonality, however, was the extensive loss of the form-building forces of this harmonic system. Composers countered this deficiency in very different ways. In order to avoid the classical-romantic musical forms, they now chose partly free (rhapsody, fantasy) or neutral (concerto, orchestral piece) designations for the new music, or self-chosen, sometimes extremely short, aphoristic forms (Webern, Schönberg). Others adhered to traditional forms, even where their works took the concept ad absurdum, or filled the traditional ideas of form with new content (Scriabin's single-movement piano sonatas, Schönberg's sonata forms without the tonality that had founded them in the first place). Even the fundamental idea of a continuous, purposeful working-out of musical thoughts within a work loses its primacy, parallel to the loss of the 19th century's belief in progress. New possibilities of shaping form claim their place: parameters of music that had previously been neglected, such as timbre, rhythm and dynamics; systematic or free montage techniques in Igor Stravinsky and Charles Ives; the rejection of the directedness of musical time; and an increasing individualism.
A musical source whose potential was also used for experimentation is folklore. While previous generations of composers had repeatedly chosen exotic subjects in order to legitimise structures that deviated from the prevailing rules of composition, it is in the work of Claude Debussy that a stylistic and structural adaptation of Javanese gamelan music, which he had become acquainted with at the 1889 Paris World's Fair, can be observed for the first time. In this context, the work of Béla Bartók, who had already explored most of the fundamental characteristics of his new style by means of a systematic study of Balkan folklore in 1908, is to be regarded as exemplary. In the course of this development, Bartók arrived at the treatment of the "piano as a percussion instrument" with his Allegro barbaro (1911), which subsequently had a decisive influence on composers' treatment of this instrument. The rhythmic complexities peculiar to Slavic folklore were also appropriated by Igor Stravinsky in his early ballet compositions written for Sergei Diaghilev's Ballets Russes. Significantly, Stravinsky chose a "barbaric-pagan" stage plot for his most revolutionary experiment in this respect (The Rite of Spring, 1913).
It was also Stravinsky who, in the further course of the 1910s, developed his compositional style in a direction that became exemplary for Neoclassicism. In France, various young composers appeared on the scene who devoted themselves to a similarly emphatically anti-romantic aesthetic. The Groupe des Six formed around Erik Satie, with Jean Cocteau as its leading theoretician. In Germany, Paul Hindemith was the most prominent representative of this movement. The proposal to use the canon of historical musical forms, such as those of the Baroque, to renew the musical language had already been put forward by Ferruccio Busoni in his Draft of a New Aesthetic of Musical Art. In the spring of 1920, Busoni formulated this idea again in an essay entitled Young Classicism.
Furthermore, the radical experiments devoted to the possibilities of microtonal music are exceptional. These include Alois Hába, who, encouraged by Busoni, found his preconditions in Bohemian-Moravian folk musicianship, and Ivan Wyschnegradsky, whose microtonality is to be understood as a consistent further development of the sound-centre technique of Alexander Scriabin. In the wake of Italian Futurism around Filippo Tommaso Marinetti and Francesco Balilla Pratella, a movement that originated in the visual arts, Luigi Russolo in his manifesto The Art of Noises (1913, 1916) outlined a style called Bruitism, which made use of newly constructed sound generators, the so-called Intonarumori.
The spectrum of musical expression is extended by another interesting experiment that also enters the realm of the musical application of sounds, namely the tone cluster by Henry Cowell. Some of the early piano pieces by Leo Ornstein and George Antheil also tend towards quite comparable tone clusters. With Edgar Varèse and Charles Ives, two composers should be mentioned whose works, which are exceptional in every respect, cannot be attributed to any larger movement and whose significance was only fully recognised in the second half of the century.
The increasing industrialisation, which slowly began to take hold of all areas of life, is reflected in an enthusiasm for technology and a (compositional) machine aesthetic, initially carried by the Futurist movement. Thus the various technical innovations, such as the invention of the vacuum tube and the development of radio technology, sound film and tape technology, moved into the musical field of vision. These innovations also favoured the development of new electric instruments, which is significant above all with regard to the original compositions created for them. Lew Termen's theremin, Friedrich Trautwein's trautonium and the ondes Martenot of the Frenchman Maurice Martenot should be highlighted here. The partly enthusiastic hope for progress attached to the musically useful application of these early experiments was, however, only partially fulfilled. Nevertheless, the new instruments and technical developments possessed a musically inspiring potential that, in the case of some composers, was reflected in extraordinarily visionary conceptions which could only actually be realised technically decades later. The first compositional explorations of the musical possibilities of the pianola also belong in this context. The medial dissemination of music by means of records and radio also made possible an enormously accelerated exchange and reception of musical developments that had been almost unknown until then, as can be seen from the rapid popularisation and reception of jazz. In general, it can be said that the period from around 1920 onwards was one of general "departure for new shores" – with many very different approaches. Essentially, this pluralism of styles has persisted to the present day, re-establishing itself after a brief period of mutual polemics between serialism and the adherents of traditional compositional styles (from about the mid-1950s onwards).
Historical preconditions
In the 20th century, a line of development of musical progress continued; every composer still known today has contributed something to it. This old longing for progress and modernity – through conscious separation from tradition and convention – can, however, take on a fetish-like character in a Western society shaped by science and technology. The appearance of the "new" is always accompanied by a feeling of uncertainty and scepticism. At the beginning of the 20th century, the use of music and the discussion of its meaning and purpose was still reserved for a vanishingly small, but all the more knowledgeable, part of society. This relationship – the small elite group of the privileged here, the large uninvolved masses there – has only changed outwardly through the increasing dissemination of music through the media. Today, music is accessible to everyone, but as far as understanding Neue Musik is concerned, there is in many cases a lack of education, including that of the ear. The changed relationship between people and music has made aesthetic questions about the nature and purpose of music a matter of public debate.
In the history of music, transitional phases (epoch boundaries) arose in which the "old" and the "new" appeared simultaneously. The traditional period or epoch was still cultivated, but parallel to it a "new music" was introduced which subsequently replaced it. These transitions were always understood by contemporaries as phases of renewal and were described accordingly. The Ars Nova of the 14th century, for example, carries "new" in its name, and "Renaissance" likewise characterises a consciously chosen new beginning. The transitional phases are usually characterised by an intensification of stylistic means, in which these – in the sense of mannerism – are exaggerated to the point of absurdity. The stylistic change to the "new" music then takes place, for example, through the removal of one of the traditional stylistic means, on the basis of which compositional-aesthetic progress can be systematically striven for and realised, or through the gradual preference for alternatives introduced in parallel.
In this sense, the classical Romantic music of the 19th century can be understood as an intensification of Viennese classicism. The increase in means is most noticeable here in the quantitative aspect – the length and instrumentation of Romantic orchestral compositions increased drastically. In addition, the composers' increased need for expression and extra-musical (poetic) content came more into focus. The attempts to create musical national styles must also be seen as a reaction to the various revolutionary social events of the century. Furthermore, the economic conditions for musicians, based on patronage and publishing, changed. Social and political circumstances affected the composition of the audience and the organisation of concert life. In addition, there was a strong individualisation (personal style) of the Romantic musical language(s).
Historical overview
The following overview provides only a keyword-like orientation about the corresponding periods, outstanding composers, rough style characteristics and masterpieces. Corresponding in-depth information is then reserved for the main articles.
Any periodisation is a simplification. The sometimes seemingly contradictory styles not only existed simultaneously; many composers also composed in several styles – sometimes even in one and the same work.
Even if one composer appears to be outstanding for a style or period, there has always been a multitude of composers who have also written exemplary works in a sometimes very independent manner. The following applies: every successful work deserves its own consideration and classification – regardless of the framework in which it is usually placed for stylistic reasons.
Basically, the dictum of Rudolf Stephan applies to the classification of works into styles: "If, however, stylistic criteria [...] are presupposed, then [...] such [works] by numerous other, mostly younger composers [...] can also be counted [...] But in the case of works [...] (which can certainly be named in this context), boundaries then become perceptible which perhaps cannot be fixed exactly, but which are nevertheless (to speak with Maurice Merleau-Ponty) quickly noticed as having already been crossed." A fixed stylistic or epochal scheme does not exist and is in principle impossible. All attributions of similarities or differences are interpretations that require precise explanation. The fact that works are classified partly according to stylistic terms (for example "expressionism") and partly according to compositional criteria (e.g. "atonality") inevitably leads to multiple overlaps.
The turn from the 19th to the 20th century
The traditional compositional means of the classical period were only able to cope with these increasing tendencies to a certain extent. Towards the end of the 19th century, the musical development took shape in which Paul Bekker then retrospectively recognised "New Music" (as a term it was only later written with a capital "N"). His attention had initially been focused particularly on Gustav Mahler, Franz Schreker, Ferruccio Busoni and Arnold Schönberg. Overall, the turn of the century had come to be understood as fin de siècle. In any case, it stood under the auspices of modernity, whose radicalisation the "new music" can be regarded as, and whose manifold consequences influenced the entire 20th century. The qualitative difference of this epochal transition from earlier ones is essentially that some composers now saw their historical mission in developing the "new" out of tradition and in consistently searching for new means and ways that would be able to replace the outdated classical-romantic aesthetics completely.
The deliberate break with tradition is the most striking feature of this transitional phase. The will to renewal gradually encompasses all stylistic means (harmony, melody, rhythm, dynamics, form, orchestration, etc.). The new musical styles of the turn of the century, however, still clearly stand in the context of the 19th-century tradition. Early Expressionism inherits Romanticism and intensifies its (psychologised) expressive will; Impressionism refines timbres; and so on. But soon those parameters were also taken into account and used for musical experiments that had previously had only marginal importance, such as rhythm, or – as a significant novelty – the inclusion of sounds as musically formable material. The progressive mechanisation of urban living conditions found expression in Futurism. Another significant aspect is the equal coexistence of very different procedures in dealing with and relating to tradition. In any case, Neue Musik cannot be understood as a superordinate style, but can only be identified on the basis of individual composers or even individual works in the various styles. The 20th century thus appears as a century of polystylistics.
At first, the "new" was neither accepted without comment nor welcomed by the majority of the audience. The premiere of particularly advanced pieces regularly led to the most violent reactions on the part of the audience, reactions whose drasticness seems rather strange today. The vivid descriptions of various legendary scandalous performances (e.g. Richard Strauss's Salome in 1905, Stravinsky's The Rite of Spring in 1913), with scuffles, whistling on door keys, police intervention and so on, as well as the journalistic response with blatant polemics and crude defamations, testify to the difficult position that the "Neutöner" had from the beginning. Even so, "new music" seems to have met with a surprisingly high level of public interest at this early stage. With increasing acceptance by the public, however, a certain ("scandal") expectation also set in. This in turn resulted in a subtle compulsion towards originality, modernity and novelty, which entailed the danger of fashionable flattening and routine repetition.
The composers of New Music did not make it easy for themselves, nor for their listeners and performers. Regardless of the nature of their musical experiments, they seem to have quickly found that audiences were helpless and uncomprehending in the face of their sometimes very demanding creations. This was all the more disappointing for many, since it was the very same audience that unanimously applauded the masters of the classical-romantic tradition, whose legitimate heirs they saw themselves as. As a result, the need to explain the new was recognised. Many composers therefore endeavoured to provide the theoretical and aesthetic underpinnings needed to understand their works. In particular, musicological and music-theoretical writings, such as Schönberg's, or Busoni's visionary Entwurf einer neuen Ästhetik der Tonkunst (1906), had a great influence on the development of New Music. Also noteworthy in this context is the almanac Der Blaue Reiter (1912), edited by Kandinsky and Marc, which contains, among other things, an essay on free music by the Russian Futurist Nikolai Kulbin. This willingness to engage intellectually and technically with the unsolved problems of tradition, as well as the sometimes unbending attitude in the pursuit of chosen compositional goals and experimental arrangements, are further characteristic features of Neue Musik.
The stylistic pluralism that emerged under these conditions continues into the present. In this respect, the term "Neue Musik" is suitable neither as the designation of an epoch nor of a style. Rather, it has a qualitative connotation related to the degree of originality (in the sense of novel or unheard-of) of the production method as well as of the final result. Expressionism and Impressionism, but also styles of the visual arts such as Futurism and Dadaism, provide aesthetic foundations on which new music can be created. Perhaps the heading "new music" is best understood through the composers and works that were able to establish themselves as "classics of modernism" in the concert hall in the course of the last century and whose innovations found their way into the canon of compositional techniques: thus, in addition to Arnold Schönberg and Anton Webern, Igor Stravinsky, Philipp Jarnach, Béla Bartók and Paul Hindemith. The depiction and assessment of the historical development on the basis of an assumed "rivalry" between Schönberg and Stravinsky is a construct that can be traced back to Theodor W. Adorno. The Second World War represents a clear caesura. Many of the early stylistic, formal and aesthetic experiments of New Music then passed into the canon of compositional tools taught from mid-century onwards and handed on to a younger generation of composers of (again) New Music. In this respect, the technical innovations of sound recording and radio technology are also causally linked to New Music. First of all, they contributed significantly to the popularisation of music and brought about a change in audience structure. Furthermore, they provided – for the first time in the history of music – an insight into the history of the interpretation of old and new music. They ultimately made possible the (technically reproduced) presence of all music.
Moreover, this technique itself is a novelty, whose musical potential was systematically explored from the beginning and used by composers in corresponding compositional experiments.
Moderne (1900–1933)
Impressionism or: Debussy – Ravel – Dukas
Impressionism is the transfer of the term from the visual arts to music from about 1890 to the First World War in which tonal "atmosphere" dominates and the intrinsic value of tone colour is emphasised. It differs from the late Romanticism of the same period, with its heavy overloading, through Mediterranean lightness and agility (which does not exclude spooky or shadowy moods) and by avoiding complex counterpoint and excessive chromaticism in favour of sensitive tone colouring, especially in orchestral instrumentation. The centre of this movement was France; the main representatives are Claude Debussy, Maurice Ravel (who, however, also composed many works that cannot be described as Impressionist) and Paul Dukas.
The moment of colour, freedom of form and a penchant for exoticism are what the musical works have in common with those of painting. Through the Paris World's Fair of 1889, Claude Debussy came to know the sound of Javanese gamelan ensembles, which influenced him strongly, as did the chinoiserie of his time. In addition to the use of pentatonic scales (for example in Préludes I, Les collines d'Anacapri) and whole-tone scales (for example in Préludes I, Voiles), Debussy made use of the salon music of the time (for example in Préludes I, Minstrels) and of harmonies borrowed from early jazz (as in Children's Corner and Golliwogg's Cakewalk). Like Ravel, Debussy loved the colour of Spanish dance music.
The fact that some of Debussy's works which satisfy the characteristics of Impressionism can also, for good reasons, be attributed to Art Nouveau (Jugendstil) or Symbolism only shows that the pictorial and literary parallels do share some common stylistic features, but that no unambiguous stylistic attribution can be derived from them.
The characteristics of impressionist music are:
Melody: coloured by pentatonic scales, church modes, whole-tone scales and exotic scales; its core shapes are closely related to the chords; often rambling, meandering, without a clear internal structure.
Harmony: dissolution of the cadence as a structure-forming feature; concealment of tonality; transition to bitonality and polytonality. Changed attitude to dissonance: no more compulsion to resolve dissonant chords. Preference for altered chords; layering of chords in thirds up to the eleventh (dominant and tonic sounding at the same time); layering of fourths and fifths.
Rhythm: tendency to obscure bar lines and even to abolish bar patterns; metre becomes unimportant, accents are set freely; frequent changes of time signature, frequent syncopation.
Instrumentation: differentiation of colour nuances; search for new sound effects with a preference for blended sounds; shimmering, iridescent, blurred sound surfaces with rich inner movement. Pointillism (the setting of spots of sound). Preference for the harp. Differentiated pedal effects in piano music. In many cases, Arnold Schönberg's idea of Klangfarbenmelodie (timbre melody) is already anticipated.
Form: Loosening up and abandoning traditional forms; no rigid formal schemes. Often repetition of a phrase two or more times.
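The scale material listed under melody above can be made concrete as pitch-class sets. The following is a minimal Python sketch, with 0 = C; the particular transpositions are chosen only for illustration:

```python
# Illustrative pitch-class sketches of scales favoured by the Impressionists.
# 0 = C, 1 = C sharp, ... 11 = B; the transpositions here are arbitrary.

WHOLE_TONE = [(2 * i) % 12 for i in range(6)]   # C D E F# G# A#
PENTATONIC = [0, 2, 4, 7, 9]                    # anhemitonic major pentatonic on C

# The whole-tone scale consists solely of whole steps, so it contains no
# leading tone and offers no cadence - one way tonality is concealed:
steps = [(WHOLE_TONE[(i + 1) % 6] - WHOLE_TONE[i]) % 12 for i in range(6)]
assert steps == [2] * 6

# The anhemitonic pentatonic scale likewise avoids semitones entirely:
pent_steps = [(PENTATONIC[(i + 1) % 5] - PENTATONIC[i]) % 12 for i in range(5)]
assert 1 not in pent_steps
```

Because the whole-tone scale divides the octave into six equal steps, every one of its notes is equally plausible as a centre, which is one reason a whole-tone piece such as Voiles seems to hover without cadences.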
The works that have become famous are:
Debussy: Prélude à l'après-midi d'un faune for orchestra (1892–94)
Debussy: Pelléas et Mélisande, lyric drama in five acts and twelve pictures with orchestra after a text by Maurice Maeterlinck (1893–1902)
Dukas: L'Apprenti sorcier (The Sorcerer's Apprentice) for orchestra (1897)
Ravel: Pavane pour une infante défunte (for piano 1899; orchestral version 1910)
Ravel: Jeux d'eau for piano (1901)
Debussy: Pour le piano (1901–02)
Debussy: La Mer for orchestra (1903–05)
Ravel: Daphnis et Chloé, ballet music for orchestra (1909–1912)
Debussy: Préludes – Livre I (1909–10) and Préludes – Livre II for piano (1910–12)
Viennese School or: Schönberg – Webern – Berg
The so-called Viennese School, regarded as such since 1904 and more rarely called the Second or New Viennese School or the Viennese atonal school, refers to the circle of Viennese composers centred on Arnold Schönberg and his pupils Anton Webern and Alban Berg. Owing to Schönberg's strong appeal as a teacher, who attracted students from many countries, and to his teaching activities in changing cities, the term shifted from designating a "school" to designating the style this school produced. In its narrow sense the term is applied mostly to compositions worked in twelve-tone technique.
The composers of the Viennese School were initially, though not exclusively, rooted stylistically in late Romanticism; the main work of this phase is Verklärte Nacht op. 4, a string sextet by Schönberg from 1899. Alongside it stands Webern's Piano Quintet (1907), which, however, had no immediate impact, as it was not published until 1953. Alban Berg's "Jugendlieder" also belong to this corpus.
The school had a style-defining effect on so-called musical Expressionism, to which some – mostly early – works by other composers can also be assigned.
Under the keyword atonality – which refers less to a style than to a compositional technique subsequently so designated – the Viennese School led the way. The compositional development then leads on to the twelve-tone technique, which likewise designates a compositional technique and not a style.
It should not be overlooked that Schönberg and Berg also developed a number of points of contact with Neoclassicism – mainly on the level of form, less in terms of compositional technique and adopted stylistic elements.
Expressionism
Expressionism in music was developed in direct contact with the currents of the same name in the visual arts (Die Brücke, Dresden 1905; Der Blaue Reiter, Munich 1909; Galerie Der Sturm, Berlin 1910) and literature (Trakl, Heym, Stramm, Benn, Wildgans, Wedekind, Toller and others) from around 1906. As a style, it was completed around 1925, but the musical characteristics and many of the expressive gestures have endured to the present day.
The main representatives are the composers of the Second Viennese School: Arnold Schönberg, Anton Webern and Alban Berg as well as, against a different background of the history of ideas, Alexander Scriabin.
Composers sought a subjective immediacy of expression, drawn as directly as possible from the human soul. To achieve this, a break with tradition, with traditional aesthetics and with the previous, hackneyed forms of expression was unavoidable.
Stylistically, the changed function of dissonances is particularly striking; they appear on an equal footing with consonances and are no longer resolved – which was also called the "emancipation of the dissonance". The tonal system is largely dissolved and expanded into atonality. Musical characteristics include: extreme pitches, extreme dynamic contrasts (from whispering to screaming, from pppp to ffff), jagged melodic lines with wide leaps; metrically unbound, free rhythm; and novel instrumentation. Form: asymmetrical period structure; rapid succession of contrasting moments; often very short "aphoristic" pieces.
Rudolf Stephan: "Expressionist art, wherever and in whatever form it first appeared, was met with alienation, fiercely rejected and publicly opposed, but also enthusiastically welcomed by individuals. It had abandoned the traditional ideal of art being 'beautiful' in favour of an (asserted) claim to truth; it was probably not infrequently even deliberately 'ugly'. It was thus the first deliberate 'no-longer-beautiful art'."
Main works:
Scriabin: The Poem of Ecstasy op. 54 for orchestra (1905–1908)
Webern: Five movements for string quartet op. 5 (1909)
Webern: Six Pieces for Large Orchestra Op. 6 (1909)
Schoenberg: Three Piano Pieces op. 11 (1909)
Schönberg: Five orchestral pieces op. 16 (1909, revised 1922)
Schönberg: Erwartung op. 17, monodrama (1909, not performed until 1924)
Schoenberg: Sechs kleine Klavierstücke op. 19 (1911)
Webern: Five Pieces for Orchestra op. 10 (1911)
Schoenberg: Pierrot Lunaire op. 21 for one speaking voice and ensemble (1912)
Berg: Five orchestral songs after poems by Peter Altenberg op. 4 (1912)
Stravinsky: The Rite of Spring (1913)
Berg: Three orchestral pieces op. 6 (1914)
Scriabin: Vers la flamme, poème op. 72 for piano (1914)
Webern: Songs for voice and ensembles opp. 14–18 (1917–1925)
Berg: Wozzeck op. 7, opera (1917–1922, first performance 1925)
Bartók: The Miraculous Mandarin for orchestra (1918–1923, rev. 1924 and 1926–31)
Atonality
The term "atonal" appeared in music theory literature around 1900 and from there migrated into music journalistic usage – usually used in a negative, combative manner. It is usually used to describe music with a harmony that does not establish any binding keys or references to a fundamental, i.e. to tonality. "Atonality", although often used in this way, is not a stylistic term, but belongs to the field of compositional techniques; the works written atonally belong predominantly to expressionism. In addition to the main works mentioned there, the following were important, especially for the transitional phase from extended tonality to atonality:
Schoenberg: Chamber Symphony No. 1 op. 9 (1906).
Schoenberg: String Quartet No. 2 op. 10 (1907–08), still in the key of F-sharp minor, but already freely tonal, especially in the two vocal movements (soprano) "Litanei" and "Entrückung".
Schoenberg: Das Buch der hängenden Gärten op. 15, 15 poems by Stefan George for one voice and piano. (1908–1909)
Rupture through fascism or: The Second World War
During the National Socialist era, most forms of new music, as well as jazz, were designated as degenerate and their performance and dissemination banned or suppressed. The exhibition Entartete Musik (Degenerate Music), held on the occasion of the Reichsmusiktage in Düsseldorf in 1938, denounced the work of composers such as Paul Hindemith, Arnold Schönberg, Alban Berg, Kurt Weill and others, as well as that of all Jewish composers. Instead, in the spirit of National Socialist cultural policy, harmless entertainment and Gebrauchsmusik such as operetta, dance and march music, and especially also folk music, were promoted and incorporated into the propaganda. Numerous composers and musicians were persecuted or murdered by the National Socialists, often because of their Jewish origins. Many went into exile. Some of those who remained in Germany were later described as being in "inner exile".
An important source on the position of New Music during the National Socialist era is the annotated reconstruction of the above-mentioned exhibition Degenerate Music, first shown in Frankfurt from 1988 onwards, which gradually began a reappraisal of this topic.
Institutionalisation and the new musical beginning after 1945
The harsh rejection of New Music by concert audiences, which has gone down in history in a series of spectacular premiere scandals, significantly stimulated the literary discussion of New Music. First, the critics of the relevant journals took up their positions, but composers, too, increasingly found themselves called upon to comment on their creations or to champion the works of their colleagues. In parallel, an increasingly extensive body of writing on music emerged that also sought to describe the philosophical, sociological and historical dimensions of New Music. A further consequence was the creation of specialised forums for the performance of New Music. Schönberg's "Society for Private Musical Performances" (1918) was an early, consistent step in this direction, one which, however, slowly removed "Neue Musik" from the field of vision of the (quantitatively large) concert audience and turned it into a matter of specialists for specialists. The establishment of regular concert events, such as the Donaueschingen Festival, and the founding of the Internationale Gesellschaft für Neue Musik were a further reaction to the significantly changed sociological situation in which composers of New Music and their audiences found themselves. The progressive institutionalisation of musical life after 1945 attempted to compensate for the caesura that the catastrophe of the Second World War had brought to the development of New Music. The conscious new beginnings of the reopened or newly founded music academies attempted to pick up the thread of the interrupted development. The founding of the public broadcasting companies gave composers a new forum for their works, and the awarding of composition commissions further stimulated their production.
After the end of the Second World War, the Kranichsteiner Ferienkurse für Neue Musik, organised every two years by the Staatstheater Darmstadt, became the most influential international event for new music in Germany. The dominant compositional techniques there were those of serialism, and Anton Webern became the leading figure. Olivier Messiaen, whose works draw on, among other things, musical techniques of non-European musical cultures as well as methods of serial music, taught some of the composers who caused the greatest sensation there. Among them are:
Pierre Boulez (also working as a conductor of "Neue Musik")
Karlheinz Stockhausen (among others composer of electronic music and active at the Studio for Electronic Music (WDR) in Cologne)
Luciano Berio
Mauricio Kagel (experimental music theatre)
Iannis Xenakis
(Important in this context are also the Institute for New Music and Music Education (INMM) Darmstadt with its annual spring conference and the Darmstadt International Music Institute (IMD), which has an extensive archive of rare recordings, especially of earlier events of the International Summer Courses for New Music. The recordings are available on various media; since at least 1986 also on digital media).
While in the pre-war period the main impulses for the development of New Music came from Central Europe, primarily from the German-speaking countries, and other avant-gardists, for example Charles Ives in the US, received little attention, the development now became increasingly international. Traditionally strong musical countries such as France (with Olivier Messiaen, Pierre Boulez and Iannis Xenakis), Italy (Luciano Berio, Luigi Nono) made important contributions, others such as Poland (Witold Lutosławski, Krzysztof Penderecki) or Switzerland with Heinz Holliger and Jacques Wildberger joined in. In the US, the circle around John Cage and Morton Feldman was significant for Europe. It was not atypical for post-war developments in Germany that the emigrated musicians could contribute little, but rather that the "new generation" (especially Karlheinz Stockhausen) became influential – with considerable support for example from France: as a teacher of Stockhausen and Boulez, Messiaen was a regular guest at the International Summer Courses in Darmstadt. In this sense, music may even have helped in the post-war peace process. Last but not least, some important representatives of New Music found their way from elsewhere to their places of work in Germany, such as György Ligeti from Hungary, Isang Yun from Korea and Mauricio Kagel from Argentina.
The most important (albeit controversial) theoretician of New Music in the German-speaking world is Theodor W. Adorno (1903–1969), a student of Alban Berg. In his Philosophy of New Music, published in 1949, Adorno argues in favour of Schoenberg's atonal compositional style and contrasts it with Stravinsky's neoclassical style, which he saw as a relapse into already outdated compositional techniques. For Adorno, the atonal revolution around 1910 by Schönberg meant the liberation of music from the constraints of tonality and thus the unhindered development of musical expression qua free atonality with the full impulse life of the sounds. In the German-speaking world, Adorno's thinking was then taken up by others such as Heinz-Klaus Metzger.
The first turning point was the period around 1950, when the critic Karl Schumann summed up that the economic miracle had also led to a "cultural miracle". From the 1950s onwards, various developments took place, among others:
Aleatoric music (chance operations): John Cage, Earle Brown; "moderate aleatoricism": Witold Lutosławski
Neo-Dada (from about 1968)
Expansion of traditional playing techniques: Helmut Lachenmann, Friedrich Goldmann and the young Krzysztof Penderecki
micropolyphony, timbre music: György Ligeti,
Polystylism, collage: Bernd Alois Zimmermann, Alfred Schnittke, the Sinfonia of Luciano Berio.
Minimal music in America: among others Terry Riley, Steve Reich, Philip Glass, John Adams.
Low-event, meditative and single-sound-oriented music, also mainly in the USA, close in thought to minimal music, but using other compositional principles (no pattern formation): Morton Feldman, George Crumb, in Germany Peter Michael Hamel and Walter Zimmermann.
in Germany: New Simplicity; among its representatives are Hans-Jürgen von Bose, Peter Michael Hamel, Wolfgang Rihm, Manfred Trojahn, Detlev Müller-Siemens
New Complexity: continuation of serial and constructive procedures; often an emphasis on the performative. Main representatives: Brian Ferneyhough as well as his student Claus-Steffen Mahnkopf
Spectral music, especially in France: harmony and melody are derived from the acoustic properties of sound. Gérard Grisey was the initiator, and Tristan Murail is a main representative and frequently cited model. Another important representative of spectral music is the Austrian Georg Friedrich Haas.
Conceptual music: Peter Ablinger, Antoine Beuger, Johannes Kreidler, Hannes Seidl, Martin Schüttler, Trond Reinholdtsen, Anton Wassiljew.
Electroacoustic music: collective term for various conceptions of electronic sound production or transformation:
Electronic Music: music composed with synthetically generated sounds that originated in Cologne and initially emanated from Serialism. Its pioneers include Herbert Eimert, Karlheinz Stockhausen and Gottfried Michael Koenig.
Musique concrète: electronic transformation of recorded sounds or noises; representatives include Pierre Schaeffer, Pierre Henry, Luc Ferrari and Michel Chion
Acousmatic music: Music produced electronically in the form of sound objects whose means of sound production are not identifiable. The term became widespread mainly through François Bayle and Francis Dhomont.
Algorithmic composition: composition using computer-generated structures – Jean-Claude Risset, Orm Finnendahl, Hanspeter Kyburz, Enno Poppe. Among the pioneers of this field are the above-mentioned Gottfried Michael Koenig and Iannis Xenakis.
Another dimension in the case of some composers is the addition of an ideological or political (as a rule, "left-wing") orientation, which is particularly noticeable in vocal compositions. The quasi-father of this idea is Hanns Eisler; later representatives include Luigi Nono, Hans Werner Henze, Rolf Riehm, Helmut Lachenmann, Nicolaus A. Huber and Mathias Spahlinger.
Especially from the 1970s onwards, a trend towards individualisation set in, in particular a definitive detachment from serial composing. In the music of our time, one can therefore speak of a stylistic pluralism. In György Ligeti's music, for example, musical influences from different cultures and times can be observed. The Italian improviser and composer Giacinto Scelsi, the Englishman Kaikhosru Shapurji Sorabji, the Estonian Arvo Pärt and Conlon Nancarrow, a Mexican by choice, represent completely independent positions. The American Harry Partch represents a special extreme case: the dissemination of his music was hindered by the fact that it depended on his own microtonal instrumentation.
A fixed classification of composers into currents and "schools" cannot be conclusive, since many contemporary composers have engaged with several styles in their lifetime (the best example being Igor Stravinsky, who, although treated for decades as the antipode of Schoenberg, switched to the serial technique in his old age). In addition, alongside the respective avant-garde there is a large number of composers who integrate new techniques more or less partially and selectively into a compositional style otherwise determined by tradition, or who attempt a synthesis between the two worlds – a synthesis not quite adequately described by the keywords moderate modernism or "naive modernism", because they are too one-sided.
Forums
Acht Brücken, Cologne
Aspekte Salzburg, Salzburg
chiffren – Kieler Tage für Neue Musik, Kiel
Donaueschingen Festival, Donaueschingen
Dresdner Tage der zeitgenössischen Musik, Dresden
Eclat, Stuttgart
Ensemble intercontemporain, Paris
Festival Archipel, Geneva
Festival L’art pour l’Aar, Bern
Hallische Musiktage
Internationale Ferienkurse für Neue Musik, Darmstadt
Internationale Weingartener Tage für Neue Musik, Weingarten
ISCM World (New) Music Days
Klangwerkstatt Berlin
MaerzMusik, Berlin
Musicarama, Hong Kong
Randspiele, Zepernick bei Berlin
Tage für Neue Musik Zürich
Ultraschall Berlin
Warsaw Autumn
Weimarer Frühjahrstage für zeitgenössische Musik, Weimar
Wien Modern, Vienna
Wittener Tage für neue Kammermusik
Zeit für Neue Musik, Bayreuth
Ensembles
One of the first ensembles for New Music was the Domaine Musical initiated by Pierre Boulez. In 1976, he founded the Ensemble intercontemporain, on whose model numerous ensembles of new music with similar instrumentation were subsequently formed, such as the Ensemble Modern in Frankfurt, the Klangforum Wien, the musikFabrik NRW, the Asko Ensemble, the London Sinfonietta and the KammarensembleN in Stockholm.
Alter Ego
Arditti Quartet
AuditivVokal Dresden
Basel Sinfonietta, Basel/CH
Collegium Novum Zürich, CH
Contrechamps, Geneva/CH
CQ – Cologne Contemporary Ukulele Ensemble, Cologne (one of the few ukulele orchestras in Germany dedicated to Neue Musik)
Ensemble Dal Niente, Chicago/USA
ensemble für neue musik zürich
Ensemble Interface, Frankfurt
Ensemble Modern, Frankfurt
Ensemble Phoenix, Basel/CH
Ensemble Phorminx, Darmstadt
Ensemble Proton Bern, CH
ensemble recherche, Freiburg
Ensemble Sortisatio, Leipzig
Ensemble Vortex, Geneva/CH
Gruppe Neue Musik Hanns Eisler, Leipzig
Interzone perceptible, Essen
Kairos Quartet, Berlin
Klangforum Wien, Vienna
Kronos Quartet, San Francisco
Ensemble intercontemporain, Paris
Le NEC, La Chaux-de-Fonds/CH
LUX:NM, Berlin
Ensemble Musikfabrik, Cologne
Neue Vocalsolisten Stuttgart
Österreichisches Ensemble für Neue Musik, Salzburg
Pegnitzschäfer-Klangkonzepte, Nuremberg
piano possibile, Munich
Remix Ensemble Casa da Música, Porto
Organisations and institutions
Centre de documentation de la musique contemporaine (Cdmc), Paris
Deutsche Gesellschaft für Elektroakustische Musik (DEGEM) – association for the dissemination and promotion of electroacoustic music
Forum Zeitgenössischer Musik Leipzig (FZML)
Gare du Nord (Basel) – Bahnhof für Neue Musik
Gesellschaft für Neue Musik (GNM) – German section of the IGNM or ISCM
IGNM-Sektion Österreich
Institut für Computermusik und Elektronische Medien (ICEM)
Institut für kulturelle Innovationsforschung – new classical e. V. (IKI)
Institut für Neue Musik und Musikerziehung (INMM), Darmstadt
Internationale Gesellschaft für Neue Musik (IGNM) or International Society for Contemporary Music (ISCM) – organises the World Music Days, which rotate annually from member country to member country
Internationales Musikinstitut Darmstadt (IMD), Darmstadt
IRCAM, Paris/F
Netzwerk Neue Musik
Sächsischer Musikbund
Schweizerische Gesellschaft für Neue Musik (SGNM)
Gesellschaft für Zeitgenössische Musik Aachen (GZM)
Journals
Dissonance. Schweizer Musikzeitschrift für Forschung und Kreation (Published in a bilingual version: German and French, discontinued in 2018)
KunstMusik. Schriften zur Musik als Kunst
MusikTexte. Zeitschrift für neue Musik
nmz Neue Musikzeitung
Neue Zeitschrift für Musik
Positionen. Texte zur aktuellen Musik
Seiltanz. Beiträge zur Musik der Gegenwart
See also
Electronic music
Electroacoustic music
Computer music
Sonart
Further reading
chronological; see also under each of the main articles
Overall presentations
Christoph von Blumröder: Neue Musik, 1980, 13 pages, in Hans Heinrich Eggebrecht and Albrecht Riethmüller (ed.): Handwörterbuch der musikalischen Terminologie, loose-leaf collection, Wiesbaden: Steiner 1971–2006. On the history of the term; not a history of the music.
Hanns-Werner Heister, Walter-Wolfgang Sparrer (ed.): Komponisten der Gegenwart (KDG). Loose-leaf encyclopedia, edition text+kritik, Munich 1992 ff. A biographical reference work with large gaps, but also with detailed articles.
Hermann Danuser (ed.): Die Musik des 20. Jahrhunderts (Neues Handbuch der Musikwissenschaft vol. 07), Laaber 1984, 465 pages.
Paul Griffiths: Modern Music and after. Oxford University Press, 1995.
Helga de la Motte-Haber et al. (ed.): Handbuch der Musik im 20. Jahrhundert. 13 volumes, Laaber 1999–2007.
Anton Haefeli: IGNM. Die Internationale Gesellschaft für Neue Musik. Ihre Geschichte von 1922 bis zur Gegenwart. Atlantis, Zürich 1982.
Moderne
Paul Bekker: Neue Musik [lectures 1917–1921] (volume 3 of the Gesammelte Schriften), Berlin: Deutsche Verlagsanstalt 1923, 207 pages.
Adolf Weißmann: Die Musik in der Weltkrise, Stuttgart 1922; English translation 1925: The Problems of Modern Music
Hans Mersmann: Die moderne Musik seit der Romantik (Handbuch der Musikwissenschaft [without volume numbering]), Potsdam: Akademische Verlagsanstalt 1928, 226 pages.
Theodor W. Adorno: Philosophie der neuen Musik, Tübingen: J.C.B. Mohr 1949; 2nd edition Frankfurt: Europäische Verlagsanstalt 1958; 3rd ed. 1966, last edition.
Hans Heinz Stuckenschmidt: Neue Musik zwischen den beiden Kriegen, Berlin: Suhrkamp 1951, 2nd edition as Neue Musik, Frankfurt: Suhrkamp 1981, latest edition
Hans Heinz Stuckenschmidt: Schöpfer der neuen Musik – Porträts und Studien, Frankfurt: Suhrkamp 1958
Hans Heinz Stuckenschmidt: Musik des 20. Jahrhunderts, Munich: Kindler 1969
Hans Heinz Stuckenschmidt: Die Musik eines halben Jahrhunderts – 1925 bis 1975 – Essay und Kritik, München: Piper 1976
Stephan Hinton: Neue Sachlichkeit, 1989, 12 pages, in Hans Heinrich Eggebrecht and Albrecht Riethmüller (ed.): Handwörterbuch der musikalischen Terminologie, loose-leaf collection, Wiesbaden: Steiner 1971–2006
Martin Thrun: Neue Musik im deutschen Musikleben bis 1933. Orpheus, Bonn 1995.
Avantgarde
Josef Häusler: Musik im 20. Jahrhundert – Von Schönberg zu Penderecki, Bremen: Schünemann 1969. 80-page overview, 340 pages of "portrait sketches of modern composers".
Ulrich Dibelius: Moderne Musik nach 1945, 1966/1988, 3rd extended new edition Munich: Piper 1998, 891 pages.
Hans Vogt: Neue Musik seit 1945, 1972, 3rd extended edition Stuttgart: Reclam 1982, 538 pages.
Dieter Zimmerschied (ed.): Perspektiven Neuer Musik. Material und didaktische Information, Mainz: Schott 1974, 333 pages.
References
External links
Themenportal Neue Musik in Deutschland of the Deutsches Musikinformationszentrum (German Music Information Centre)
BabelScores – promotion and dissemination of New Music
Online dossier Neue Musik aus Deutschland of the Goethe-Institut
www.neue-musik.fm – internet radio featuring exclusively New Music
Datenbank Neue Musik
Walther Erbacher, composer and expert on New Music
Music history | 0.768477 | 0.968234 | 0.744065 |
Technopoly | Technopoly: The Surrender of Culture to Technology is a book by Neil Postman published in 1992 that describes the development and characteristics of a "technopoly". He defines a technopoly as a society in which technology is deified, meaning “the culture seeks its authorisation in technology, finds its satisfactions in technology, and takes its orders from technology”. It is characterised by a surplus of information generated by technology, which technological tools are in turn employed to cope with, in order to provide direction and purpose for society and individuals.
Postman considers technopoly to be the most recent of three kinds of cultures distinguished by shifts in their attitude towards technology – tool-using cultures, technocracies, and technopolies. Each, he says, is produced by the emergence of new technologies that "compete with old ones…mostly for dominance of their worldviews".
Tool-using culture
According to Postman, a tool-using culture employs technologies only to solve physical problems, as spears, cooking utensils, and water mills do, and to "serve the symbolic world" of religion, art, politics and tradition, as tools used to construct cathedrals do. He claims that all such cultures are either theocratic or "unified by some metaphysical theory", which forces tools to operate within the bounds of a controlling ideology and makes it "almost impossible for technics to subordinate people to its own needs".
Technocracy
In a technocracy, rather than existing in harmony with a theocratic world-view, tools are central to the "thought-world" of the culture. Postman claims that tools "attack culture…[and] bid to become culture", subordinating existing traditions, politics, and religions. Postman cites the example of the telescope destroying the Judeo-Christian belief that the Earth is the centre of the Solar System, bringing about a "collapse…of the moral centre of gravity in the West".
Postman characterises a technocracy as compelled by the "impulse to invent", an ideology first advocated by Francis Bacon in the early 17th Century. He believed that human beings could acquire knowledge about the natural world and use it to "improve the lot of mankind", which led to the idea of invention for its own sake and the idea of progress. According to Postman, this thinking became widespread in Europe from the late 18th Century.
However, a technocratic society remains loosely controlled by social and religious traditions, he clarifies. For instance, he states that the United States remained bound to notions of "holy men and sin, grandmothers and families, regional loyalties and two-thousand-year-old traditions" at the time of its founding.
Technopoly
Postman defines technopoly as a "totalitarian technocracy", which demands the "submission of all forms of cultural life to the sovereignty of technique and technology". Echoing Jacques Ellul's 1964 conceptualisation of technology as autonomous, "self-determinative" independently of human action, and undirected in its growth, Postman argues that in a time of technopoly, technology actively eliminates all other 'thought-worlds'. Thus, it reduces human life to finding meaning in machines and technique.
This is exemplified, in Postman's view, by the computer, the "quintessential, incomparable, near-perfect" technology for a technopoly. It establishes sovereignty over all areas of human experience based on the claim that it "'thinks' better than we can".
Values of "technological theology"
A technopoly is founded on the belief that technique is superior to lax, ambiguous and complex human thinking and judgement, in keeping with one of Frederick W. Taylor’s ‘Principles of scientific management’. It values efficiency, precision, and objectivity.
It also relies upon the "elevation of information to a metaphysical status: information as both the means and end of human creativity". The idea of progress is overcome by the goal of obtaining information for its own sake. Therefore, a technopoly is characterised by a lack of a cultural coherence or a "transcendent sense of purpose or meaning".
Postman attributes the origins of technopoly to ‘scientism’, the belief held by early social scientists including Auguste Comte that the practices of natural and social science would reveal the truth of human behaviour and provide "an empirical source of moral authority".
Consequences of technopoly
Postman refers to Harold Innis’ concept of "knowledge monopolies" to explain the manner in which technology usurps power in a technopoly. New technologies transform those who can create and use them into an "elite group", a knowledge monopoly, which is granted "undeserved authority and prestige by those who have no such competence". Subsequently, Postman claims, those outside of this monopoly are led to believe in the false "wisdom" offered by the new technology, which has little relevance to the average person.
Telegraphy and photography, he states, redefined information from something that was sought out to solve particular problems to a commodity that is potentially irrelevant to the receiver. Thus, in technopoly, "information appears indiscriminately, directed at no one in particular, in enormous volume at high speeds, and disconnected from theory, meaning, or purpose".
In the U.S. technopoly, excessive faith and trust in technology and quantification has led to absurdities such as an excess of medical tests in lieu of a doctor's judgment, treatment-induced ('iatrogenic') illnesses, scoring in beauty contests, an emphasis on exact scheduling in academic courses, and the interpretation of individuals through "invisible technologies" like IQ tests, opinion polls, and academic grading, which leave out meaning or nuance. When bureaucracies implement their rules in computers, it can happen that the computer's output becomes decisive, the original social objective is treated as irrelevant, and the decisions embedded in the computer system go unquestioned in practice even when they should be questioned. The author criticizes the use of metaphors that characterize people as information-processing machines or vice versa—e.g. that people are "programmed" or "de-programmed" or "hard-wired", or that "the computer believes ..."; such metaphors are "reductionist".
A technopoly also trivialises significant cultural and religious symbols through their endless reproduction. Postman echoes Jean Baudrillard in this view, who theorises that "technique as a medium quashes … the ‘message’ of the product (its use value)", since a symbol's "social finality gets lost in seriality".
Criticism of Technopoly
Technological determinism
Postman's argument stems from the premise that the uses of a technology are determined by its characteristics – "its functions follow from its form". This draws on Marshall McLuhan's theory that "the medium is the message" because it controls the scale and form of human interaction. Hence, Postman claims that once introduced, each technology "plays out its hand", leaving its users to be, in Thoreau's words, "tools of our tools".
According to Tiles and Oberdiek, this pessimistic understanding of pervasive technology renders individuals "strangely impotent". David Croteau and William Hoynes criticise such technologically deterministic arguments for underestimating the agency of a technology's users. Russell Neuman suggests that ordinary people skilfully organise, filter, and skim information, and actively “seek out” information rather than feeling overwhelmed by it.
It has also been argued that technologies are shaped by social factors more so than by their inherent properties. Star suggests that Postman neglects to account for the "actual development, adaptation and regulation of technology".
Values
According to Tiles and Oberdiek, pessimistic accounts of technology overriding culture are based on a particular vision of human values. They emphasise "artistic creativity, intellectual culture, development of interpersonal relations, or religion as being the realms in which human freedom finds expression and in which human fulfilment is to be found". They suggest that technological optimists merely adhere to an alternative worldview that values the "exercise of reason in the service of free will" and the ability of technological developments to "serve human ends".
Science and ideology
Postman's characterisation of technology as an ideological being has also been criticised. He refers to the "god" of technopolists speaking of "efficiency, precision, objectivity", and hence eliminating the notions of sin and evil which exist in a separate "moral universe". Stuart Weir argues that technologies are "not ideological beings that take…near-anthropomorphic control of people’s loves, beliefs and aspirations". He in fact suggests that new technologies have had remarkably little effect on pre-existing human beliefs.
Persistence of old world ideologies
Postman speaks of technological change as "ecological…one significant change generates total change". Hence, technopoly brought about by communications technologies must result in a drastic change in the beliefs of a society, such that prior "thought worlds" of ritual, myth, and religion cannot exist. Star conversely argues that new tools may create new environments, but do "not necessarily extinguish older beliefs or the ability to act pragmatically upon them".
Reviews
Gonzaga University professor Paul De Palma wrote for the technology journal ACM SIGCAS Computers and Society in March 1995 praising "the elegant little book". He also remarked:
See also
The Cult of the Amateur
Amusing Ourselves to Death
An Army of Davids
The Global Trap
Notes
References
External links
Booknotes interview with Postman on Technopoly (August 30, 1992)
Smart Reads. An interactive reading by Richard Grove (November 26, 2019)
1992 non-fiction books
Technology books
Technology in society | 0.760973 | 0.977741 | 0.744035 |
The Act of Creation | The Act of Creation is a 1964 book by Arthur Koestler. It is a study of the processes of discovery, invention, imagination and creativity in humour, science, and the arts. It lays out Koestler's attempt to develop an elaborate general theory of human creativity.
From describing and comparing many different examples of invention and discovery, Koestler concludes that they all share a common pattern which he terms "bisociation" – a blending of elements drawn from two previously unrelated matrices of thought into a new matrix of meaning by way of a process involving comparison, abstraction and categorisation, analogies and metaphors. He regards many different mental phenomena based on comparison (such as analogies, metaphors, parables, allegories, jokes, identification, role-playing, acting, personification, anthropomorphism etc.), as special cases of "bisociation".
Book One: The Art of Discovery and the Discoveries of Art
The Act of Creation is divided into two books. In the first book, Koestler proposes a global theory of creative activity encompassing humour, scientific inquiry, and art. Koestler's fundamental idea is that any creative act is a bisociation (not mere association) of two (or more) apparently incompatible frames of thought. Employing a spatial metaphor, Koestler calls such frames of thought matrices: "any ability, habit, or skill, any pattern of ordered behaviour governed by a 'code' of fixed rules." Koestler argues that the diverse forms of human creativity all correspond to variations of his model of bisociation.
In jokes and humour, the audience is led to expect a certain outcome compatible with a particular matrix (e.g. the narrative storyline); a punch line, however, replaces the original matrix with an alternative to comic effect. The structure of a joke, then, is essentially that of bait-and-switch. In scientific inquiry, the two matrices are fused into a new larger synthesis. The recognition that two previously disconnected matrices are compatible generates the experience of eureka. Finally, in the arts and in ritual, the two matrices are held in juxtaposition to one another. Observing art is a process of experiencing this juxtaposition, with both matrices sustained.
According to Koestler, many bisociative creative breakthroughs occur after a period of intense conscious effort directed at the creative goal or problem, in a period of relaxation when rational thought is abandoned, like during dreams and trances. Koestler affirms that all creatures have the capacity for creative activity, frequently suppressed by the automatic routines of thought and behaviour that dominate their lives.
Book Two: Habit and Originality
The second book of The Act of Creation aims to develop a biological and psychological foundation for the theory of creation proposed in book one. Koestler found that the psychology of his day (behaviorism, cognitivism), by portraying man merely as an automaton, disregarded the creative abilities of the mind. Koestler draws on theories of play, imprinting, motivation, perception, Gestalt psychology, and others to lay a theoretical foundation for his theory of creativity.
Literature
Reed Merrill: Arthur Koestler. In: Irene R. Makaryk (Ed.): Encyclopedia of contemporary literary theory. University of Toronto Press, 1993, pp. 390–392.
See also
Analogy
Conceptual blending
Conceptual metaphor theory
Creativity
Gregory Bateson
Invention
Smile
References
External links
Influence of Koestler's writings on creativity
How Creativity in Humor, Art, and Science Works: Arthur Koestler’s Theory of Bisociation
1964 non-fiction books
Books by Arthur Koestler
Books about creativity
English-language books
Hutchinson (publisher) books | 0.762858 | 0.9753 | 0.744016 |
Syntagma (linguistics) | In linguistics, a syntagma is an elementary constituent segment within a text. Such a segment can be a phoneme, a word, a grammatical phrase, a sentence, or an event within a larger narrative structure, depending on the level of analysis. Syntagmatic analysis involves the study of relationships (rules of combination) among syntagmas.
At the lexical level, syntagmatic structure in a language is the combination of words according to the rules of syntax for that language. For example, English uses determiner + adjective + noun, e.g. the big house. Another language might use determiner + noun + adjective, as in Spanish, and therefore have a different syntagmatic structure.
At a higher level, narrative structures feature a realistic temporal flow guided by tension and relaxation; thus, for example, events or rhetorical figures may be treated as syntagmas of epic structures.
Syntagmatic structure is often contrasted with paradigmatic structure. In semiotics, "syntagmatic analysis" is analysis of syntax or surface structure (syntagmatic structure), rather than paradigms as in paradigmatic analysis. Analysis is often achieved through commutation tests.
See also
Ferdinand de Saussure
Lexeme
Morphology (linguistics)
Phonetic word
Notes
Sources
Middleton, Richard (1990/2002). Studying Popular Music. Philadelphia: Open University Press.
Cubitt, Sean (1984). Cited in Middleton (2002).
Narratology
Linguistic units | 0.761022 | 0.977566 | 0.743949 |
Deconstruction (fashion) | Deconstruction (or deconstructivism) is a fashion phenomenon of the 1980s and 1990s. It involves costume forms that expose the structure of clothing, with construction details used as external elements of the garment. This phenomenon is associated with designers Martin Margiela, Yohji Yamamoto, Rei Kawakubo, Karl Lagerfeld, Ann Demeulemeester and Dries van Noten. Deconstructivism in fashion is considered part of a philosophical system formed under the influence of the works of Jacques Derrida.
Term
In fashion, the term "deconstructivism" emerged in the second half of the 1980s and early 1990s. The principles of this direction were outlined in 1985 in Harold Koda's article "Rei Kawakubo and the Aesthetic of Poverty". In the early 1990s, Harold Koda and Richard Martin introduced the concept of fashion deconstruction in the Infra-Apparel exhibition catalog, where "deconstructivism" was described as a unified trend of the 1990s. It is believed that the term "deconstructivism" in relation to fashion began to be used after an architectural exhibition in 1988 at the Museum of Modern Art in New York. The text that summarized the basic principles of deconstructivism in the 1990s was Alison Gill's "Deconstruction Fashion: The Making of Unfinished, Decomposing and Re-Assembled Clothes". Gill defined deconstruction in fashion as a term to describe "garments on a runway that are unfinished, coming apart, recycled, transparent or grunge".
General principles
Origin
Deconstructivism is considered one of the most influential fashion trends of the 1980s and 1990s. It arose as a reaction to continental philosophy and can be seen as one of the attempts to present fashion as an intellectual movement. Designers and critics have emphasized the alternative nature of fashion deconstruction to commercial or runway fashion, although this opposition is rather relative. Deconstructivism was focused not so much on the mechanism and rules of the fashion industry, but on philosophy and architecture.
Basic elements
Deconstructivism is associated with the emergence of a new cutting technique that emphasized the structural elements of the costume. At the same time, deconstruction is considered a protest against the style of the 1980s. It is assessed as an attempt to create a new direction in costume both in terms of shaping and in the sense of creating a new fashion ideology. Deconstructivism involves identifying elements of cut in the external appearance of a suit.
Designers
There are different points of view as to which designers should be considered representatives of deconstruction in fashion. The list of main participants is ambiguous. In some cases, it is limited to representatives of the "Antwerp Six", with special emphasis on such names as Martin Margiela and Ann Demeulemeester.
Deconstructivism and the concept of intellectual fashion
The idea of resistance, embedded within the framework of deconstruction, implied the desire to see fashion as an intellectual sphere. The structure of the costume was represented by the intellectual side of the clothing. Under the influence of deconstruction, a new strategy was formed in fashion: an understanding of fashion as an intellectual phenomenon.
Deconstruction in fashion and architecture
The emergence of deconstructivism in fashion is associated with the architectural tradition. The starting point is considered to be the exhibition "Deconstructivist Architecture", which took place at the Museum of Modern Art in New York in 1988. The exhibition presented works by then little-known architects Rem Koolhaas, Zaha Hadid, Frank Gehry, Peter Eisenman, Daniel Libeskind and Bernard Tschumi.
Deconstructivism in fashion and philosophy
Deconstructivism in fashion is usually correlated with deconstruction as a philosophical movement, primarily with the works of Jacques Derrida. Deconstructivism in fashion is presented as a rethinking of the philosophical method formed by representatives of the European and Yale schools. Fashion deconstruction also implies that the fashion system in general, and costume in particular, is erroneously thought of as a structure. Deconstruction in fashion was part of a philosophical movement whose ideas could be expressed in applied forms. For fashion, turning to the philosophy of deconstruction was one of the ways to confirm its intellectual status.
Deconstruction and the idea of disorder
Deconstructivism in fashion was not a protest against the idea of order as such. It developed as resistance to a certain type of order: deconstructivism assumed the possibility of decentralizing the system (including the fashion system) and of verifying externally established rules. In fashion deconstruction, the disorderly was part of an established system, and disorder was positioned as a structural element.
Deconstruction and anti-fashion
Deconstructivism in costume has become one of the consistent trends built on opposition to the idea of fashion. It became a form of criticism of standard commercial clothing and implied the possibility of a system focused on a philosophical prototype. Deconstructivism suggested the possibility of a new social reference point for fashion. In addition, deconstructivism was one of the first large-scale movements that outlined the very possibility of alternative fashion.
See also
Antwerp Six
Martin Margiela
1980s in fashion
1990s in fashion
References
Sources
Brunette P., Wills D. Deconstruction and the Visual Arts: Art, Media, Architecture. Cambridge: Cambridge University Press, 1994.
Cunningham B. Fashion du Siècle // Details, 1990, No. 8. pp. 177–300.
Koda H. Rei Kawakubo and the Aesthetic of Poverty / Costume: Journal of Costume Society of America, 1985, No. 11, pp. 5–10.
Martin R., Koda H. Infra-Apparel. [Exhibition catalogue]. New York: Metropolitan Museum of Art, 1993.
O'Shea S. La mode Destroy // Vogue (Paris), 1992, May.
O'Shea S. 1991. Recycling: An All-New Fabrication of Style // Elle, 1991, No. 2, pp. 234–239.
Further reading
Wilcox C. Radical Fashion. [Exhibition catalogue]. London: V & A Publications, 2001. 144 p.
Martin R. Destitution and Deconstruction: The Riches of Poverty in the Fashion of the 1990s // Textile & Text, 1992, vol. 15, No. 2, pp. 3–12.
McLeod M. Undressing Architecture: Fashion, Gender, and Modernity // Architecture: In Fashion / Ed. by D. Fausch et al. Princeton: Princeton Architectural Press, 1994.
Granata F. Deconstruction and the Grotesque: Martin Margiela / Experimental Fashion: Performance Art, Carnival and the Grotesque Body. London and New York: I.B. Tauris, 2017. pp. 74–102.
Gill A. Deconstruction Fashion: The Making of Unfinished, Decomposing and Re-Assembled Clothes // Fashion Theory: The Journal of Dress, Body & Culture. 1998. Vol. 2.1. pp. 25–49.
Gill A. Jacques Derrida: fashion under erasure. / A. Rocamora & A. Smelik (Eds.), Thinking Through Fashion: A Guide to Key Theorists. London: I.B. Tauris, 2016. pp. 251–268.
Deconstructivism
Fashion aesthetics
20th-century fashion | 0.761495 | 0.976914 | 0.743915 |
Background, foreground, sideground and postground intellectual property | In the context of research and development (R&D) collaborations, background, foreground, sideground and postground intellectual property (IP) are four distinct forms of intellectual property assets. These are included in the broader and more general categories of knowledge in R&D collaborations or open innovation. While background and foreground IP and knowledge are fairly established concepts, sideground and postground IP and knowledge have more recently been added to the conceptual vocabulary. This set of four concepts was first introduced by Prof. Ove Granstrand in a European Commission report in 2001.
The four knowledge/IP types are defined by Granstrand and Holgersson (2014):
Background knowledge/IP is knowledge/IP that is relevant to a collaborative venture or open innovation project that is supplied by the partners at the start of the project.
Foreground knowledge/IP is all the knowledge/IP produced within the collaborative venture or open innovation project during the project’s tenure.
Sideground knowledge/IP is knowledge/IP that is relevant to a collaborative venture or open innovation project, but produced outside the project by any of the partners during the project’s tenure.
Postground knowledge/IP is knowledge/IP that is relevant to a collaborative venture or open innovation project that is produced by any of the partners after the project ends.
References
Further reading
Intellectual property law
Economics of intellectual property
Research and development | 0.761416 | 0.976974 | 0.743884 |
Teleonomy | Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures and functions in living organisms brought about by natural processes like natural selection. The term derives from two Greek words, τέλος, from τελε-, ("end", "goal", "purpose") and νόμος nomos ("law"). Teleonomy is sometimes contrasted with teleology, where the latter is understood as a purposeful goal-directedness brought about through human or divine intention. Teleonomy is thought to derive from evolutionary history, adaptation for reproductive success, and/or the operation of a program. Teleonomy is related to programmatic or computational aspects of purpose.
Relationship with teleology
Colin Pittendrigh, who coined the term in 1958, applied it to biological phenomena that appear to be end-directed, hoping to limit the much older term teleology to actions planned by an agent who can internally model alternative futures with intention, purpose and foresight.
In 1965 Ernst Mayr cited Pittendrigh and criticized him for not making a "clear distinction between the two teleologies of Aristotle"; evolution involves Aristotle's material and formal causes rather than efficient causes. Mayr adopted Pittendrigh's term, but supplied his own definition.
Richard Dawkins described the properties of "archeo-purpose" (by natural selection) and "neo-purpose" (by evolved adaptation) in his talk on the "Purpose of Purpose". Dawkins credits the brain's flexibility, itself an evolutionary feature, with the capacity to adapt or subvert goals, layering neo-purpose goals onto an overarching evolutionary archeo-purpose. Language allows groups to share neo-purposes, and cultural evolution, occurring much faster than natural evolution, can lead to conflict or collaboration.
In behavior analysis, Hayne Reese made the adverbial distinction between purposefulness (having an internal determination) and purposiveness (serving or effecting a useful function). Reese implies that non-teleological statements are called teleonomic when they represent the antecedent of an "if A then C" phenomenon, whereas teleology is a consequent representation. The concept of purpose, understood only as the teleological final cause, requires a supposedly impossible time reversal, because the future consequent determines the present antecedent. Purpose, as being present both in the beginning and at the end, simply rejects teleology and addresses the time-reversal problem. On this view, Reese sees no value for teleology and teleonomic concepts in behavior analysis; however, the concept of purpose preserved in process can be useful, if not reified. A theoretical time-dimensional tunneling and teleological functioning of temporal paradox would also fit this description without the necessity of a localized intelligence. The concept of a teleonomic process, such as evolution, by contrast, can simply refer to a system capable of producing complex products without the benefit of guiding foresight.
In 1966 George C. Williams approved of the term in the last chapter of his Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought. In 1970, Jacques Monod, in Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology, suggested teleonomy as a key feature that defines life.
In 1974 Ernst Mayr illustrated the difference in the statements:
"The Wood Thrush migrates in the fall in order to escape the inclemency of the weather and the food shortages of the northern climates."
"The Wood Thrush migrates in the fall and thereby escapes the inclemency of the weather and the food shortages of the northern climates."
Subsequently, philosophers like Ernest Nagel further analysed the concept of goal-directedness in biology, and by 1982 the philosopher and historian of science David Hull was joking about the use of teleology and teleonomy by biologists.
Relationship to evolution
The concept of teleonomy was largely developed by Mayr and Pittendrigh to separate biological evolution from teleology. Pittendrigh's purpose was to enable biologists who had become overly cautious about goal-oriented language to have a way of discussing the goals and orientations of an organism's behaviors without inadvertently invoking teleology. Mayr was even more explicit, saying that while teleonomy certainly operates on the level of organisms, the process of evolution itself is necessarily non-teleonomic.
This attitude towards the role of teleonomy in the evolutionary process is the consensus view of the modern synthesis.
Evolution largely hoards hindsight, as variations unwittingly make "predictions" about structures and functions which could successfully cope with the future, and which participate in a process of natural selection that culls the unfit, leaving the fit to the next generation. Information accumulates about functions and structures that are successful, exploiting feedback from the environment via the selection of fitter coalitions of structures and functions. Robert Rosen has described these features as an anticipatory system which builds an internal model based on past and possible future states.
In 1962, Grace A. de Laguna's "The Role of Teleonomy in Evolution" attempted to show how different stages of evolution were characterized by different types of teleonomy. de Laguna points out that humans have oriented teleonomy so that the teleonomic goal is not restricted to the reproduction of humans, but also to cultural ideals.
In recent years, a few biologists believe that the separation of teleonomy from the process of evolution has gone too far. Peter Corning notes that behavior, which is a teleonomic trait, is responsible for the construction of biological niches, which is an agent of selection. Therefore, it would be inaccurate to say that there was no role for teleonomy in the process of evolution, since teleonomy dictates the fitness landscape according to which organisms are selected. Corning calls this phenomenon "teleonomic selection". Additionally, recent research has demonstrated that mutations are not random with reference to their value to the organism. Monroe and colleagues presented solid evidence that the most important genes undergo fewer mutations. If the phenomenon responsible for making the most important genes undergo fewer mutations remained an enigma, many would easily assume that there is some form of control systems (teleonomy) in the generation of mutations. Assuming this would be incorrect, as the phenomenon responsible for making genes more "protected" from mutations occurs completely automatically, without any teleonomic aspect.
Philosophy
The Dutch Jewish philosopher Baruch Spinoza defined conatus as the tendency for individual things to persist in existence, meaning the pursuit of stability within the internal relations between their individual parts, in a similar way to homeostasis.
Spinoza also rejected the idea of finalism and asserted nature does not pursue specific goals and acts in a deterministic although non-directed way.
In teleology, Kant's positions as expressed in Critique of Judgment, were neglected for many years because in the minds of many scientists they were associated with vitalist views of evolution. Their recent rehabilitation is evident in teleonomy, which bears a number of features, such as the description of organisms, that are reminiscent of the Aristotelian conception of final causes as essentially recursive in nature. Kant's position is that, even though we cannot know whether there are final causes in nature, we are constrained by the peculiar nature of the human understanding to view organisms teleologically. Thus the Kantian view sees teleology as a necessary principle for the study of organisms, but only as a regulative principle, and with no ontological implications.
Talcott Parsons, in the later part of his work on a theory of social evolution and a related theory of world history, adopted the concept of teleonomy as the fundamental organizing principle for directional processes and for his theory of societal development in general. In this way, Parsons tried to find a theoretical compromise between voluntarism as a principle of action and the idea of a certain directionality in history.
Current status
Teleonomy is closely related to concepts of emergence, complexity theory, and self-organizing systems. It has extended beyond biology to be applied in the context of chemistry. Some philosophers of biology resist the term and still employ "teleology" when analyzing biological function and the language used to describe it, while others endorse it.
See also
Anthropic principle
Autopoiesis
Conatus
Naturalism (philosophy)
Orthogenesis
Religious naturalism
Theistic evolution
T-symmetry
References
Further reading
Allen, C., M. Bekoff, G. Lauder, eds., Nature's Purposes: Analyses Of Function and Design in Biology. MIT Press, 1998.
Mayr, Ernst, The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Cambridge (MA): Belknap Press of Harvard University Press, 1982, pp. 47–51 (differentiating four kinds of teleology).
Mayr, Ernst, What Makes Biology Unique?: Considerations on the Autonomy of a Scientific Discipline, Cambridge University Press, 2004.
Ruse, Michael, Darwin and Design, Harvard University Press, 2004.
External links
Merriam Webster definition
Nonlinearity and Teleology
Biological Information
Teleology
Evolution
Concepts in metaphysics | 0.765537 | 0.971672 | 0.743851 |
Fashion psychology | Fashion psychology, as a branch of applied psychology, applies psychological theories and principles to understand and explain the relationship between fashion and human behavior, including how fashion affects emotions, self-esteem, and identity. It also examines how fashion choices are influenced by factors such as culture, social norms, personal values, and individual differences. Fashion psychologists may use their knowledge and skills to advise individuals, organizations, or the fashion industry on a variety of issues, including consumer behavior, marketing strategies, design, and sustainability.
Significance
Fashion psychology is an interdisciplinary field that examines the interaction between human behavior, individual psychology, and fashion, as well as the various factors that impact an individual's clothing choice. The fashion industry is actively seeking to establish a connection with fashion psychology, with a focus on areas such as trend prediction and comprehension of consumer behavior.
It is important to acknowledge the significance of clothing choices, irrespective of gender. Fashion choices can have a profound impact on self-perception, the image a person projects to others, and consequently, the way people interact. In fact, they can influence a wide range of scenarios, from the result of a sporting event to how an interviewer perceives a candidate's capability to perform well in a job role.
Fashion psychology holds significant relevance for marketers as they strive to comprehend the variables that enhance the likelihood of a product's adoption by a consumer group. Additionally, marketers must predict the duration for which the product remains fashionable. Hence, a segment of fashion psychology is dedicated to analyzing the shifts in acceptance of fashion trends over time.
Clothing
Clothing serves as an extension of identity and provides a tangible reflection of a person's perceptions, dissatisfactions, and desires. The terms "clothing" and "dress" typically denote a type of body covering that can be worn, which is commonly made of textile material but may also utilize other materials or substances to be fashioned and secured in place. Clothing primarily served the purpose of providing warmth and protection against the elements. In modern times, however, clothing serves multiple functions beyond just protection, including identification, modesty, status, and adornment. Clothing is used to identify group membership, cover the body appropriately, indicate rank or position within a group, and facilitate self-expression and creativity. The clothing a person chooses to wear is significant in terms of their image and reputation, as it sends out messages to both familiar and unfamiliar people, showcasing the person's image. When an object is worn on the body, it takes on social significance in relation to the person wearing it.
Fashion
The prevalent understanding of fashion refers to the prevailing style that is adopted by a significant portion of a particular group, at a given time and location. For example, during the era of cave dwellers, animal skins were considered fashionable, while the sari is a popular style among Indian women, and the miniskirt has become a trend among women in Western cultures. Fashion psychology is typically characterized as the examination of how the selection of attire affects perceptions and people's evaluations of one another.
Psychology of clothing
Throughout history, clothing has not held the same degree of importance in conveying personality as it does in present times. Technological advancements over the centuries have resulted in fashion choices becoming a significant aspect of identity.
During early civilizations, clothing served the primary purpose of keeping us warm and dry. Today, with the advent of technological facilities such as central heating, we have become less reliant on clothing as a means of survival. Clothes have evolved from being merely a practical necessity to becoming a social marker, influencing self-perception and allowing people to present themselves in the desired light while also showcasing their personalities and social status.
In numerous societies, one's dress sense is considered a reflection of personal wealth and taste, as highlighted by the economist George Taylor through the hemline index.
The fashion impulse is a highly influential and potent social phenomenon owing to its pervasive and expeditious character, its capacity to influence an individual's conduct, and its close association with the societal and economic fabric of a nation.
The phrase "You Are What You Wear" implies that people can be judged based on their clothing choices. It suggests that clothing is not just a means of covering the body, but a reflection of a person's identity, values, and social status. The garments we choose to wear serve as a representation of our current thoughts and emotions. Frequently, instances of clothing mishaps can be attributed to underlying internal conflicts manifesting themselves outwardly. Choosing clothing that provides comfort, joy, and a positive self-image can genuinely enhance one's quality of life. Even the slightest modification in one's wardrobe can trigger a sequence of events that leads to new experiences, self-discovery, and cherished moments.
Socio-psychological Impact
The clothing a person chooses can reflect their mental and emotional state, making clothing mishaps a visible manifestation of internal struggles. According to Mary Lynn Damhorst, a researcher in this field, clothing is a systematic method of conveying information about the wearer. This suggests that an individual's selection of attire can significantly impact the impression they convey and, consequently, serves as a potent means of communication.
Upbringing and fashion choice
Madonna describes her upbringing in a strict Catholic family, where wearing pants to church was strongly discouraged by her father. Reflecting on this experience, she acknowledges the powerful influence that clothing can have and how it inspired her to incorporate a mix of conservative and daring elements in her personal style. She refers to this combination as "combinations of strictness and rebelliousness." Madonna's fashion choices, including her crucifix earrings and rosary bead necklaces, were influenced by this realization.
Body image
Clothing can be perceived as an extension of an individual's physical self and serves the purpose of modifying the body's appearance. The way in which a person perceives their own physical appearance has a significant impact on their attitudes and preferences towards clothing.
Millennial females, also known as Generation Y, are being socialized to begin their fashion consumption at an earlier age than their predecessors, resulting in a shift in the typical starting point of fashion consumption. Even though Generation Y consumers play a crucial role in the decision-making process of the market, retailers are finding it increasingly difficult to comprehend the behavior and psychology of these consumers.
Brand
Consumers purchase fashion-branded products not only to meet their functional requirements but also to fulfill their desires for social recognition, self-image projection, and a desirable lifestyle. The implementation of effective branding strategies is a crucial determinant of success for all types of fashion brands, as it has a direct impact on the welfare of consumers.
Marketing strategies
The fashion industry is currently shifting towards a data-driven approach, where brands are leveraging analytical services to formulate innovative marketing strategies.
The impact of artificial intelligence on marketing strategies is expected to extend to various areas, such as business models, sales processes, customer service options, and even consumer behaviors.
Impact of clothing color
Psychologists hold the belief that the color of apparel can have an impact on emotional states and stress levels. The presence of color has the potential to augment an individual's perception of their environment.
Design
Fashion psychology concerns itself with examining the ways in which fashion design can influence a positive body image, utilizing psychological insights to foster a sustainable approach towards clothing production and disposal, and understanding the underlying reasons behind the development of specific shopping behaviors.
Men's fashion insecurities
Research has shown that the conventional gender stereotype suggesting that females are more fashion-conscious and observant of others' clothing and makeup choices than males is not completely accurate. Instead, these studies have highlighted that men also encounter insecurities linked to their clothing decisions. Indeed, men often exhibit higher levels of self-consciousness than women when it comes to their personal sense of style and the public perception of their appearance.
Dress to impress
In research conducted by Joseph Benz from the University of Nebraska, over 90 men and women were surveyed to investigate their behavior of deceiving potential partners during dates. The study revealed that both sexes engage in deceptive behaviors while dating, albeit for distinct reasons.
The study findings suggest that men engage in deceptive behavior to create a positive impression on their romantic partners. This can include highlighting their financial resources or showing willingness to provide security and stability in the relationship. Similarly, women tend to exhibit deceptive conduct concerning their physical appearance, amplifying specific bodily attributes to enhance their appeal to their romantic partner.
Shopping behavior
Compulsive buying disorder
CBD or Compulsive buying disorder is a condition in which an individual experiences distress or impairment due to their excessive shopping thoughts and buying behavior.
According to Bleuler, Kraepelin identifies a final category of individuals known as "buying maniacs" or "oniomaniacs." These individuals experience compulsive buying behavior, leading to the accumulation of debt that is often left unpaid and can ultimately result in a catastrophic situation. The oniomaniacs never fully acknowledge their debts and therefore continue to struggle with them.
In contemporary consumer-oriented societies, purchasing branded fashion apparel has become a significant part of daily routines and the economy, often regarded as a source of entertainment and a means of rewarding oneself. When indulged to excess, however, this behavior may develop into the serious psychological condition of compulsive buying.
Revenge buying and panic buying
In April 2020, when the lockdown restrictions were largely lifted and markets resumed operation in China, a phenomenon known as "revenge buying" took place. During this time, the renowned French luxury brand Hermès achieved exceptional sales of $2.7 million in a single day.
Sociologists posit that compulsive and impulsive purchasing tendencies, including panic buying and revenge buying, function as coping mechanisms that alleviate negative emotions. The phenomena of panic buying and revenge buying are essentially attempts by consumers to compensate for a situation that is beyond their personal control. These actions serve as a therapeutic means of exerting control over external circumstances, while also offering a sense of comfort, security, and an overall improvement in well-being.
Fast fashion
The emergence of fast fashion has had a significant impact on the fashion industry, altering the ways in which fashion is conceptualized, manufactured, and consumed, resulting in negative consequences across all three domains. The popularity of fast fashion among consumers can be attributed to its capability of appealing to their emotional, financial, and psychological needs by tapping into their desire for self-expression, social status, and immediate satisfaction.
See also
Attitude (psychology)
Cognitive dissonance
Feeling
Neuromarketing
Retail marketing
Semiotics of dress
Semiotics of fashion
Sensory branding
References
Fashion
Psychology of art | 0.762885 | 0.975029 | 0.743835 |
Postmodern dance | Postmodern dance is a 20th-century concert dance form that gained popularity in the early 1960s. While the term "postmodern" took on a different meaning when used to describe dance, the dance form did take inspiration from the ideologies of the wider postmodern movement, which "sought to deflate what it saw as overly pretentious and ultimately self-serving modernist views of art and the artist" and was, more generally, a departure from modernist ideals. Lacking stylistic homogeneity, postmodern dance was discerned mainly by its anti-modern-dance sentiments rather than by its dance style. The form was a reaction to the compositional and presentational constraints of the preceding generation of modern dance, hailing the use of everyday movement as valid performance art and advocating unconventional methods of dance composition.
Postmodern dance made the claim that all movement was dance expression and any person was a dancer regardless of training. In this, early postmodern dance was more closely aligned with the ideologies of modernism rather than the architectural, literary and design movements of postmodernism. However, the postmodern dance movement rapidly developed to embrace the ideas of postmodernism, which rely on chance, self-referentiality, irony, and fragmentation. Judson Dance Theater, the postmodernist collective active in New York in the 1960s, is credited as a pioneer of postmodern dance and its ideas.
The peak popularity of postmodern dance as a performance art was relatively short, lasting from the early 1960s to the mid-1980s, though under the changing definitions of postmodernism its reach arguably extends into the mid-1990s and beyond. The form's influence can be seen in various other dance forms, especially contemporary dance, and in postmodern choreographic processes that are utilized by choreographers in a wide range of dance works.
Influences
Postmodern dance can be understood as a continuation of dance history: it stems from early modernist choreographers such as Isadora Duncan, who rejected the rigidity of an academic approach to movement, and from modernists such as Martha Graham, whose emotion-filled choreography sought to exploit gravity, unlike the illusionistic floating of ballet.
Merce Cunningham, who studied under Graham, was one of the first choreographers to take major departures from the then-formalized modern dance in the 1950s. Among his innovations was the severance of the connection between music and dance, leaving the two to operate by their own logic. He also removed dance performance from the proscenium stage. To Cunningham, dance could be anything, but its foundation was the human body, beginning specifically with walking. He also incorporated chance into his work, using methods like tossing dice or coins to determine movements in a phrase. These innovations would become essential to the ideas of postmodern dance; however, Cunningham's work remained grounded in the tradition of dance technique, which the postmodernists would later eschew.
Other avant-garde artists who influenced the postmodernists include John Cage, Anna Halprin, Simone Forti, and other choreographers of the 1950s, as well as non-dance artistic movements such as Fluxus (a neo-dada group), Happenings, and Events.
Characteristics
Major characteristics of postmodern dance of the 1960s and 1970s can be attributed to its goals of questioning the process behind and reasons for dance-making while simultaneously challenging the expectations of the audience. Many dancemakers employed improvisation, spontaneous determination, and chance to create their works, instead of rigid choreography. In order to demystify and draw attention away from technique-driven dance, pedestrian movement was also employed to include everyday and casual postures. In some cases, choreographers cast non-trained dancers. Furthermore, movement was no longer bound to the tempo created by accompanying music, but to actual time. One dance artist, Yvonne Rainer, left her phrasing uninflected, which flattened the sense of time passing, since dynamics no longer mediated between time and dance.
Evolution
Early postmodern dance
The earliest usage of the term "postmodern" in dance was in the early 1960s. During the form's formative years, its only defining characteristic was the participants' rejection of its predecessor, modern dance. The pioneer choreographers utilized unconventional methods, such as chance procedures and improvisation. Chance procedure, also known as dance by chance, is a method of choreography "based on the idea that there are no prescribed movement materials or orders for a series of actions." This means that chance methods, which could be the toss of a coin, determine the movements rather than the choreographer. Dance by chance was not a distinctly postmodern method; it was first used by the modern dancer and choreographer Merce Cunningham. Thus, despite their adamant rejection of their predecessors, many early postmodern choreographers embraced techniques of modern dance and classical ballet.
Analytical postmodern dance
As postmodern dance progressed into the 1970s, a more identifiable, postmodern style emerged. Sally Banes uses the term "analytical postmodern" to describe the form during the 70s. It was more conceptual, abstract, and distanced itself from expressive elements such as music, lighting, costumes, and props. In this way, analytical postmodern dance aligned more with modernist criteria as defined by art critic Clement Greenberg. Analytical postmodern "became objective as it was distanced from personal expression through the use of scores, bodily attitudes that suggested work and other ordinary movements, verbal commentaries, and tasks." Modernist influence can also be seen in the analytical postmodern choreographers' use of minimalism, a method used in art that relies on "excessive simplicity and objective approach."
Analytical postmodern dance was also heavily influenced by the political activism taking place in the U.S. during the late 60s and early 70s. The Black Power movement, the anti-Vietnam war movement, the second-wave feminist movement, and the LGBTQ movement all became more explicitly explored in analytical postmodern dance. Many postmodern dancers during this time, despite their Euro-American backgrounds, were heavily influenced by African-American and Asian forms of dance, music and martial arts.
Postmodern dance 1980 and beyond
The 1980s saw a distancing from the analytical postmodern dance of the previous decade and a return to the expression of meaning, which had been rejected by the postmodern dance of the '60s and '70s. Though postmodern dance of the '80s and beyond lacked a unifying style, specific aspects could be seen throughout the work of various choreographers. The form took on an "alliance with the avant/pop music world" and saw increased distribution on international main stages, with performances in venues such as City Center and the Brooklyn Academy of Music, both in New York City. There was also an increased interest in preserving dance on film, in repertory, etc., which contrasts with the improvisational attitudes of early postmodern dance choreographers.
Another aspect that unifies the postmodern dance of 1980 forward is the interest in "narrative content and the traditions of dance history." The more recent forms of postmodern dance have distanced themselves from the formalism of the '70s and began a greater exploration into "meaning of all kinds, from virtuosic skill to language and gesture systems to narrative, autobiography, character, and political manifestos."
Postmodern choreographic process
Postmodern dance utilized many unconventional methods during the choreographic process. One of the main methods used was chance, a technique pioneered in dance by Merce Cunningham that relied on the idea that there were "no prescribed movement materials or orders for a series of actions." Choreographers would use random numbers and equations or even roll dice to determine "how to sequence choreographic phrases, how many dancers would perform at any given point, where they would stand on stage, and where they would enter and exit." In using the chance technique, it was not uncommon for dancers in a postmodern piece to hear the music they were dancing to for the first time during the premiere performance.
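The chance technique described above amounts to a small randomized procedure, and can be sketched in code. The following is a hypothetical illustration only: the phrase names, dancer labels, stage positions, and the mapping of die rolls and coin tosses to decisions are all invented for demonstration, not drawn from Cunningham's or any choreographer's actual score.

```python
import random

# Invented pools of material for the sketch; a real chance score would
# substitute the choreographer's own phrases, performers, and positions.
PHRASES = ["walk", "fall", "turn", "reach", "stillness", "run"]
DANCERS = ["A", "B", "C", "D"]
POSITIONS = ["upstage left", "centre", "downstage right"]

def chance_score(num_sections, seed=None):
    """Assemble a performance score using only dice rolls and coin tosses."""
    rng = random.Random(seed)
    score = []
    for _ in range(num_sections):
        die = rng.randint(1, 6)                # die roll selects the phrase
        coin = rng.choice(["heads", "tails"])  # coin toss sets group size
        score.append({
            "phrase": PHRASES[die - 1],
            "dancers": rng.sample(DANCERS, 2 if coin == "heads" else 1),
            "position": rng.choice(POSITIONS),
        })
    return score

for i, section in enumerate(chance_score(3, seed=7), start=1):
    print(i, section)
```

Seeding the random source makes a given score reproducible for rehearsal; omitting the seed leaves each run entirely to chance, closer in spirit to a one-time toss of coins or dice.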
Postmodern choreographers also often utilized an objectivism similar to literary theorist Roland Barthes' idea of "death of the author." Narratives were rarely conveyed in postmodern dance, with the choreographer more focused on "creating an objective presence." Performances were stripped down – dancers wore simple costumes, the music was minimalist or, in some cases, nonexistent, and performances often "[unfolded] in objective or clock-time rather than a theatrically-condensed or musically-abstract time." In this, postmodern choreography reflects the objective present, rather than the thoughts and ideas of the choreographer.
Although postmodern choreography may have seldom conveyed conventional narrative, postmodern artists of the 1960s and 1970s have also been known to make dances with implicit or explicit political themes. Yvonne Rainer has a history of politically conscious and active dance-making. For example, while still recovering from a major abdominal surgery, she performed her work Trio A and called it Convalescent Dance as part of a program of anti-Vietnam War works during Angry Arts Week in 1967. The works Steve Paxton created in the 1960s also were politically sensitive, exploring issues of censorship, war, and political corruption.
References
Notes
Further reading
Banes, S (1987) Terpsichore in Sneakers: Post-Modern Dance. Wesleyan University Press.
Banes, S (Ed) (1993) Greenwich Village 1963: Avant-Garde Performance and the Effervescent Body. Duke University Press.
Banes, S (Ed) (2003) Reinventing Dance in the 1960s: Everything Was Possible. University of Wisconsin Press.
Bremser, M. (Ed) (1999) Fifty Contemporary Choreographers. Routledge.
Carter, A. (1998) The Routledge Dance Studies Reader. Routledge.
Copeland, R. (2004) Merce Cunningham: The Modernizing of Modern Dance. Routledge.
Denby, E. (1965) Dancers, Buildings, and People in the Streets. Curtis Books.
Reynolds, N. and McCormick, M. (2003) No Fixed Points: Dance in the Twentieth Century. Yale University Press.
Dance
20th-century dance
Concert dance | 0.761151 | 0.977201 | 0.743797 |
Dogme language teaching | Dogme language teaching is considered to be both a methodology and a movement. Dogme is a communicative approach to language teaching that encourages teaching without published textbooks and focuses instead on conversational communication among learners and teacher. It has its roots in an article by the language education author Scott Thornbury. The Dogme approach is also referred to as "Dogme ELT", which reflects its origins in the ELT (English language teaching) sector. Although Dogme language teaching gained its name from an analogy with the Dogme 95 film movement (initiated by Lars von Trier), in which directors, actors, and actresses take a "vow of chastity" to minimize their reliance on special effects that may feel inauthentic to viewers, the connection is not considered close.
Key principles
Dogme has ten key principles.
Interactivity: the most direct route to learning is to be found in the interactivity between teachers and students and amongst the students themselves.
Engagement: students are most engaged by content they have created themselves.
Dialogic processes: learning is social and dialogic, where knowledge is co-constructed.
Scaffolded conversations: learning takes place through conversations, where the learner and teacher co-construct the knowledge and skills.
Emergence: language and grammar emerge from the learning process. This is seen as distinct from the 'acquisition' of language.
Affordances: the teacher's role is to optimize language learning affordances through directing attention to emergent language.
Voice: the learner's voice is given recognition along with the learner's beliefs and knowledge.
Empowerment: students and teachers are empowered by freeing the classroom of published materials and textbooks.
Relevance: materials (e.g. texts, audios and videos) should have relevance for the learners.
Critical use: teachers and students should use published materials and textbooks in a critical way that recognizes their cultural and ideological biases.
Main precepts
There are three precepts (later described by Thornbury as the "three pillars" of Dogme) that emerge from the ten key principles.
Conversation-driven teaching
Conversation is seen as central to language learning within the Dogme framework, because it is the "fundamental and universal form of language" and so is considered to be "language at work". Since real life conversation is more interactional than it is transactional, Dogme places more value on communication that promotes social interaction. Dogme also places more emphasis on a discourse-level (rather than sentence-level) approach to language, as it is considered to better prepare learners for real-life communication, where the entire conversation is more relevant than the analysis of specific utterances. Dogme considers that the learning of a skill is co-constructed within the interaction between the learner and the teacher. In this sense, teaching is a conversation between the two parties. As such, Dogme is seen to reflect Tharp's view that "to most truly teach, one must converse; to truly converse is to teach".
Revision to the concept of Dogme as being conversation driven
The immutability of conversation as one of the "pillars" of Dogme was called into question by Scott Thornbury himself in a 2020 interview. When asked what might happen should a student not wish to engage in classroom conversation, Thornbury suggested that saying Dogme had to be "conversation driven" might have been a "mistake": "I think one of the mistakes we made was making conversation part of the... 'three pillars', and what really should be said, is that Dogme is driven not by conversations, but by texts... texts meaning both written and spoken." Arguably, this suggestion that Dogme language teaching should be seen as being "text-driven", rather than "conversation-driven", caters to more reflective learners.
Materials light approach
The Dogme approach considers that student-produced material is preferable to published materials and textbooks, to the extent of inviting teachers to take a 'vow of chastity' (which Thornbury and Meddings have since pointed out was "tongue-in-cheek") and not use textbooks. Dogme teaching has therefore been criticized as not offering teachers the opportunity to use a complete range of materials and resources. However, there is a debate over the extent to which Dogme is actually anti-textbook or anti-technology. Meddings and Thornbury focus their critique of textbooks on their tendency to focus on grammar more than on communicative competency and also on the cultural biases often found in textbooks, especially those aimed at global markets. Indeed, Dogme can be seen as a pedagogy that is able to address the lack of availability or affordability of materials in many parts of the world. Proponents of a Dogme approach argue that they are not so much anti-materials as pro-learner, and thus align themselves with other forms of learner-centered instruction and critical pedagogy.
Emergent language
Dogme considers language learning to be a process where language emerges rather than one where it is acquired. Dogme shares this belief with other approaches to language education, such as task-based learning. Language is considered to emerge in two ways. Firstly classroom activities lead to collaborative communication amongst the students. Secondly, learners produce language that they were not necessarily taught. The teacher's role, in part, is to facilitate the emergence of language. However, Dogme does not see the teacher's role as merely to create the right conditions for language to emerge. The teacher must also encourage learners to engage with this new language to ensure learning takes place. The teacher can do this in a variety of ways, including rewarding, repeating and reviewing it. As language emerges rather than is acquired, there is no need to follow a syllabus that is externally set. Indeed, the content of the syllabus is covered (or 'uncovered') throughout the learning process.
Pedagogical foundations
First, Dogme is based not only on theories of language teaching and learning but also on progressive, critical, and humanist educational theories. Adopting the Dialogic model, Dogme encourages students and teachers to communicate in order to exchange ideas, which is the prerequisite for education to occur.
Dogme also has its roots in communicative language teaching (in fact Dogme sees itself as an attempt to restore the communicative aspect to communicative approaches). Dogme has been noted for its compatibility with reflective teaching and for its intention to "humanize the classroom through a radical pedagogy of dialogue". It also shares many qualities with task-based language learning and only differs with task-based learning in terms of methodology rather than philosophy. Research evidence for Dogme is limited but Thornbury argues that the similarities with task-based learning suggest that Dogme likely leads to similar results. An example is the findings that learners tend to interact, produce language and collaboratively co-construct their learning when engaged in communicative tasks.
Another significant milestone that contributed to the birth of Dogme was the introduction of emergentism. Dogme follows the same idea of language emerging through dialogue, which allows learners to enhance the effectiveness of communication. Later on, through language awareness-raising activities and focus-on-form tasks, learners can refine their interlanguage and move closer to the target language.
As a critical pedagogy
Although Thornbury notes that Dogme is not inherently seeking social change and therefore does not fulfill generally held criteria for a critical pedagogy, Dogme can be seen as critical in terms of its anti-establishment approach to language teaching.
Technology and web 2.0
Although Dogme teaching has been seen to be anti-technology, Thornbury maintains that he does not see Dogme as being opposed to technology as such, rather that the approach is critical of using technology that does not enable teaching that is both learner-centered and based upon authentic communication. Indeed, more recent attempts to map Dogme principles onto language learning with web 2.0 tools (under the term "Dogme 2.0") are considered evidence of Dogme being in transition and therefore of being compatible with new technology. However, although there is not a clear consensus among Dogme teachers on this issue (see discussions on the ELT Dogme Yahoo Group), there is a dominant view that the physical classroom will be preferable to attempts to substitute physical presence with communication via digital technology. Dogme can combine with different technological tools as our society is constantly changing. Teachers can combine the Dogme philosophy with other methods such as flipped classrooms or e-learning environments. However, what matters is that Dogme, as a critical pedagogy, is transformative and seeks social change.
Criticism
Dogme has come under criticism from a wide range of teachers and educators for its perceived rejection of both published textbooks and modern technology in language lessons. Furthermore, it has been suggested that the initial call for a 'vow of chastity' is unnecessarily purist, and that a weaker adoption of Dogme principles would allow teachers the freedom to choose resources according to the needs of a particular lesson. Maley also presents Dogme as an approach that "[increases] the constraints on teachers". Christensen notes that adoption of Dogme practices may face greater cultural challenges in countries outside of Europe, such as Japan. Questions have also been raised about the appropriateness of Dogme in low resource contexts and where students are preparing for examinations that have specific syllabi.
In general, the criticisms and concerns that Dogme encounters revolve around several major issues: the theoretical foundation of the conversation-driven perspective, the under-preparedness of lesson structure, and the potential pressure on teachers and students in various learning contexts. Dogme can challenge inexperienced teachers who have an inadequate pedagogical repertoire and limited access to resources. It may also face challenges regarding its applicability in classes of students with low proficiency levels, since such students may be unable to interact effectively with the teacher and peers in the target language.
Notes
References
Language-teaching methodology | 0.766638 | 0.970109 | 0.743723 |
Online youth radicalization | Online youth radicalization is the process by which a young individual or a group of people comes to adopt increasingly extreme political, social, or religious ideals and aspirations that reject or undermine the status quo, or that undermine contemporary ideas and expressions of a state in which they may or may not reside. Online youth radicalization can be either violent or non-violent.
The phenomenon, often referred to as "incitement to radicalization towards violent extremism" (or "violent radicalization") has grown in recent years, due to the Internet and social media in particular. In response to the increased attention on online "incitement to extremism and violence", attempts to prevent this phenomenon have created challenges for freedom of expression. These range from indiscriminate blocking, censorship over-reach (affecting both journalists and bloggers), and privacy intrusions—right through to the suppression or instrumentalization of media at the expense of independent credibility.
After terrorist attacks, political pressure is often put on social media companies to do more to prevent online radicalization of young people leading to violent extremism. UNESCO calls for "a policy that is constructed on the basis of facts and evidence, and not founded on hunches—or driven by panic and fearmongering."
Cyberspace here denotes the Internet as a network of networks, and social media as social networks that may combine various Internet platforms and applications for exchanging and publishing content online. Three aspects are relevant: the online production of radical (political, social, religious) resources or content, the presence of terrorist or radicalized groups within social networks, and the participation of young people in radical conversations.
Definitions and approaches
While there is no consensus definition, broadly speaking "radicalization" refers to a process in which individuals are moved towards beliefs deemed "extreme" by the status quo. Not all processes of radicalization, however, have acts of violence as either their goal or their outcome. The concern here is with radicalization processes that intentionally result in violence, particularly when that violence is terroristic in targeting civilians. Communications—online and offline—play a part in radicalization processes, along with events and how individuals interpret their life experiences.
Yet distinctions need to be made between communications that may be perceived as "extreme", but which do not rise to the level of constituting criminal incitement or recruitment, and those which advocate for violent acts to be committed. Although scholars emphasize different aspects, there are three main recurring characteristics in the way that they conceptualize specifically violent radicalization.
In this sense, the concept of violent radicalization (or radicalization leading to violent acts) covers an observable process involving: the individual's search for fundamental meaning, origin, and return to a root ideology; the polarization of the social space and the collective construction of a threatened ideal "us" against "them", in which others are dehumanized through a process of scapegoating; and a group's adoption of violence as a legitimate means for the expansion of root ideologies and related oppositional objectives.
Two major schools of theory can be discerned in the reception of the Internet and social media. These schools largely originate in pre-digital media, but are still being applied (usually implicitly) to the Internet era. The effects-based school perceives the Internet and social media as highly powerful means of communication and propaganda that over-determine other communication tools and processes. Social media are seen as highly effective drivers of propaganda, conspiracy theories and the rise of extremism through de-sensitization which leads to individuals accepting the use of violence. The uses-based school sheds doubts on the structuring effects of social media by empirically identifying only indirect and limited effects. In this paradigm, "the role of social media in violent radicalization and extremism constitutes a reflection of real offline social ruptures".
Algorithmic radicalization
Youth and violent extremism
Specificities of social media
The Internet has remained a medium for the spread of narratives, yet it has often been mistaken for a driver of violent extremism rather than the medium that it is. Social media has been used not only to bring people closer and to share thoughts and opinions, but also to spread false information. Additionally, the application of privacy rules has made it easier to close off niches and to advance the targeting of vulnerable individuals. These privacy rules, though welcome, have made the analysis needed for prevention challenging.
Chatrooms
Chatrooms can be embedded within most Internet-based media. Reports that have looked into the use of chatrooms by violent extremist groups describe these as the space where at-risk youth without previous exposure would be likely to come across radicalizing religious narratives. This falls in line with Sageman's emphasis on the role of chatrooms and forums, based on his distinction between websites as passive sources of news and chat rooms as active sources of interaction. According to Sageman, "networking is facilitated by discussion forums because they develop communication among followers of the same ideas (experiences, ideas, values), reinforce interpersonal relationships and provide information about actions (tactics, objectives, tutorials)". Chatrooms can also include spaces where extremists share information such as photos, videos, guides, and manuals. Discussion forums such as Reddit, 4chan, and 8chan have become focal points of internet meme-based and other forms of radicalization.
Facebook
Many extremist groups are ideologically and strategically anti-Facebook, but a strong presence still exists on this platform, either directly or through supporters. Facebook does not seem to be used for direct recruitment or planning, possibly because it has mechanisms of tracking and can link users with real places and specific times. Facebook appears to have been more often used by extremists as a decentralized center for the distribution of information and videos, or as a way to find like-minded supporters and show support, rather than for direct recruitment. This may be because young sympathizers can share information and images and create Facebook groups in a decentralized way.
The terrorist perpetrator of the Christchurch mosque shootings, in which 51 people died, live-streamed a video of the attacks on Facebook; the video was then extensively shared on social media. In the wake of this tragedy, Facebook and Twitter became more active in banning extremists from their platforms. Facebook pages associated with Future Now Australia have been removed from the platform, including their main page, "Stop the Mosques and Save Australia." On March 28, Facebook announced that it had banned white nationalist and white separatist content, along with white supremacist content.
Twitter
Micro-blogging sites like Twitter present more advantages for extremist groups because traceability of the identity and the source of the tweets are harder to achieve, thus increasing the communication potential for recruiters. Analyses of Twitter feeds generated by Islamist violent extremist groups show that they are mostly used for engaging with the opposition and the authorities, in what appear to be tweetclashes that mobilize the two sides, and also used for provocation. Through Twitter, extremists can easily comment publicly on international events or personalities in several languages, enabling the activists to be vocal and timely when mounting campaigns.
YouTube and other video platforms
YouTube has the advantage that the identities of people posting content are difficult to trace, while offering users the possibility to comment on and share content. Several researchers have conducted content analyses of YouTube and Facebook extremist discourses and video contents to identify the production features most used, including their modus operandi and intended effects. Studies that have focused on the rhetorical strategy of extremist groups show the multifaceted use of online resources by extremists: they produce "hypermedia seduction" via the use of visual motifs that are familiar to young people online, and they provide content in several languages, mostly Arabic, English and French, using subtitles or audio dubbing to increase the capacity to recruit youth across nations. These videos provide rich media messaging that combines nonverbal cues and vivid images of events that can evoke psychological and emotional responses as well as violent reactions. Terrorists capture their attacks on video and disseminate them through the Internet, communicating an image of effectiveness and success. Such videos in turn are used to mobilize and recruit members and sympathizers. Videos also serve as authentication and archive, as they preserve live footage of actual damage and validate terrorist performance acts. In 2018, researchers from the Data & Society thinktank identified the YouTube recommendation system as promoting a range of political positions, from mainstream libertarianism and conservatism to overt white nationalism.
Other areas of the social media scape: video games
Video games can be placed in a similar category as social media because they increasingly have their own forums, chatrooms and microblogging tools. Video games, widely used by young people, are under-researched in relation to extremism and violent radicalization. There is mostly anecdotal evidence that ISIS supporters have proposed modifying some games to spread propaganda (e.g. Grand Theft Auto V), have created mods that allow players to act as terrorists attacking Westerners (Arma 3), and have hijacked images and titles to allude to a notion of jihad (e.g. Call of Duty).
Selepack used qualitative textual analysis of hate-based video games found on right-wing religious supremacist groups’ websites to explore the extent to which they advocate violence. The results show that most hate groups were portrayed positively, and that the video games promoted extreme violence towards people represented as Black or Jewish. The games were often modified versions of classic video games in which the original enemies were replaced with religious, racial and/or ethnic minorities. Their main purpose is to indoctrinate players with white supremacist ideology and allow those who already hold racist ideologies to practice aggressive scripts toward minorities online, which may later be acted upon offline. Some experimental social psychologists have shown that cumulative exposure to violent video games can increase hostile expectations and aggressive behavior.
Uses of Internet and social media by extremist groups
The Internet and social media have numerous advantages for extremist groups using religion as part of a radicalization strategy. The advantages stem from the very nature of Internet and social media channels and the way they are used by extremist groups. These include communication channels that are not bound to national jurisdictions and are informal, large, cheap, decentralized, and anonymous. This allows terrorists to network across borders and to bypass time and space. Specifically, these channels provide networks of recruiters, working horizontally in all the countries they target due to the transborder nature of the Internet.
Weimann describes extremist groups’ use of the Internet and social media in eight process strategies: "psychological warfare, publicity and propaganda, data mining, fundraising, recruitment and mobilization, networking, information sharing and planning and coordination". Conway identifies five core terrorist uses of the Internet and social media: "information provision, financing, networking, recruitment and information gathering". The ones most relevant to social media and radicalization of young people are information provision, such as profiles of leaders, manifestos, publicity and propaganda, and recruitment. Some studies show that social media enables people to isolate themselves in an ideological niche by seeking and consuming only information consistent with their views (confirmation bias), as well as simultaneously self-identifying with geographically distant international groups, which creates a sense of community that transcends geographic borders. This ability to communicate can promote membership and identity quests faster and in more efficient ways than in the "real" social world.
While recruitment is not an instantaneous process, it is seen in the literature as a phase of radicalization, taking the process to a new level of identification and possible action. Indoctrination is easier post-recruitment and often occurs in specific virtual spaces where the extremist rhetoric is characterized by a clear distinction between "them" (described negatively) and "us" (described positively), and where violent actions are legitimized according to the principle of "no other option available". These advantages of the Internet and social media open up prospects for extremist groups by facilitating what used to be referred to as block recruitment and by substituting group decisions for individual decision-making.
Political radicalization
Reception and influence on youth
Bouzar, Caupenne and Sulayman (2014) present the results of interviews with 160 French families with radicalized (though not violent) children aged mainly between 15 and 21. The vast majority of the youth interviewed claimed to have been radicalized through the Internet. This held true regardless of their family characteristics and dynamics. The vast majority of the families (80%) did not follow any specific religious beliefs or practices and only 16% belonged to the working class.
Wojcieszak analysed cross-sectional and textual data obtained from respondents in neo-Nazi online discussion forums. The author found that "extremism increases with increased online participation, probably as a result of the informational and normative influences within the online groups". In addition, exposure to different parties/views offline that are dissimilar to the extremist group's values has in some instances reinforced radical beliefs online.
Many authors hypothesize potential causation by associating online radicalization with external factors such as the search for identity and meaning, growing inequalities in European and other societies, unemployment and fewer opportunities for development (especially for minority youth), and the exclusion, discrimination and inequality that are heavily exploited in extremist discourses.
Social media and violent radicalization
In the Arab World
The analysis of the profiles of researchers and publications on violent radicalization from the Arab world reveals the prominence of specialists on Islamist movements. They are, most often, humanities and social science researchers, and some are specialists in media and public opinion, international relations, or even security. Another specificity of research on violent radicalization in the Arabic-speaking region is the involvement of religious researchers in this field. Their main contribution is part of a state strategy to counter the interpretation of the faith advocated by violent radical groups. In this logic, the terms radicalization or jihadism are replaced by the term terrorist in reference to these groups, whereas in other regions experts use terms such as jihadist Salafism, jihadism or violent radicalization. There is a clear tendency among most Arabic-speaking researchers to avoid the use of the word Islam and its semantic field to denote violent radical groups. This is also why researchers from the region prefer to use the Arabic acronym Daesh or the term the State Organization instead of the ‘Islamic State.’ Most research published from the Arab world does not focus on the relation between violent radicalization and the Internet or social media, nor does it evaluate the effect of prevention or intervention cyberinitiatives.
Arab youth are major consumers of social media networks, especially Facebook, which is among the ten most used sites by Arab Internet users, a tendency that quickly found its translation into the Arab political realm. According to a study by the Mohamed Ibn Rachid Faculty for governance in the United Arab Emirates, the number of Facebook users in 22 Arab countries increased from 54.5 million in 2013 to 81.3 million in 2014, with a majority being young people. The literature in the region reveals the role played by social networks, especially Facebook and Twitter, as platforms of collective expression for Arab youth on current issues, conflicts and wars (the Gaza situation in particular). In Iraq, for example, young Internet users and bloggers launched several campaigns on Facebook and Twitter at the beginning of military operations to free the major cities occupied by ISIS (Fallujah and Mosul). In Morocco, other initiatives with the same objective were launched, such as the one by Hamzah al-Zabadi on Facebook ( مغاربة_ضد_داعش#; Moroccans against Daesh), which consisted of sharing all kinds of content (images, texts, etc.) to contradict and challenge ISIS's narratives. The involvement of civil society actors on the web in the fight against terrorism and violent radicalization in the Arab region remains modest for many reasons, including the lack of media policies dedicated to this struggle.
In Asia
Researchers in Asia have developed a complex understanding of radicalization as being deeply connected to psychosocial and economic grievances such as poverty and unemployment, marginalization through illiteracy and lack of education, admiration for charismatic leaders, pursuit of social acceptability, and psychological trauma. These factors are considered by authors to facilitate online radicalization-oriented recruitment, especially among young people, who are more vulnerable and spend more time online.
A 2016 report by "We Are Social" revealed that East Asia, Southeast Asia, and North America were the first, second, and third largest social media markets worldwide respectively. According to the same report, Facebook and Facebook Messenger are the predominant social and communications tools, followed by Twitter, Line and Skype. China is the notable exception as Facebook Messenger is outpaced by far by Chinese social media tools. China presents a very different profile from most countries in its mainstream social media and networks. American platforms such as Google, Yahoo!, Facebook, Twitter and YouTube have very little market penetration due to state restrictions and the strong monopoly of homegrown search engines and Internet platforms in Chinese language.
There is rising interest among Chinese researchers in examining the relationship between social media and violent radicalization. Research into violent radicalization and terrorism in China is mainly focused on radicalization in Xinjiang. This could be linked to the fact that most of the recent terrorist attacks in China were perpetrated not by local residents but by outside violent extremist organizations that seek to separate the Xinjiang area from China. Terrorist organizations spread their messages via TV, radio and the Internet. Though there is no empirical evidence linking youth radicalization to online social media, the anonymity and transborder capacity of such media is seen as a "support for organized terrorist propaganda". The Chinese government has responded to terrorist attacks by taking down sites and blocking and filtering content. In turn, the Chinese government also uses social media for messaging against terrorism.
An estimated 76 million Indonesians connect regularly on Facebook, making the nation the fourth largest Facebook user base in the world, after India, the United States and Brazil. Indonesia is also the fifth largest user of Twitter, after the United States, Brazil, Japan and the United Kingdom. The Institute for Policy Analysis of Conflict (IPAC) examines how Indonesian extremists use Facebook, Twitter and various mobile phone applications such as WhatsApp and Telegram. Social media use by extremists in Indonesia is increasing. They use social media, such as Facebook and Twitter, to communicate with young people, to train and to fundraise online. Recruitment is done through online games, propaganda videos on YouTube and calls to purchase weapons. The proliferation of ISIS propaganda via individual Twitter accounts has raised concerns about the possibility of "lone actor" attacks. That being said, the report points out that such attacks are extremely rare in Indonesia.
In Africa
There is little contemporary research on online radicalization in Sub-Saharan Africa. However, the region hosts a powerful extremist group: "Boko Haram", whose formal name since 2002 has been Jama’atu Ahlu-Sunna wal Jihad Adda’wa Li («Group of the People of Sunnah for Preaching and Jihad»), and which has pledged allegiance to Daesh. The network is less resourced and financed than Daesh, but it seems to have entered a new era of communication through social media networks, especially since its allegiance to Daesh. To spread its principles, this terrorist group uses the Internet and adapts Daesh communication strategies to the sub-Saharan African context, spreading its propaganda (also in French and English) with more sophisticated videos. By its presence on the most used digital networks (Twitter, Instagram), Boko Haram breaks with traditional forms of communication in the region, such as propaganda videos sent to agencies on flash drives or CD-ROMs. Video content analyses have also shown a major shift from long monologues by the leader Abubakar Shekau, with poor editing and translation, to messages and videos that have increased the group's attractiveness among sub-Saharan youth. Today, Boko Haram runs a real communications agency called «al-Urwa Wuqta» (literally «the most trustworthy», «the most reliable way»). Moreover, the group multiplies its activities on Twitter, especially via smartphones, as well as through YouTube news channels. Most tweets and comments by the group's supporters denounce the Nigerian government and call for support for the Boko Haram movement. The tweets are written first in Arabic and then translated and passed on into English and French, which reflects the group's desire to place itself in the context of what it sees as global jihad. In a 2015 study, researchers showed that Boko Haram-related tweets also include rejection of the movement by non-members of the organisation.
In Kenya, and by extension the Horn of Africa, online radicalization and recruitment processes depend on narrative formation and dissemination. Other than one documented case of purely online radicalization and recruitment, however, evidence shows that the process is cyclic, involving an online–offline–online process that advances depending on the level of socialization and the resonance factors shared with vulnerable populations. A recent study from Scofield Associates shows that narrative formation depends on three major attributes: a believable story, actionable plans for those who encounter it, and a religious cover. The third attribute supports the persuasion process and adds to the global whole. The persuasion process plays out very well with an online platform or audience.
Online prevention initiatives
Alternative narratives
Van Eerten, Doosje, Konijn, De Graaf, and De Goede suggest that counter or alternative narratives could be a promising prevention strategy. Some researchers argue that a strong alternative narrative to violent jihadist groups is to convey the message that they mostly harm Muslims. During the last decade, the United States government has set up two online programs against radicalization designed to counter anti-American propaganda and misinformation from al-Qaeda or the Islamic state. These programs seek to win the "war of ideas" by countering self-styled jihadist rhetoric.
Private sector counter-initiatives involve the YouTube Creators for Change with young "ambassadors" mandated to "drive greater awareness and foster productive dialogue around social issues through content creation and speaking engagements"; the "redirectmethod.org" pilot initiative to use search queries in order to direct vulnerable young people to online videos of citizen testimonies, on-the-ground reports, and religious debates that debunk narratives used for violent recruitment. The initiative avoids "government-produced content and newly or custom created material, using only existing and compelling YouTube content".
Several governments are opting to invest in primary prevention through education of the public at large, and of the young public in particular, via various "inoculatory" tactics that can be grouped under the broad label of Media and Information Literacy (MIL). Based on knowledge about the use of MIL in other domains, this approach can be seen, inter alia, as a long-term comprehensive preventive strategy for reducing the appeal of violent radicalization.
Media and information literacy
MIL has a long tradition of dealing with harmful content and violent representations, including propaganda. In its early history, MIL was mostly put in place to fight misinformation (particularly in advertising) by developing critical skills about the media. By the 1980s, MIL also introduced cultural and creative skills for using the media in an empowering way, with active pedagogies. Since the year 2000, MIL has enlarged its definition of media to incorporate the Internet and social media, adding issues related to ethical uses of online media to the traditional debates over harmful content and harmful behavior, and aligning them more with perspectives that consider the gratifications of media users.
International human rights standards
References
Sources
Information and communications technology
Social media
Youth
Radicalization
Expressivity (genetics)
In genetics, expressivity is the degree to which a phenotype is expressed by individuals having a particular genotype. Alternatively, it may refer to the expression of a particular gene by individuals having a certain phenotype. Expressivity is related to the intensity of a given phenotype; it differs from penetrance, which refers to the proportion of individuals with a particular genotype that share the same phenotype.
Variable expressivity
Variable expressivity refers to the phenomenon by which individuals with a shared genotype exhibit varying phenotypes. This can be further described as a spectrum of associated traits that can range in size, colour, intensity, and so forth. Variable expressivity can be seen in plants and animals, such as differences in hair colour, leaf size, and severity of diseases.
Mechanisms influencing expressivity
This variation in expression can be affected by modifier genes, epigenetic factors or the environment.
Modifier genes can alter the expression of other genes in either an additive or a multiplicative way, meaning that the observed phenotype can result from the effects of two different alleles (gene variants) being summed or multiplied. A reduction in expression may also occur when the primary locus, where the gene is located, is affected.
Epigenetic factors are heritable changes in the chromatin accessibility that affect the gene expression. Epigenetic factors can include:
Cis-regulatory elements, which are regions of non-coding DNA that regulate transcription of genes, such as promoters or enhancers.
Trans-regulatory elements, which are regulatory proteins, such as transcription factors (TFs) that bind to DNA to regulate gene expression.
Histone modifications, which regulate the accessibility of chromatin for gene transcription.
Chromatin variants, which are different states of chromatin.
Genomic imprinting, which determines whether some genes inherited from the mother and father get expressed.
The expressivity of a gene can be influenced by environmental conditions. For example, pigmentation in the fur of Himalayan rabbits is determined by the C gene, the activity of which is dependent on temperature. When genetically identical rabbits are reared, fur that reaches a temperature higher than 35 °C develops as white, while fur that stays at a temperature between 15 and 25 °C develops as black.
Variable expressivity in plants and animals
Plants
Expressivity is commonly seen in plants and can be regulated by complex interactions between the environment, hormonal signalling, and genetics. An example of expressivity in plants caused by a rare gene is the variation in the number of branches. Initially identified in sorghum plants, this rare gene is called the Sorghum bicolor Axillary Branched Mutant (SbABM). Over several years of studies on SbABM in the rabi sorghum plant, researchers found that the progeny of the plants ranged from having 0 to 33 branches, even though they all had the same SbABM genotype.
Animals
A well-known example is polydactyly in Hemingway’s cats, which is the presence of extra toes. The number of extra toes can differ between cats, due to variable expressivity of the ZRS gene in the feline chromosome A2. ZRS enhances the activity of the SHH gene, which is involved in limb development, and this has been shown to cause extra toes. Although polydactyly is caused by an autosomal dominant allele, the variable expressivity (number of toes) of polydactyly in cats may be influenced by the tissues surrounding the region that would develop into toes.
Clinical application
Some common syndromes that involve phenotypic variability due to expressivity include Marfan syndrome, Van der Woude syndrome, and neurofibromatosis.
The characteristics of Marfan syndrome vary widely among individuals. The syndrome affects connective tissue in the body and has a spectrum of symptoms ranging from mild bone and joint involvement to severe newborn forms and cardiovascular disease. This diversity in symptoms is a result of variable expressivity of the FBN1 gene found on chromosome 15 (see figure 2). The gene product is involved in the proper assembly of microfibrils, which are structures found in connective tissues that provide support and elasticity. In Marfan patients, different levels of FBN1 mRNA and protein expression were observed. These varying levels were not associated with either sex or age. Lower levels of mRNA expression were associated with a higher risk for ectopia lentis, the displacement of the crystalline lens of the eye, and pectus deformity, an abnormality of the chest wall, indicating that variation in expression could be due to levels of expressivity and not genotype.
Van der Woude syndrome is a condition that affects the development of the face, specifically causing a cleft lip, cleft palate or both (see Figure 3). Carriers of the rare allele can also have pits near the centre of the lower lip, which may appear wet due to the presence of salivary glands. The resulting phenotypes vary significantly among individuals. This variation can range so broadly that a study published by the Department of Orthodontics at the University of Athens showed that some individuals were unaware that they possessed the genotype for this condition until they were tested.
Neurofibromatosis (NF1), also known as Von Recklinghausen disease, is a genetic disorder caused by a rare mutation in the neurofibromin gene (NF1) on chromosome 17. This loss-of-function mutation in the tumor suppressor gene can cause tumors on the nerves called neurofibromas, which appear as small bumps under the skin. It is postulated that the phenotypic variation is a result of genetic modifiers.
Some hemoglobinopathies (diseases of the blood) like Sickle Cell Anemia exist on a spectrum. Sickle Cell Anemia is an autosomal recessive, prototypical monogenic Mendelian disease, meaning that the disease follows Mendelian inheritance and is traced back to a single gene. Individuals with Sickle Cell Anemia present different severities of symptoms. Fetal Hemoglobin (HbF) concentration and the presence of alpha-thalassemia, a genetic blood disease in which the alpha globin subunit of the hemoglobin protein is underproduced, are thought to be major contributors to the genetic modification leading to the variable expressivity of hemolysis (destruction of red blood cells) and increasing the severity of the disease.
See also
Anticipation
Pleiotropy
Mendelian inheritance
Genetic heterogeneity
Haploinsufficiency
References
Further reading
Population genetics
Polyculturalism
Polyculturalism is an ideological approach to the consequences of intercultural engagements within a geographical area which emphasises similarities between, and the enduring interconnectedness of, groups which self-identify as distinct, thus blurring the boundaries which may be perceived by members of those groups.
The concept of polyculturalism was first proposed by Robin Kelley and Vijay Prashad. It differs from multiculturalism which instead emphasises the separateness of the identities of self-identifying cultural groups with an aim of preserving and celebrating their differences in spite of interactions between them. Supporters of polyculturalism oppose multiculturalism, arguing that the latter's emphasis on difference and separateness is divisive and harmful to social cohesion.
Polyculturalism was the subject of the 2001 book Everybody Was Kung Fu Fighting: Afro-Asian Connections and the Myth of Cultural Purity by Vijay Prashad.
Comparison of cultural ideologies
Like advocates of multiculturalism, proponents of polyculturalism encourage individuals to learn about different cultures, especially those they may come into contact with in their own areas. However, whereas multiculturalism advocates toleration between members of distinctly different cultural groups, polyculturalism is less rigid, acknowledging that individuals shape their own identities and may choose to express their culture differently from their ancestors, either by adding elements of other cultures to it or by eliminating aspects of it.
Polyculturalism views the concept of race as a social construct with no scientific basis; however, it recognises the concept of ethnicity, considering ethno-nationalism a barrier which must be transcended in the pursuit of a dynamic community culture.
Critics of multiculturalism argue that it entrenches identity politics, while polyculturalism aims to forge a common new identity.
Polyculturalism acknowledges that cultures are dynamic, interactive and impure while multiculturalism treats them as static, isolated and complete.
Examples of polycultural states
Revolutionary France
The different successive polities in France over the last two centuries have tried various policies in intercultural relations, creating much research material for Citizenship Studies.
The French Revolution was ground-breaking in its de-emphasis of religious and other cultural distinctions. On 27 August 1789, the National Constituent Assembly issued the Declaration of the Rights of Man and of the Citizen. On 27 September 1791, the National Constituent Assembly voted to give France’s Jewish population equal rights under the law. That a long-scapegoated minority were to be recognised simply as citizens regardless of their religion was unprecedented in Modern Europe.
Debates about citizenship, equality, and representative politics among intellectuals defined the Age of Enlightenment, and played a role in the shaping of political thinking in early America and France on the eve of the Revolution in 1789. Radical components of the Republican movement in France coalesced around the Jacobin Club in early 1790. One of their objectives was:
To work for the establishment and strengthening of the constitution in accordance with the spirit of the preamble (that is, of respect for legally constituted authority and the Declaration of the Rights of Man and of the Citizen).
The political views of the Jacobin Movement were rooted in Jean-Jacques Rousseau's notion of the social contract. The promotion and inculcation of civic values and the sense of nationhood became consolidated in France. A common identity for the peoples within France’s natural boundaries of the Alps, Jura mountains, Pyrenees, the river Rhine and the Atlantic coast was successfully cultivated and propagated. France’s regional identities were presented as parts of a wider whole in an effort to create common bonds across a much larger territorial area.
Socialist Yugoslavia
In the new socialist countries that arose in the 20th century, religious and other cultural distinctions were played down in an effort to promote a new shared common citizenship. The capacity to satisfactorily facilitate cultural autonomy in poly-ethnic societies without reinforcing divisions and thereby weakening the state had exercised socialist intellectuals from as far back as Otto Bauer in his 1907 book "The Nationalities Question and Social Democracy".
Edvard Kardelj, the constitutional architect of Socialist Federal Republic of Yugoslavia, had set out to delicately de-escalate the often fractious 'national question' in the Balkans. The 1946 Yugoslav Constitution was heavily influenced by the 1936 Constitution of the Soviet Union, another poly-ethnic socialist state. Kardelj pointed out: "For us the model was the Soviet Constitution, since the Soviet federation is the most positive example of the solution of relations between peoples in the history of Mankind."
The development of a Yugoslav socialist consciousness was further clarified in the 1953 Constitutional Law, which declared that "all power in the FPRY belongs to the working people". The emphasis on class was an obvious effort to supersede individual ethnic and religious differences. The constitutional changes were explained by the gathering development of a new "unified Yugoslav community".
In the practice of socialist self-management, the establishment of a powerful body like the Council of Producers, instead of a Council of Nationalities, appeared to confirm the post-nationalist atmosphere the populations of Yugoslavia had entered.
The developing move from a Yugoslav citizenship to a Yugoslav ethnicity was greatly aided by the mutual comprehensibility of the languages of many of the ethnic groups in the region. Even after Yugoslavia was dismantled, there is much agreement about this shared linguistic heritage, as can be seen in the Declaration on the Common Language.
See also
Civic engagement
Classical republicanism
Consociationalism
New Soviet man
Soviet people
Social engineering (political science)
Yugoslavs
Czechoslovaks
References
Ethnicity
Cultural geography
Politics and race
Sociology of culture
Social theories
Civil rights and liberties
Citizenship
Government and personhood
National identity
Political concepts
Adaptation (arts)
An adaptation is a transfer of a work of art from one style, culture or medium to another.
Some common examples are:
Film adaptation, a story from another work, adapted into a film (it may be a novel, non-fiction like journalism, autobiography, comic books, scriptures, plays or historical sources).
Literary adaptation, a story from a literary source, adapted into another work. A novelization is a story from another work, adapted into a novel.
Theatrical adaptation, a story from another work, adapted into a play.
Video game adaptation, a story from a video game, adapted into media (e.g. film, anime and manga, and television)
Types of adaptation
There is no end to potential media involved in adaptation. Adaptation might be seen as a special case of intertextuality or intermediality, which involves the practice of transcoding (changing the code or 'language' used in a medium) as well as the assimilation of a work of art to other cultural, linguistic, semiotic, aesthetic or other norms. Recent approaches to the expanding field of adaptation studies reflect this expansion of perspective. Adaptation occurs as a special case of intertextual and intermedial exchange, and the copy-paste culture of digital technologies has produced "new intertextual forms engendered by emerging technologies—mashups, remixes, reboots, samplings, remodelings, transformations" that "further develop the impulse to adapt and appropriate, and the ways in which they challenge the theory and practice of adaptation and appropriation."
History of adaptation
The practice of adaptation was common in ancient Greek culture, for instance in adapting myths and narratives for the stage (Aeschylus', Sophocles' and Euripides' adaptations of Homer). William Shakespeare was an arch adaptor, as nearly all of his plays are heavily dependent on pre-existing sources. Prior to Romantic notions of originality, copying classic authors was seen as a key aesthetic practice in Western culture. This neoclassical paradigm was expressed by Alexander Pope, who equated the copying of Homer with copying nature in An Essay on Criticism:
"When first young Maro in his boundless mind
A work t'outlast immortal Rome designed,
Perhaps he seemed above the critic's law,
And but from Nature's fountains scorned to draw;
But when to examine every part he came,
Nature and Homer were, he found, the same.
Convinced, amazed, he checks the bold design,
And rules as strict his labored work confine
As if the Stagirite o'erlooked each line.
Learn hence for ancient rules a just esteem;
To copy Nature is to copy them."
According to Pope in An Essay on Criticism, the task of a writer was to vary existing ideas: "What oft was thought, but ne'er so well expressed;". In the 19th century, many European nations sought to re-discover and adapt medieval narratives that might be harnessed to various kinds of nationalist causes.
See also
Appropriation (art)
Intermedia
Intertextuality
Remediation
Remix culture
Transmedia storytelling
References
Further reading
Cardwell, Sarah. 'Adaptation Revisited: Television and the Classic Novel'. Manchester: MUP, 2021.
Cutchins, Dennis, Katja Krebs, Eckart Voigts (eds.). The Routledge Companion to Adaptation. London: Routledge, 2018.
Elliott, Kamilla. Theorizing Adaptation. Oxford: OUP, 2020.
Hutcheon, Linda, with Siobhan O'Flynn. A Theory of Adaptation. 2nd ed. London: Routledge, 2013.
Leitch, Thomas (ed.) Oxford Handbook of Adaptation Studies. Oxford: OUP, 2017.
Murray, Simone. The Adaptation Industry: The Cultural Economy of Contemporary Adaptation. New York: Routledge, 2012.
Sanders, Julie. Adaptation and Appropriation. London: Routledge, 2006.
The arts
Legality

Legality, in respect of an act, agreement, or contract, is the state of being consistent with the law or of being lawful or unlawful in a given jurisdiction, and the construct of power.
Merriam-Webster defines legality as "1: attachment to or observance of law. 2: the quality or state of being legal."
The definitions at Businessdictionary.com, thelawdictionary.org, and mylawdictionary.org explain the concept of attachment to law as an "implied warranty that an act, agreement, or contract strictly adheres to the statutes of a particular jurisdiction. For example, in insurance contracts it is assumed that all risks covered under the policy are legal ventures."
Definitions
Vicki Schultz states that we collectively have a shared knowledge about most concepts. How we interpret the reality of our actual understanding of a concept manifests itself through the different individual narratives that we tell about the origins and meanings of a particular concept. The difference in narratives, about the same set of facts, is what divides us. An individual has the ability to frame, or understand, something very differently than the next person. Evidence does not always lead to a clear attribution of the specific cause or meaning of an issue – meanings are instead derived through narratives. Reality, and the facts that surround it, are personally subjective and laden with assumptions based on clearly stated facts. Anna-Maria Marshall states that this shift in framing happens because our perceptions depend "on new information and experiences"; this very idea is the basis of Ewick and Silbey's definition of legality – our everyday experiences shape our understanding of the law.
Ewick and Silbey define "legality" more broadly, as "those meanings, sources of authority, and cultural practices that are in some sense legal although not necessarily approved or acknowledged by official law. The concept of legality offers the opportunity to consider how, where, and with what effect law is produced in and through commonplace social interactions. ... How do our roles and statuses, our relationships, our obligations, prerogatives and responsibilities, our identities and our behaviours bear the imprint of law."
In a paper on Normative Phenomena of Morality, Ethics and Legality, legality is defined, taking the state's role into account, as "The system of laws and regulations of right and wrong behavior that are enforceable by the state (federal, state, or local governmental body in the U.S.) through the exercise of its policing powers and judicial process, with the threat and use of penalties, including its monopoly on the right to use physical violence."
Other related concepts
The rule of law provides for the availability of rules, laws, and legal mechanisms to implement them. The principle of legality checks for the availability and quality of the laws. Legality checks whether certain behaviour is in accordance with the law. The concept of legitimacy of law looks to the fairness or acceptability of the process of implementing the law.
The quality of being legal and observance of the law may pertain to lawfulness, i.e. being consistent with the law, or it may be discussed under the principle of legality, or as legal legitimacy.
Legality of purpose
In contract law, legality of purpose is required of every enforceable contract. One cannot validate or enforce a contract to perform an activity with an unlawful purpose.
Constitutional legality
The principle of legality can be affected in different ways by different constitutional models. In the United States, laws may not violate the stated provisions of the United States Constitution which includes a prohibition on retrospective laws. In the United Kingdom under the doctrine of Parliamentary sovereignty, the legislature can (in theory) pass such retrospective laws as it sees fit, though article 7 of the European Convention on Human Rights, which has legal force in Britain, forbids conviction for a crime which was not illegal at the time it was committed. Article 7 has already had an effect in a number of cases in the British courts.
In contrast, many written constitutions prohibit the creation of retroactive (normally criminal) laws. However, the possibility of statutes being struck down creates its own problems. It is clearly more difficult to ascertain what is a valid statute when any number of statutes may have constitutional question marks hanging over them. When a statute is declared unconstitutional, the actions of public authorities and private individuals, which were legal under the invalidated statute, are retrospectively tainted with illegality. Such a result could not occur under parliamentary sovereignty (or at least not before Factortame), as a statute was law and its validity could not be questioned in any court.
Principle of legality
The principle that no one be convicted of a crime without a written legal text which clearly describes the crime is widely accepted and codified in modern democratic states as a basic requirement of the rule of law. It is known in Latin as nullum crimen, nulla poena sine praevia lege poenali.
International law
Legality, in its criminal aspect, is a principle of international human rights law, and is incorporated into the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the European Convention on Human Rights. However, the imposition of penalties for offences illegal under international law, or criminal according to "the general principles of law recognized by civilized nations", is normally excluded from its ambit. As such, trial and punishment for genocide, war crimes and crimes against humanity do not breach international law.
There is some debate about whether this is really a true exception. Some would argue that it is a derogation or – perhaps somewhat more harshly – an infringement of the principle of legality, while others would argue that crimes such as genocide are contrary to natural law and as such are always illegal and always have been, so that imposing punishment for them is always legitimate. The exception and the natural-law justification for it can be seen as an attempt to justify the Nuremberg trials and the trial of Adolf Eichmann, both of which were criticized for applying retrospective criminal sanctions.
The territorial principle, generally confining national jurisdiction to a nation’s borders, has been expanded to accommodate extraterritorial, national interest.
In criminal law, the principle of legality assures the primacy of law in all criminal proceedings.
Bibliography
Kelsen, Hans. General Theory of Law and State (Cambridge, Massachusetts: Harvard University Press, c. 1945) (Cambridge, Massachusetts: Harvard University Press, 1949) (New York: Russell & Russell, 1961) (New Brunswick, New Jersey: Transaction Publishers, c. 2006).
Kelsen, Hans. Principles of international law (New York: Rinehart, 1952) (New York: Holt, Rinehart & Winston, 1966) (Clark, New Jersey: Lawbook Exchange, 2003).
Slaughter, Anne-Marie. A new world order (Princeton: Princeton University Press, c. 2004).
Nye, Joseph S. Soft Power (New York: Public Affairs, c. 2004).
de Sousa Santos, Boaventura, and Rodríguez-Garavito, César A., eds. Law and globalization from below: towards a cosmopolitan legality (Cambridge, UK: Cambridge University Press, 2005).
Marsh, James L. Unjust legality: a critique of Habermas's philosophy of law (Lanham: Rowman & Littlefield, c. 2001).
Sarat, Austin, et al., eds. The limits of law (Stanford: Stanford University Press, 2005).
Milano, Enrico. Unlawful territorial situations in international law: reconciling effectiveness, legality and legitimacy (Leiden ; Boston: M. Nijhoff, c. 2006).
Ackerman, Bruce, ed. Bush v. Gore: the question of legitimacy (New Haven: Yale University Press, c. 2002).
Hallevy, Gabriel. A Modern Treatise on the Principle of Legality in Criminal Law (Heidelberg: Springer-Heidelberg, c. 2010).
See also
Analytical jurisprudence
Legal positivism
Principle of legality in French criminal law
Sources of law
Nullum crimen, nulla poena sine praevia lege poenali
Socialist Legality
Agent of Record
References
External links
Legality, generally:
Cornell LII "Jurisprudence"
Cornell LII "International law"
YaleLS Avalon Project, "Documents in Law, History & Diplomacy" (many original documents, fulltexts online, defining The State and Legality)
Legality in actual operation in International Law, examples:
International Court of Justice
International Criminal Court
International Criminal Tribunal for the Former Yugoslavia
International Criminal Tribunal for Rwanda
International Tribunal for the Law of the Sea
European Court of Human Rights
Decisions of various international judicial and quasi-judicial bodies, including PCIJ, CACJ, CAT, CEDAW etc.
Legality in actual operation in national Constitutional Law, examples:
US Supreme Court
Conseil Constitutionnel, France
Law Lords, UK
Supreme Court, UK
Legal doctrines and principles
Rōnin (student)

In contemporary Japanese slang, a rōnin is a student who has graduated from middle school or high school but has failed to achieve admission to a desired school, or even to any school at the next level, and consequently is studying outside of the school system for admission in the next year. Rōnin may study at a yobikō. The equivalent term in Korean education is jaesusaeng.
Etymology
The term rōnin is colloquial while the word is more formal. The term derives from their having no school to attend, as a rōnin, a masterless samurai, had no leader to serve.
Rōnin may study at a yobikō or other shadow or supplementary education institutions to aid their studying when retaking their admission examinations.
Sometimes the term 二浪 (short form) or 二年浪人 (full form; 二年 means "second year") is used for a student who has failed the exams twice.
History
Society in Japan has developed to value social status, and changes in social status are believed to only be possible through the attendance of prestigious universities, such as the University of Tokyo and Kyoto University. The large number of individuals opting to become rōnin reflects their desire to enhance their chances of gaining admission to such prestigious institutions. This additional study period enables rōnin to meet societal and parental expectations related to educational achievement.
Parental pressure plays a large part in influencing the need for academic success in Japanese students, since they want their children to succeed. This led to students becoming rōnin and attending shadow or supplementary education institutions like yobikō in order to improve their chances of attending prestigious universities. Students usually choose to become rōnin based on the expectations placed on them by their parents and greater society. A rōnin may also be influenced to choose this path because of the economic benefits that having a prestigious education will grant them later on.
The Japanese education system is very competitive due to the existence of rōnin, and the entrance exams to enter these universities are rigorous. There is now a concept called "examination hell", which refers to the intense periods of studying that rōnin undergo in preparation for university entrance exams. Being a rōnin for too long has negative impacts, specifically regarding social expectations on the appropriate age to finalize one's education and begin to enter the workforce.

Rōnin significantly influence the quality of education and universities in Japan. Their rigorous preparation not only enhances their personal academic performance and prospects of attending prestigious universities but also raises the average academic standards among all applicants. This elevation in applicants' academic standards boosts the reputation of each university, thereby increasing the overall appeal and competitiveness of higher education in Japan.
Social context

Rōnin need to balance studying, gaining access to prestigious education, and boosting their chances at work opportunities, all while meeting the age milestones that society normalizes.
The process of enduring examination hell, taking the entrance exam, and waiting for the results leads to high bouts of emotion when entrance to university is finally achieved. Rōnin who studied more are also more likely to have increased their academic knowledge, so those attending these universities are at a higher educational standard than previously.
However, eventually the benefits to being a rōnin end. This is because there is only so much preparation to be done prior to entrance exams, even with shadow education.
A large majority of rōnin participate in supplementary education in order to benefit their studying and preparation for entrance exams. These institutions have been found to disproportionately benefit higher socioeconomic class students for many reasons. One is that access to these institutions is limited to those with the financial resources to access them. Graduates from upper-class high schools also tend to choose becoming rōnin rather than enter less prestigious universities since they know they will have the support of supplementary education to guide their studies.
There are higher rates of rōnin from upper-class high schools for this reason and since their families have better access to information about supplementary education options. Rōnin from different socioeconomic classes have similar opinions on what the negatives of being rōnin are, but differ greatly on the positive aspects as a result of the opportunities higher-class rōnin know they will be able to access for support.
A study of rōnin from five Japanese high schools (labeled A through E by prestige, with A the highest and E the lowest) recorded how many attended different ranks of supplementary education classes in a yobikō. The data showed that rōnin from more prestigious high schools, which are typically attended by the upper class, more often attended higher-ranked supplementary education classes.
In popular culture

Rōnin appear frequently in fiction and Japanese popular culture. As an example, the manga and anime series Love Hina features three main characters, Keitaro Urashima, Naru Narusegawa, and Mutsumi Otohime, who are described as rōnin throughout most of the series. In the manga and anime series Chobits, the protagonist, Hideki Motosuwa, is a rōnin studying at a preparatory school to get into college. Maison Ikkoku also features a rōnin as its main character; the series centers around his studying for exams as he is distracted by others that he lives with. The protagonist of Sekirei, Minato Sahashi, is also a rōnin. Kanamemo's Hinata Azuma is a rōnin as well, due to her love of gambling and money-making, activities which hinder her studies.
See also
Bǔ kè
Dek siw
Hikikomori
NEET
References
Academic pressure in East Asian culture
Japanese words and phrases
Testing and exams in Japan
Sacralism

Sacralism is the confluence of church and state wherein one is called upon to change the other. It also denotes a perspective that views church and state as tied together instead of as separate entities, so that people within a geographical and political region are considered members of the dominant ecclesiastical institution.
Concept
A Latin saying that has often been used to describe the principle of sacralism is cuius regio, eius religio, or "whose realm, his religion". The idea was that the ruler of each individual area would decide the religion of those under his control based upon his own faith. Another conceptualization refers to sacralism as a view that each fundamental relation that one occupies should be seen under the aspect of the sacred.
A critical description cites sacralism as the use of the concept of "the will of God" to legitimate oppression and violence. Some sources consider it a form of fundamentalism.
Examples
Christian sacralism is, according to Verduin, the hybrid product that resulted from the colossal change known as the Constantinian shift, which began early in the fourth century AD, when Christianity was granted official tolerance in the Roman Empire by the Emperor Constantine, and was completed by the Emperor Theodosius's declaration in 392 outlawing paganism and making Christianity the official religion of the Empire. The so-called Constantinian formula was described as a system that required the rule-right expressed in the State to coalesce with the rule-right that comes to expression in the Church. This resulted in the so-called age of Christian sacralism, when Roman citizens who did not necessarily subscribe to the faith were coerced into it for fear of social discrimination and outright persecution. It is suggested that Christian sacralism still had pagan roots and that theologians merely embraced it using precepts from the Old Testament and New Testament. For instance, theologians established that Christ authorized the use of two swords: the sword of the clergy, which is the sword of the Spirit; and the sword of the soldiers of the state, or the sword of steel. Christian sacralism lasted until the Reformation, when Christians gradually moved away from sacralism.
Sacralism is common in countries predominantly inhabited by followers of Islam. These countries tend to commingle religion with politics and law, with the result viewed by Muslims as a compact and positive unity of all aspects of life.
Sacralism has also been applied in the area of international relations. There are modernists, for example, who approach world affairs from a range of analytical languages that have their origin within European Christendom. Thinkers who subscribe to the sacralist view also argue that the whole mind is capable of knowing and have put modernism in the context of their faith. The idea is that aspects of modernism have arisen in a particular sacral environment.
See also
Theocracy
Separation of church and state
The Anabaptists — whose history illustrates a continued rejection of sacralism.
References
External links
Dr. Martin Erdmann journalist and theologian. Book: Building the Kingdom of God on Earth (English)
Dr. Martin Erdmann journalist and theologian. Book: Der Griff zur Macht - Dominionismus der evangelikale Weg zu globalem Einfluss (German)
Dr. Martin Erdmann Video-Channel "Sacralism" (German)
collection of articles on the subjects of dominionism, sacralism and theocracy - Rachel Tabachnik, Dr. John MacArthur, Dr. Martin Erdmann, Rudolf Ebertshäuser, Sarah Leslie, Discernment Ministries Inc., and others (English + German)
Political science terminology
Religion and government