Iron Age Europe
In Europe, the Iron Age is the last stage of the prehistoric period and the first of the protohistoric periods, protohistory here meaning the time when an area is first described by outside (Greek and Roman) writers. For much of Europe, the period came to an abrupt end after conquest by the Romans, though ironworking remained the dominant technology until recent times. Elsewhere, the period lasted until the early centuries AD, ending with either Christianization or new conquests during the Migration Period.
Iron working was introduced to Europe in the late 11th century BC, probably from the Caucasus, and slowly spread northwards and westwards over the succeeding 500 years. For example, the Iron Age of Prehistoric Ireland begins around 500 BC, when the Greek Iron Age had already ended, and finishes around 400 AD. The use of iron and iron-working technology became widespread concurrently in Europe and Asia.
The start of the Iron Age is marked by new cultural groupings, or at least new terms for them, with Late Bronze Age Mycenaean Greece collapsing in some confusion, while in Central Europe the Urnfield culture had already given way to the Hallstatt culture. In northern Italy the Villanovan culture is regarded as the start of Etruscan civilization. Like its successor, the La Tène culture, Hallstatt is regarded as Celtic. Further to the east and north, and in Iberia and the Balkans, there are a number of local terms for the early Iron Age culture. Roman Iron Age is a term used in the archaeology of Northern Europe (but not Britain) for the period when the unconquered peoples of the area lived under the influence of the Roman Empire.
The Iron Age in Europe is characterized by an elaboration of designs in weapons, implements, and utensils. These are no longer cast but hammered into shape, and decoration is elaborately curvilinear rather than simply rectilinear; the forms and character of the ornamentation of northern European weapons resemble Roman arms in some respects, while in others they are peculiar and evidently representative of northern art.
Timeline
[Timeline of the European Iron Age by region, distinguishing the prehistoric (or proto-historic) Iron Age from the historic Iron Age. Dates are approximate; consult the particular article for details.]
Eastern Europe
The early first millennium BC marks the Iron Age in Eastern Europe. In the Pontic steppe and the Caucasus region, the Iron Age begins with the Koban culture and the Chernogorovka and Novocherkassk cultures. By 800 BC, it was spreading to the Hallstatt culture via the alleged "Thraco-Cimmerian" migrations.
Along with the Chernogorovka and Novocherkassk cultures, on the territory of ancient Russia and Ukraine the Iron Age is, to a significant extent, associated with the Scythians, who developed an iron culture from the 7th century BC. The majority of the remains of their iron-producing and blacksmithing industries from the 5th to 3rd centuries BC were found near Nikopol in Kamenskoye Gorodishche, which is believed to have been the specialized metallurgical region of ancient Scythia.
The Old Iron Age was an era of immense change in the lands inhabited by the Balts, i.e. the territories from the Vistula Lagoon and the Baltic Sea in the west to the Oka in the east, and between the Middle Dnieper in the south and northern Latvia in the north. In the first century AD, the Baltic peoples began mass production of iron from limonite, which was widely available in swamps. Local smiths learned to harden iron into steel, which yielded tougher weapons than stone or horn instruments.
Southeast Europe
In the Greek Dark Ages, edged iron weapons were widely available, but a variety of explanations fits the available archaeological evidence. From around 1200 BC, the palace centers and outlying settlements of the Mycenaean culture began to be abandoned or destroyed, and by 1050 BC, the recognizable cultural features (such as Linear B script) had disappeared.
The Greek alphabet emerged in the 8th century BC. It is descended from the Phoenician alphabet; the Greeks adapted the system, notably introducing characters for vowel sounds and thereby creating the first truly alphabetic (as opposed to abjad) writing system. As Greece sent colonists eastwards across the Black Sea and westwards towards Sicily and Italy (Pithekoussae, Cumae), the influence of their alphabet extended further. The ceramic Euboean artifact inscribed with a few lines in the Greek alphabet referring to "Nestor's Cup", discovered in a grave at Pithekoussae (Ischia), appears to be the oldest written reference to the Iliad. The fragmentary Epic Cycle, a collection of Ancient Greek epic poems relating the story of the Trojan War, was a distillation in literary form of an oral tradition developed during the Greek Dark Age. The traditional material from which the literary epics were drawn treats the Mycenaean Bronze Age culture from the perspective of Iron Age and later Greece.
Notable autochthonous peoples and tribes of Southeastern Europe organised themselves in large tribal unions, such as the Thracian Odrysian kingdom in the east of Southeastern Europe in the 5th century BC. By the 6th century BC the first written accounts of the territory north of the Danube appear in Greek sources. By this time the Getae (and later the Daci) had branched out from the Thracian-speaking populations.
Central Europe
In Central Europe, the Iron Age is generally divided into the early Iron Age Hallstatt culture (HaC and D, 800–450 BC) and the late Iron Age La Tène culture (beginning in 450 BC). The transition from bronze to iron in Central Europe is exemplified in the great cemetery of Hallstatt, discovered near Gmunden in 1846, where the forms of the implements and weapons of the later part of the Bronze Age are imitated in iron. In the Swiss or La Tène group of implements and weapons, the forms are new and the transition complete.
The Celtic culture, or rather Proto-Celtic groups, had expanded to much of Central Europe (Gauls), and, following the Gallic invasion of the Balkans in 279 BC, as far east as central Anatolia (Galatians). In Central Europe, the prehistoric Iron Age ends with the Roman conquest.
From the Hallstatt culture, the Iron Age spread westwards with the Celtic expansion from the 6th century BC. In Poland, the Iron Age reached the late Lusatian culture in about the 6th century BC, followed in some areas by the Pomeranian culture.
The ethnic ascription of many Iron Age cultures has been bitterly contested, as the roots of Germanic, Baltic and Slavic peoples were sought in this area.
Italy
In Italy, the Iron Age was probably introduced by the Villanovan culture, which succeeded the Bronze Age Proto-Villanovan culture in the territory of Tuscany and northern Latium and spread into parts of Romagna, Campania and Fermo in the Marche. The burial characteristics relate the Villanovan culture to the Central European Urnfield culture (to c. 750 BC) and to the 'Celtic' Hallstatt culture which succeeded it. Cremated remains were housed in double-cone shaped urns and buried. The Etruscan Old Italic alphabet spread throughout Italy from the 8th century BC. The Etruscan Iron Age ended with the rise of the Roman Republic, which conquered the last Etruscan city, Velzna, in 264 BC.
In Sardinia, iron working seems to have begun around the 13th–10th century BC with the Nuragic civilization, perhaps via Cyprus.
Western Europe
The 'Celtic' culture had expanded to the islands of northwest Europe (Insular Celts) and Iberia (Celtiberians, Celtici and Gallaeci). In the British Isles, the British Iron Age lasted from about 800 BC until the Roman conquest, and until the 5th century in non-Romanized areas. Structures dating from this time are often impressive, for example the brochs and duns of northern Scotland and the hillforts that dotted the islands. On the Iberian Peninsula, the Paleohispanic scripts began to be used between the 7th and 5th centuries BC; they remained in use until the end of the 1st century BC or the beginning of the 1st century AD.
In 2017, a Celtic warrior's grave, dated to about 320–174 BC, was discovered at a housing development under construction in Pocklington in the Yorkshire Wolds. After a lengthy excavation, the site was found to include a bronze shield, the remains of a chariot and the skeletons of ponies. The shield's boss bears a resemblance to the Wandsworth shield boss (c. 350–150 BC) owned by the British Museum. One design element on the extremely well-preserved Pocklington shield, a scalloped border, "is not comparable to any other Iron Age finds across Europe, adding to its valuable uniqueness", said Paula Ware, managing director at MAP Archaeological Practice Ltd, in late 2019. Horses were rarely included in Iron Age burials, making the find particularly significant. "The discoveries are set to widen our understanding of the Arras (Middle Iron Age) culture and the dating of artefacts to secure contexts is exceptional," according to Ware.
Northern Europe
The early Iron Age forms of Scandinavia show no traces of Roman influence, though such influences become abundant toward the middle of the period. The duration of the Iron Age is variously estimated according to how its commencement is placed nearer to or farther from the opening years of the Christian era, but it is generally agreed that its last division in Scandinavia, the Viking Period, lasted from about 700 to 1000 AD, when paganism in those lands was superseded by Christianity.
The Iron Age north of about the Rhine, beyond the Celts and then the Romans, is divided into two eras: the Pre-Roman Iron Age and the Roman Iron Age. In Scandinavia, further periods followed up to 1100: the Migration Period, the Vendel Period and the Viking Age. The earliest part of the Iron Age in northwestern Germany and southern Jutland was dominated by the Jastorf culture.
Early Scandinavian iron production typically involved the harvesting of bog iron. The Scandinavian peninsula, Finland and Estonia show sophisticated iron production from c. 500 BC. Metalworking and Ananyino culture pottery co-occur to some extent. Another iron ore used was ironsand (such as red soil). Its high phosphorus content can be identified in slag. Such slag is sometimes found together with asbestos-ceramic-associated axe types belonging to the Ananyino culture.
Transition to stationary agriculture due to the iron plough
In the Southern European climate, forests consisted of open evergreen and pine woodland. After slash-and-burn clearance, these forests had less capacity for regrowth than the forests north of the Alps.
In Northern Europe, usually only one crop could be harvested before grass growth took over, while in the south a cleared plot was used for several years and the soil was quickly exhausted. Slash-and-burn shifting cultivation therefore ceased much earlier in the south than in the north. Most of the forests in the Mediterranean had disappeared by classical times, although the classical authors still wrote about great forests (Semple 1931, 261–296).
Homer writes of wooded Samothrace, Zakynthos, Sicily and other wooded lands. These authors give the general impression that the Mediterranean countries once had more forest than now, but that much of it had already been lost and that what remained was left in the mountains (Darby 1956, 186).
It is clear that Europe remained wooded, and not only in the north. However, during the late Roman Iron Age and early Viking Age, forest areas were drastically reduced in Northern Europe, and settlements were regularly moved. There is no good explanation for this mobility, nor for the transition to stable settlements from the late Viking period onward, or for the shift from shifting cultivation to stationary use of arable land. At the same time, ploughs appear as a new group of implements, found both in graves and in hoards. It can be confirmed that early agricultural people preferred good-quality forest on well-drained hillsides, where traces of cattle enclosures are evident.
The Greek explorer and merchant Pytheas of Massalia made a voyage to Northern Europe c. 330 BC. Part of his itinerary has survived to this day thanks to the accounts of Polybius, Strabo and Pliny. Pytheas had visited Thule, which lay a six-day voyage north of Britain. There "the barbarians showed us the place where the sun does not go to sleep. It happened because there the night was very short, in some places two, in others three hours, so that the sun shortly after its fall soon went up again." He says that Thule was a fertile land, "rich in fruits that were ripe only until late in the year, and the people there used to prepare a drink of honey. And they threshed the grain in large houses, because of the cloudy weather and frequent rain. In the spring they drove the cattle up into the mountain pastures and stayed there all summer." This description may fit the Norwegian coast well; it is an early account of both dairy farming and drying and threshing indoors.
In Italy, shifting cultivation was already a thing of the past by the birth of Christ. Tacitus describes it as one of the strange cultivation methods he encountered among the Germans, whom he knew well from his stay with them. Rome was entirely dependent on shifting cultivation by the barbarians to survive and maintain the "Pax Romana", but when the supply from the colonies "trans alpina" began to wear out, the Roman Empire collapsed.
Tacitus, writing in AD 98 about the Germans, says that fields are taken up by the whole community in proportion to the number of cultivators and are then divided among them according to standing. Distribution is easy because there is ample access to land. They change fields every year, and land is left to spare, for they do not strive to wring from this fertile and vast land greater yields by planting orchards, fencing off meadows or watering gardens; grain is the only thing they demand the ground provide. Tacitus is here describing shifting cultivation.
The Migration Period in Europe, after the Roman Empire and immediately before the Viking Age, suggests that it was still more profitable for the peoples of Central Europe to move on to new forests once the best parcels were exhausted than to wait for new forest to grow up. The peoples of the temperate zone in Europe therefore remained slash-and-burn cultivators for as long as the forests permitted, and this exploitation of the forests explains their rapid and extensive movements. But the forest could not tolerate this in the long run; it gave out first in the Mediterranean, where the forest did not have the same vitality as the powerful coniferous forest of Central Europe. Deforestation was partly caused by burning to create pasture. The resulting shortage of timber led to higher prices and more stone construction in the Roman Empire (Stewart 1956, 123).
The forest also decreased gradually northwards in Europe, but in the Nordic countries it has survived. The clans of pre-Roman Italy seem to have lived in temporary locations rather than established cities. They cultivated small patches of land, guarded their sheep and cattle, traded with foreign merchants, and at times fought with one another: Etruscans, Umbrians, Ligurians, Sabines, Latins, Campanians, Apulians, Faliscans and Samnites, to mention just a few. These Italic ethnic groups developed identities as settlers and warriors. They built forts in the mountains, which are today a subject of much investigation. The forest has hidden them for a long time, but they will eventually provide information about the people who built and used these structures. The ruin of a large Samnite temple and theatre at Pietrabbondante is under investigation. These cultural relics have slumbered in the shadow of the glorious history of the Roman Empire.
Many of the Italic tribes realized the benefits of allying with the powerful Romans. When Rome built the Via Amerina in 241 BC, the Faliscan people established themselves in cities on the plains and collaborated with the Romans on road construction. The Roman Senate gradually gained representatives from many Faliscan and Etruscan families, and the Italic tribes became settled farmers (Zwingle, National Geographic, January 2005).
[Image: an edition of Commentarii de Bello Gallico from around AD 800.] Julius Caesar wrote about the Suebi in Commentarii de Bello Gallico, book 4.1: they have no private and separate fields, "privati ac separati agri apud eos nihil est", and they may not stay more than one year in one place for cultivation's sake, "neque longius anno remanere uno in loco colendi causa licet". The Suebi lived between the Rhine and the Elbe. About the Germans in general he wrote (book 6.22): no one has a fixed amount of land or boundaries of his own, for the magistrates and chiefs assign each year to the clans and kin-groups that have gathered together as much land, and in whatever place, as seems good to them, and a year later compel them to move on elsewhere: "Neque quisquam agri modum certum aut fines habet proprios, sed magistratus ac principes in annos singulos gentibus cognationibusque hominum, qui tum una coierunt, quantum et quo loco visum est agri attribuunt atque anno post alio transire cogunt."
Strabo (63 BC – about AD 20) also writes about the Suebi in Geographica VII, 1, 3: common to all the people in this area is that they can easily change residence because of their simple way of life; they do not cultivate fields or accumulate property, but live in temporary huts. They get their nourishment mostly from their livestock and, like nomads, pack all their goods in wagons and move on to wherever they want. Horace, writing in the 1st century BC (Carmina 3, 24, 9 ff.), says of the Getae that they live happily, growing free food and grain for themselves on land that they do not wish to cultivate for more than a year: "vivunt et rigidi Getae, immetata quibus iugera liberas fruges et Cererem ferunt, nec cultura placet longior annua." Several classical writers thus describe peoples practising shifting cultivation, and the shifting cultivation of many different peoples characterized the Migration Period in Europe. The exploitation of forests demanded constant displacement, and large areas were deforested.
[Map: locations of the tribes described by Jordanes in Norway, contemporary with, and in some cases possibly ruled by, Rodulf.] Jordanes was of Gothic descent and ended up as a monk in Italy. In his work De origine actibusque Getarum (The Origin and Deeds of the Getae/Goths), he provides information on the big island of Scandza, from which the Goths came. He relates that among the tribes living there are the Adogit, who live far north with 40 days of midnight sun. After the Adogit come the Screrefennae and the Suehans, who also live in the north. The Screrefennae moved about a great deal and did not sow crops, but made their living by hunting and collecting birds' eggs. The Suehans were a seminomadic tribe with fine horses, like the Thuringians, who hunted furs to sell the skins; it was too far north to grow grain. Procopius, writing c. AD 550, also describes a primitive hunting people he calls the Skrithifinoi. These pitiful creatures had neither wine nor corn, for they did not grow any crops: "Both men and women engaged incessantly just in hunting the rich forests and mountains, which gave them an endless supply of game and wild animals." The Screrefennae and Skrithifinoi were most likely the Sami, who are often referred to by names such as skridfinner, probably a later form derived from skrithifinoi or some similar spelling; the two older terms probably do not originate from either "ski" or "Finn". Furthermore, Jordanes' ethnographic description of Scandza lists several other tribes, among them the Finnaithae, "who were always ready for battle", and the Mixi, Evagre and Otingis, who are said to have lived like wild beasts in mountain caves; "further from them" lived the Osthrogoth, Raumariciae, Ragnaricii, Finnie, Vinoviloth and Suetidi, said to be prouder than other peoples.
Adam of Bremen describes Sweden around 1068, according to information he received from the Danish king Sven Estridsen (Sweyn II of Denmark): "It is very fruitful, the earth holds many crops and honey, it has greater livestock than all other countries, there are a lot of useful rivers and forests; with regard to women they do not know moderation, and according to their economic position they have two, three, or more wives simultaneously; the rich and the rulers are innumerable." The latter indicates a kind of extended-family structure, and the fact that forests are specifically mentioned as useful may be associated with shifting cultivation and livestock. The remark that livestock graze, "as with the Arabs, far out in the wilderness", can be interpreted in the same direction.
See also
Prehistoric Europe
Bronze Age Europe
Hallstatt culture
La Tène culture
Pre-Roman Iron Age
Roman Iron Age
Roman imperial period (chronology)
British Iron Age
References
Steampunk fashion
Steampunk fashion is a subgenre of the steampunk movement in science fiction. It is a mixture of the Victorian era's romantic view of science in literature and elements from the Industrial Revolution in Europe during the 1800s. Steampunk fashion consists of clothing, hairstyling, jewellery, body modification and make-up.
More modern takes on steampunk can include T-shirts with a variety of designs, or plain jeans accessorised with belts and gun holsters.
History
Steampunk fashion is a subgenre of the steampunk movement in science fiction. It is a mixture of the Victorian era's romantic view of science in literature and the industrialisation of most parts of Europe. The aesthetics of the fashion are designed with a post-apocalyptic era in mind. At the first steampunk convention, "SalonCon", in 2006, steampunk enthusiasts dressed up in costumes reflecting that era. The costumes included clothing, hairstyling, jewellery, body modification and make-up. Steampunk fashion has since gone on to include gadgets and contrasting accoutrements.
Initially, the clothes such as bustiers, bodices, jackets and other items were mostly handmade and customized, but as the steampunk movement grew, entrepreneurs and companies became interested and started to mass-produce steampunk clothing to be sold both online and in stores.
Since the genre emerged, the aesthetic of steampunk fashion has remained largely constant, though new ideas in literature and advancements in science and technology have resulted in subtle changes. Even though the genre did not become widely known until the late 20th century, steampunk and its fashion are said to have existed earlier.
During the 1980s and 1990s, steampunk fashion grew along with the goth and punk movements in fashion. Cyberpunk and dieselpunk fashion are variants of the steampunk fashion of the 1980s.
Inspiration from literature
Works of writers from the end of the 19th century, such as those of Robert Louis Stevenson, G. K. Chesterton and Sir Arthur Conan Doyle, are among the most influential for steampunk fashion. Those works attempted to domesticate Charles Dickens's London (from his industrial-age novels). Sci-fi critics John Clute and Peter Nicholls have noted that steampunk is also inspired by a "strain of nostalgia". However, modern steampunk literature, which began only in the 1980s, has also influenced steampunk fashion during the 2010s. Steampunk writers from that period are credited with creating fantasy tales set in cultures with a style borrowed from the Victorian era, with stories that include giddy action scenes and elaborate baroque expeditions.
Kevin Jeter's 1979 novel Morlock Night is held to be the first steampunk novel, and the beginning of steampunk fashion.
Aesthetic
Just like its counterparts in other art forms, primarily literature, steampunk fashion is based on the aesthetic of an alternate history. Even though critics disagree about it being rooted in fantasy literature, there are elements that suggest that part of its aesthetic is conceived from books and films that showcase alternate history using fantasy. Interest in steampunk aesthetics may also be due to an increased interest from the fashion industry in Victorian spiritualism during the 2010s.
Within the steampunk fashion, there are a number of personas, or archetypes, such as the valiant explorer with pith helmet, brass telescope and binoculars; the debonair aviator with birdlike gadgets and devices, leather helmet, brass goggles and canvas coat; and the gentleman, with a lab coat over formal clothes and belts for all sorts of implements and instruments.
Styles
Steampunk fashion is a mixture of fashion trends from different historical periods. Steampunk clothing adds the looks of 19th-century characters, such as explorers, soldiers, lords, countesses and harlots, to punk, contemporary street fashion, burlesque, goth, fetishism, vampire styles and frills, among others. Related to steampunk fashion is Lolita fashion, a strand that stands for a youthful expression of girlishness. Though they both take inspiration from the Victorian era, Lolita is more modest and focused on purchasing clothing from commercial vendors, as opposed to steampunk clothing, which is traditionally created from things bought in thrift stores.
The base of steampunk fashion is primarily influenced by the fashion of the mid-19th century. For women this fashion was often dominated by long, flowing dresses and regal jacket bodices, the latter extending over the hips and only occasionally matching the skirt fabric. In the beginning of the 1860s, the bodice ended at the waist. New styles emerged and the Garibaldi blouse made its appearance. During the early 1860s, the hoops of the skirts also took on an elliptical shape, with a much fuller back and a narrower front silhouette. The ensuing conical-shaped skirts have also inspired steampunk fashion. At the beginning of the 20th century, skirts that were flared at the hem became popular. Dresses for women were sometimes tailored in a masculine style and made to look intimidating. Evening wear for women was decorated with sparkling beads, bangles and gaudy embroideries. The hobble skirt was also introduced at that time.
Steampunk fashion did not originally include much jewelry, but a few accoutrements made their way into the style during the 2010s.
In steampunk fashion, corsets are more of a clothing item than an undergarment. Being rather conspicuous, they have more or less become synonymous with the genre. Corsets in brocade or leather, with steel-boning are a form of steampunk clothing inspired by the Victorian era.
Brass goggles have become a trademark for steampunk fashion. Brass items are also a kind of official, standard steampunk accessory. Goggles with intricate patterns on large, round frames are the most commonly used ones. Hats in steampunk fashion may include all kinds of headgear like flight helmets, bowler hats, pith helmets and pirates' bandanas. The headgear in steampunk fashion is also inspired by Victorian era fashion styles.
Many of the skirt and dress styles worn in steampunk fashion are derivative of Victorian-era silhouettes (bell skirts, trumpet skirts, bustled skirts, etc.), but with a sexier, modern twist.
In popular culture
In 2005, Kate Lambert, known professionally as "Kato", founded the first steampunk clothing company, "Steampunk Couture", mixing Victorian, post-apocalyptic and tribal influences as well as sci-fi, shabby chic and Harajuku/Mori girl elements. As early as 2010, high-fashion lines such as Prada, Dolce & Gabbana, Versace, Chanel and Christian Dior began introducing steampunk and neo-Victorian-inspired styles on the fashion runways. In episode 7 of Lifetime's "Project Runway: Under the Gunn" reality series, contestants were challenged to create avant-garde "steampunk chic" looks. Steampunk masks made by the Ukrainian design studio Bob Basset, described by William Gibson as "Probably the single best steampunk object I've seen", have been used by musicians including Sid Wilson of Slipknot and Zac Baird of Korn.
Since the early 2000s, steampunk fashion has been used frequently in films, photography and on television. Guy Ritchie's Sherlock Holmes and Warehouse 13 are examples of this. Films like The Golden Compass, Van Helsing, Sweeney Todd: The Demon Barber of Fleet Street (2007) and The Three Musketeers also include elements of steampunk ideas and steampunk fashion. Steampunk fashion has also been showing up in music, such as in Lindsey Stirling's music video "Roundtable Rival". Members of the alternative band Abney Park perform in steampunk attire.
The crime show Castle had a steampunk-themed episode in which the title character is shown wearing a complete steampunk outfit after meeting with a steampunk society.
America's Next Top Model tackled steampunk fashion in a 2012 episode in which models competed in a steampunk-themed photo shoot, posing in front of a steam train while holding a live owl.
Becky Lynch, a wrestler currently employed by WWE, uses ring attire influenced by steampunk fashion, most notably the goggles she wears during her entrance.
Trends
Steampunk fashion has evolved into a culture of imaginative dressing, inspired by the aesthetics of the past. Modern-day fashion critics have actively sought to deconstruct steampunk as a term and as a philosophy in the context of fashion. Modern trends in steampunk fashion are grouped into historical recreationists and sci-fi cosplayers. Since the first steampunk convention in 2006, SalonCon, there have been a number of similar conventions where enthusiasts dress up as characters from steampunk culture. Modern steampunk clothing is based more on leather and metal as opposed to cotton or natural fabrics. More recently, steampunk has also been linked to fetishism, the genderqueer community and modern paganism.
In 2010, steampunk fashion entered the high-end fashion market as designer John Galliano brought elements of the style to his spring haute couture show for Christian Dior. Another designer associated with the steampunk style is Jean Paul Gaultier, who frequently shows corsets in different materials in his collections. Steampunk fashion has since started to influence more mainstream fashion trends.
See also
Victorian decorative arts
Victorian fashion
Neo-Victorian
References
Further reading
External links
The Steampunk Workshop
15 Creative Works of Steampunk Art and Fashion: From Steam Tanks to Cufflinks
Steampunk Fashion examples for more everyday use
Steampunk costumes and accessories for cosplay
Analysis of European colonialism and colonization
Western European colonialism and colonization was the Western European policy or practice of acquiring full or partial political control over other societies and territories, founding a colony, occupying it with settlers, and exploiting it economically. Colonial policies, such as the type of rule implemented, the nature of investments, and the identity of the colonizers, are cited as impacting postcolonial states. Examination of the state-building process, economic development, and cultural norms and mores shows the direct and indirect consequences of colonialism on postcolonial states. It has been estimated that Britain and France traced almost 50% of the entire length of today's international boundaries as a result of their imperialism.
History of colonization and decolonization
The era of European colonialism can be defined by two big waves of colonialism: the first wave began in the 15th century, during the Age of Discovery, when some European powers vastly extended their reach around the globe by establishing colonies in the Americas and Asia. The second wave began during the 19th century and centered on Africa, in what is called the Scramble for Africa. The dismantling of European empires following World War II saw the process of decolonization begin in earnest. In 1941, President Franklin D. Roosevelt and British Prime Minister Winston Churchill jointly released the Atlantic Charter, which broadly outlined the goals of the U.S. and British governments. One of the main clauses of the charter acknowledged the right of all people to choose their own government. The document became the foundation for the United Nations, and all of its components were integrated into the UN Charter, giving the organization a mandate to pursue global decolonization.
Varieties of colonialism
Historians generally distinguish two main varieties established by European colonizers: the first is settler colonialism, where farms and towns were established by arrivals from Europe; the second is exploitation colonialism, purely extractive and exploitative colonies whose primary function was to develop economic exports. The two frequently overlapped or existed on a spectrum.
Settler colonialism
Settler colonialism is a form of colonization where foreign citizens move into a region and create permanent or temporary settlements called colonies. The creation of settler colonies often resulted in the forced migration of indigenous peoples to less desirable territories. This practice is exemplified in the colonies established in what became the United States, New Zealand, Namibia, South Africa, Canada, Brazil, Uruguay, Chile, Argentina, Israel and Australia. Native populations frequently suffered population collapse due to contact with new diseases.
The resettlement of indigenous peoples frequently occurs along demographic lines, but the central stimulus for resettlement is access to desirable territory. Regions free of tropical disease with easy access to trade routes were favorable. When Europeans settled in these desirable territories, natives were forced out and regional power was seized by the colonialists. This type of colonial behavior led to the disruption of local customary practices and the transformation of socioeconomic systems. Ugandan academic Mahmood Mamdani cites "the destruction of communal autonomy, and the defeat and dispersal of tribal populations" as one primary factor in colonial oppression. As agricultural expansion continued through the territories, native populations were further displaced to clear fertile farmland.
Daron Acemoglu, James A. Robinson, and Simon Johnson theorize that Europeans were more likely to form settler colonies in areas where they would not face high mortality rates due to disease and other exogenous factors. Many settler colonies sought to establish European-like institutions and practices that granted certain personal freedoms and allowed settlers to become wealthy by engaging in trade. Thus, jury trials, freedom from arbitrary arrest, and electoral representation were implemented to allow settlers rights similar to those enjoyed in Europe, though these rights generally did not apply to the indigenous people.
Exploitation colonialism
Exploitation colonialism is a form of colonization where foreign armies conquer a country in order to control and capitalize on its natural resources and indigenous population. Acemoglu, Johnson, and Robinson argue, "institutions [established by colonials] did not introduce much protection for private property, nor did they provide checks and balances against government expropriation. In fact, the main purpose of the extractive state was to transfer as much of the resources of the colony to the colonizer, with the minimum amount of investment possible." Since these colonies were created with the intent to extract resources, colonial powers had no incentives to invest in institutions or infrastructure that did not support their immediate goals. Thus, Europeans established authoritarian regimes in these colonies, which had no limits on state power.
The policies and practices carried out by King Leopold II of Belgium as the absolute ruler of the Congo Free State in the Congo Basin are an extreme example of exploitation colonialism. E. D. Morel detailed the atrocities in multiple articles and books. Morel believed the Leopoldian system that eliminated traditional, commercial markets in favor of pure exploitation was the root cause of the injustice in the Congo. Under the "veil of philanthropic motive", King Leopold received the consent of multiple international governments (including the United States, Great Britain, and France) to assume trusteeship of the vast region in order to support the elimination of the slave trade. Leopold positioned himself as proprietor of an area totaling nearly one million square miles, which was home to nearly 20 million Africans.
After establishing dominance in the Congo Basin, Leopold extracted large quantities of ivory, rubber, and other natural resources. It has been estimated that Leopold made $1.1 billion (in 2005 dollars) by employing a variety of exploitative tactics. Soldiers demanded unrealistic quantities of rubber be collected by African villagers, and when these quotas were not met, the soldiers held women hostage, beat or killed the men, and burned crops. These and other forced labor practices caused the birth rate to decline as famine and disease spread. All of this was done at very little monetary cost. M. Crawford Young observed, "[the concessionary companies] brought little capital – a mere 8000 pounds ... [to the Congo basin] – and instituted a reign of terror sufficient to provoke an embarrassing public-protest campaign in Britain and the United States at a time when the threshold of toleration for colonial brutality was high."
The system of government implemented in the Congo by Leopold and later Belgium was authoritarian and oppressive. Multiple scholars view the roots of authoritarianism under Mobutu as the result of colonial practices.
Indirect and direct rule of the colonial political system
Systems of colonial rule can be broken into the binary classifications of direct and indirect rule. During the era of colonization, Europeans were faced with the monumental task of administering the vast colonial territories around the globe. The initial solution to this problem was direct rule, which involves the establishment of a centralized European authority within a territory, run by colonial officials. In a system of direct rule, the native population is excluded from all but the lowest levels of the colonial government. Mamdani defines direct rule as centralized despotism: a system in which natives were not considered citizens. By contrast, indirect rule integrates pre-established local elites and native institutions into the administration of the colonial government, maintaining pre-colonial institutions and fostering development within the local culture. Mamdani classifies indirect rule as "decentralized despotism", where day-to-day operations were handled by local chiefs but true authority rested with the colonial powers.
Indirect rule
In certain cases, as in India, the colonial power directed all decisions related to foreign policy and defense, while the indigenous population controlled most aspects of internal administration. This led to autonomous indigenous communities that were under the rule of local tribal chiefs or kings. These chiefs were either drawn from the existing social hierarchy or were newly minted by the colonial authority. In areas under indirect rule, traditional authorities acted as intermediaries for the “despotic” colonial rule, while the colonial government acted as an advisor and only interfered in extreme circumstances. Often, with the support of the colonial authority, natives gained more power under indirect colonial rule than they had in the pre-colonial period. Mamdani points out that indirect rule was the dominant form of colonialism and therefore most who were colonized bore colonial rule that was delivered by their fellow natives.
The purpose of indirect rule was to allow natives to govern their own affairs through "customary law." In practice, though, the native authority decided on and enforced its own unwritten rules with the support of the colonial government. Rather than following the rule of law, local chiefs enjoyed judicial, legislative, executive, and administrative power, exercised with considerable legal arbitrariness.
Direct rule
In systems of direct rule, European colonial officials oversaw all aspects of governance, while natives were placed in an entirely subordinate role. Unlike indirect rule, the colonial government did not convey orders through local elites but oversaw administration directly. European laws and customs were imported to supplant traditional power structures. Joost van Vollenhoven, Governor-General of French West Africa (1917–1918), described the role of the traditional chiefs by saying, "his functions were reduced to that of a mouthpiece for orders emanating from the outside...[The chiefs] have no power of their own of any kind. There are not two authorities in the cercle, the French authority and the native authority; there is only one." The chiefs were therefore ineffective and not highly regarded by the indigenous population. There were even instances where people under direct colonial rule secretly elected a real chief in order to retain traditional rights and customs.
Direct rule deliberately removed traditional power structures in order to implement uniformity across a region. The desire for regional homogeneity was the driving force behind the French colonial doctrine of Assimilation. The French style of colonialism stemmed from the idea that the French Republic was a symbol of universal equality. As part of a civilizing mission, European principles of equality were translated into legislation abroad. For the French colonies, this meant the enforcement of the French penal code, the right to send a representative to parliament, and the imposition of tariff laws as a form of economic assimilation. Requiring natives to assimilate in these and other ways created a ubiquitous, European-style identity that made no attempt to protect native identities. Indigenous people living in colonized societies were obliged to obey European laws and customs or be deemed "uncivilized" and denied access to any European rights.
Comparative outcomes between indirect and direct rule
Both direct and indirect rule have persistent, long-term effects on the success of former colonies. Lakshmi Iyer, of Harvard Business School, conducted research to determine the impact the type of rule can have on a region, looking at postcolonial India, where both systems were present under British rule. Iyer's findings suggest that regions which had previously been ruled indirectly were generally better governed and more capable of establishing effective institutions than areas under direct British rule. In the modern postcolonial period, areas formerly ruled directly by the British perform worse economically and have significantly less access to various public goods, such as health care, public infrastructure, and education.
In his book Citizen and Subject: Contemporary Africa and the Legacy of Colonialism, Mamdani claims the two types of rule were two sides of the same coin. He explains that colonizers did not exclusively use one system of rule over the other. Instead, European powers divided regions along urban-rural lines and instituted separate systems of government in each area. Mamdani refers to this formal division of rural and urban natives by colonizers as the "bifurcated state." Urban areas were ruled directly by the colonizers under an imported system of European law, which did not recognize the validity of native institutions. In contrast, rural populations were ruled indirectly by customary and traditional law and were therefore subordinate to the "civilized" urban citizenry. Rural inhabitants were viewed as "uncivilized" subjects and were deemed unfit to receive the benefits of citizenship. The rural subjects, Mamdani observed, had only a "modicum of civil rights" and were entirely excluded from political rights.
Mamdani argues that current issues in postcolonial states are the result of colonial government partition, rather than simply poor governance as others have claimed. Current systems — in Africa and elsewhere — are riddled with an institutional legacy that reinforces a divided society. Using the examples of South Africa and Uganda, Mamdani observed that, rather than doing away with the bifurcated model of rule, postcolonial regimes have reproduced it. Although he uses only two specific examples, Mamdani maintains that these countries are simply paradigms representing the broad institutional legacy colonialism left on the world. He argues that modern states have only accomplished "deracialization" and not democratization following their independence from colonial rule. Instead of pursuing efforts to link their fractured society, centralized control of the government stayed in urban areas and reform focused on “reorganizing the bifurcated power forged under colonialism.” Native authorities that operated under indirect rule have not been brought into the mainstream reformation process; instead, development has been “enforced” on the rural peasantry. In order to achieve autonomy, successful democratization, and good governance, states must overcome their fundamental schisms: urban versus rural, customary versus modern, and participation versus representation.
Colonial actions and their impacts
European colonizers engaged in various actions around the world that had both short-term and long-term consequences for the colonized. Numerous scholars have attempted to analyze and categorize colonial activities by determining whether they had positive or negative outcomes. Stanley Engerman and Kenneth Sokoloff categorized activities driven by regional factor endowments according to whether they were associated with high or low levels of economic development. Acemoglu, Johnson, and Robinson attempted to understand what institutional changes caused previously rich countries to become poor after colonization. Melissa Dell documented the persistent, damaging effects of colonial labor exploitation under the mit'a mining system in Peru, showing significant differences in height and road access between former mit'a and non-mit'a communities. Miriam Bruhn and Francisco A. Gallego employed a simple tripartite classification: good, bad, and ugly. Regardless of the system of classification, the fact remains that colonial actions produced varied outcomes which continue to be relevant.
In trying to assess the legacy of colonization, some researchers have focused on the types of political and economic institutions that existed before the arrival of Europeans. Heldring and Robinson conclude that while colonization in Africa had overall negative consequences for political and economic development in areas that had pre-existing centralized institutions or that hosted white settlements, it possibly had a positive impact in areas that were virtually stateless, like South Sudan or Somalia. In a complementary analysis, Gerner Hariri observed that areas outside Europe which had state-like institutions before 1500 tend to have less open political systems today. According to the scholar, this is because, during colonization, European liberal institutions were not easily implemented. Beyond military and political advantages, it is possible to explain the domination of European countries over non-European areas by the fact that capitalism did not emerge as the dominant economic institution elsewhere. As Ugo Pipitone argues, prosperous economic institutions that sustain growth and innovation did not prevail in areas like China, the Arab world, or Mesoamerica because of the excessive control of these proto-states over private matters.
Another angle that can be considered when assessing colonial impacts is examining the institutions that formed across Africa after the withdrawal of European colonizers. In many cases, colonial rule led to the development of weak and flawed institutions in postcolonial Africa. Levitsky and Murillo further examine the importance of institutions with their research on the factors that contribute to institutional strength. They define rule enforcement and institutional stability (durability) as the main factors contributing to the success of an institution. In Africa, formal institutions had low stability and weak enforcement, leading to the emergence of dysfunctional institutions. A major source of the low institutional stability in African countries was the colonial partitioning of African borders, leading to political violence and ethnic conflict. Additionally, weak enforcement in Africa often stems from the creation of "window-dressing" institutions, where superficial democratic policies are implemented to feign democracy; however, these policies are rarely enforced.
Douglass North argues that institutional change is incremental and is a result of "path dependency", meaning that seemingly insignificant historical events can have major impacts on the formation of eventual institutions. These arguments follow W. Brian Arthur's theories on path dependency, in which he states that market lock-in to a subpar technology is determined by "small-event history". Thus, the colonial history of Africa becomes relevant, as the decisions of European colonizers have shaped contemporary African economic and political structures, and African institutions were affected as a result. Collectively, these theories from Levitsky and Murillo, North, and Arthur explain how colonialism led to the development and persistence of suboptimal African institutions.
Reorganization of borders
Defining borders
Throughout the era of European colonization, those in power routinely partitioned land masses and created borders that are still in place today. It has been estimated that Britain and France traced almost 40% of the entire length of today's international boundaries. Sometimes boundaries were naturally occurring, like rivers or mountains, but other times these borders were artificially created and agreed upon by colonial powers. The Berlin Conference of 1884 systemized European colonization in Africa and is frequently acknowledged as the genesis of the Scramble for Africa. The Conference implemented the Principle of Effective Occupation in Africa which allowed European states with even the most tenuous connection to an African region to claim dominion over its land, resources, and people. In effect, it allowed for the arbitrary construction of sovereign borders in a territory where they had never previously existed.
Jeffrey Herbst has written extensively on the impact of state organization in Africa. He notes that, because the borders were artificially created, they generally do not conform to "typical demographic, ethnographic, and topographic boundaries." Instead, they were manufactured by colonialists to advance their political goals. This led to large-scale issues, like the division of ethnic groups, and small-scale issues, such as families' homes being separated from their farms.
William F. S. Miles, of Northeastern University, argues that this perfunctory division of the entire continent created expansive ungoverned borderlands. These borderlands persist today and are havens for crimes like human trafficking and arms smuggling.
Modern preservation of the colonially defined borders
Herbst notes a modern paradox regarding the colonial borders in Africa: while they are arbitrary, there is a consensus among African leaders that they must be maintained. The Organization of African Unity in 1963 cemented colonial boundaries permanently by proclaiming that any changes made to them were illegitimate. This, in effect, avoided readdressing the basic injustice of colonial partition, while also reducing the likelihood of inter-state warfare, as territorial boundaries were considered immutable by the international community.
Modern national boundaries are thus remarkably invariable, though the stability of the nation-states themselves has not followed suit. Some African states are plagued by internal issues such as an inability to effectively collect taxes and weak national identities. Lacking any external threats to their sovereignty, these countries have failed to consolidate power, leading to weak or failed states.
Though the colonial boundaries sometimes caused internal strife and hardship, some present day leaders benefit from the desirable borders their former colonial overlords drew. For example, Nigeria's inheritance of an outlet to the sea — and the trading opportunities a port affords — gives the nation a distinct economic advantage over its neighbor, Niger. Effectively, the early carving of colonial space turned naturally occurring factor endowments into state controlled assets.
Differing colonial investments
When European colonials entered a region, they invariably brought new resources and capital management. Different investment strategies were employed, which included focuses on health, infrastructure, or education. All colonial investments have had persistent effects on postcolonial societies, but certain types of spending have proven to be more beneficial than others. French economist Élise Huillery conducted research to determine specifically what types of public spending were associated with high levels of current development. Her findings were twofold. First, Huillery observes that the nature of colonial investments can directly influence current levels of performance. Increased spending in education led to higher school attendance; additional doctors and medical facilities decreased preventable illnesses in children; and a colonial focus on infrastructure translated into more modernized infrastructure today. Adding to this, Huillery also learned that early colonial investments instituted a pattern of continued spending that directly influenced the quality and quantity of public goods available today.
Land, property rights, and labor
Land and property rights
According to Mahmood Mamdani, prior to colonization, indigenous societies did not necessarily consider land private property. Instead, land was a communal resource that everyone could utilize. Once natives began interacting with colonial settlers, a long history of land abuse followed. Extreme examples of this include the Trail of Tears, a series of forced relocations of Native Americans following the Indian Removal Act of 1830, and the apartheid system in South Africa. Australian anthropologist Patrick Wolfe points out that in these instances natives were not only driven off the land, but the land was then transferred to private ownership. He believes that the "frenzy for native land" was driven by economic immigrants from the ranks of Europe's landless.
Making a seemingly contradictory argument, Acemoglu, Johnson, and Robinson view strong property rights and ownership as an essential component of institutions that produce higher per capita income. They expand on this by saying property rights give individuals the incentive to invest, rather than stockpile, their assets. While this may appear to further encourage colonialists to exert their rights through exploitative behaviors, it instead offers protection to native populations and respects their customary ownership laws. Looking broadly at the European colonial experience, Acemoglu, Johnson, and Robinson explain that exploitation of natives occurred where stable property rights intentionally did not exist; these rights were never implemented, in order to facilitate the predatory extraction of resources from indigenous populations. Bringing the colonial experience to the present, they maintain that broad property rights set the stage for the effective institutions that are fundamental to strong democratic societies. An example of Acemoglu, Johnson, and Robinson's hypothesis is found in the work of La Porta et al. In a study of the legal systems of various countries, La Porta et al. found that in places colonized by the United Kingdom which kept its common-law system, the protection of property rights is stronger than in countries that kept the French civil law.
In the case of India, Abhijit Banerjee and Lakshmi Iyer found divergent legacies of the British land tenure system. Areas where property rights over land were given to landlords registered lower productivity and agricultural investment in postcolonial years than areas where land tenure was dominated by cultivators. The former areas also have lower levels of investment in health and education.
English philosopher John Locke's theory of property lent support to settler colonialism by holding that land belonged to those who made productive use of it.
Labor exploitation
Prominent Guyanese scholar and political activist Walter Rodney wrote at length about the economic exploitation of Africa by the colonial powers. In particular, he saw laborers as an especially abused group. While a capitalist system almost always employs some form of wage labor, the dynamic between laborers and colonial powers left the way open for extreme misconduct. According to Rodney, African workers were more exploited than Europeans because the colonial system produced a complete monopoly on political power and left the working class small and incapable of collective action. Combined with deep-seated racism, this dynamic presented native workers with impossible circumstances. The racism and sense of superiority felt by the colonizers enabled them to justify the systematic underpayment of Africans even when they were working alongside European workers. Colonialists further defended their disparate incomes by claiming a higher cost of living. Rodney challenged this pretext, asserting that the European quality of life and cost of living were only possible because of the exploitation of the colonies and that African living standards were intentionally depressed in order to maximize revenue. In its wake, Rodney argues, colonialism left Africa vastly underdeveloped and without a path forward.
Societal consequences of colonialism
Ethnic identity
The colonial changes to ethnic identity have been explored from political, sociological, and psychological perspectives. In his book The Wretched of the Earth, French Afro-Caribbean psychiatrist and revolutionary Frantz Fanon claims the colonized must “ask themselves the question constantly: ‘who am I?’” Fanon uses this question to express his frustration with the fundamentally dehumanizing character of colonialism. Colonialism, in any form, was rarely an act of simple political control. Fanon argues that the very act of colonial domination has the power to warp the personal and ethnic identities of natives because it operates under the assumption of perceived superiority. Natives are thus entirely divorced from their ethnic identities, which have been replaced by a desire to emulate their oppressors.
Ethnic manipulation manifested itself beyond the personal and internal spheres. Scott Straus from the University of Wisconsin describes the ethnic identities that partially contributed to the Rwandan genocide. In April 1994, following the assassination of Rwanda's President Juvénal Habyarimana, Hutus of Rwanda turned on their Tutsi neighbors and slaughtered between 500,000 and 800,000 people in just 100 days. While politically this situation was incredibly complex, the influence ethnicity had on the violence cannot be ignored. Before the German colonization of Rwanda, the identities of Hutu and Tutsi were not fixed. Germany ruled Rwanda through the Tutsi dominated monarchy and the Belgians continued this following their takeover. Belgian rule reinforced the difference between Tutsi and Hutu. Tutsis were deemed superior and were propped up as a ruling minority supported by the Belgians, while the Hutu were systematically repressed. The country's power later dramatically shifted following the so-called Hutu Revolution, during which Rwanda gained independence from their colonizers and formed a new Hutu-dominated government. Deep-seated ethnic tensions did not leave with the Belgians. Instead, the new government reinforced the cleavage.
Religious changes
Religion was one of the key parts of colonial societies that were changed and manipulated, and Ghana was one of the countries most affected in this respect by British colonial rule. Jedwab, Meier zu Selhausen, and Moradi (2022) argue that the introduction of Christianity is one of the main reasons Ghana still struggles to balance two societies in the modern day. "By 1932 the number of missions had expanded to 1,882 with 340,000 followers." At the time this was 9% of the population; by 2020, reportedly, "The Christian share has since grown to 80%." Christianity unsettled traditional African religious beliefs as well as economic and political stability. This occurred not only in Ghana but across colonized countries. Congo, one of the worst-affected countries, had rules imposed upon it such as a ban on the practice of non-European religions. Oliver (1952) and Cleall (2009) argued that the missionaries sent to teach the native people were introduced "with little to no information on local circumstances, crossing political boundaries and whose objective was to save souls no matter the cost.” This caused significant damage in the short term and especially in the long term, leaving countries struggling to manage the resulting religious divisions, which in turn fed civil wars and infighting.
Civil society
Joel Migdal of the University of Washington believes weak postcolonial states have issues rooted in civil society. Rather than seeing the state as a singular dominant entity, Migdal describes “weblike societies” composed of social organizations. These organizations are a melange of ethnic, cultural, local, and familial groups, and they form the basis of society. The state is simply one actor in a much larger framework. Strong states are able to effectively navigate the intricate societal framework and exert social control over people's behavior. Weak states, on the other hand, are lost amongst the fractionalized authority of a complex society.
Migdal expands his theory of state-society relations by examining Sierra Leone. At the time of Migdal's publication (1988), the country's leader, President Joseph Saidu Momoh, was widely viewed as weak and ineffective. Just three years later, the country erupted into civil war, which continued for nearly 11 years. The basis for this tumultuous time, in Migdal's estimation, was the fragmented social control implemented by British colonizers. Using the typical British system of indirect rule, colonizers empowered local chiefs to mediate British rule in the region, and in turn, the chiefs exercised social control. After achieving independence from Great Britain, the chiefs remained deeply entrenched and did not allow for the necessary consolidation of power needed to build a strong state. Migdal remarked, “Even with all the resources at their disposal, even with the ability to eliminate any single strongman, state leaders found themselves severely limited.” It is necessary for the state and society to form a mutually beneficial symbiotic relationship in order for each to thrive. The peculiar nature of postcolonial politics makes this increasingly difficult.
Linguistic discrimination
In settler colonies, indigenous languages were often lost either as indigenous populations were decimated by war and disease, or as aboriginal tribes mixed with colonists. On the other hand, in exploitation colonies such as India, colonial languages were usually only taught to a small local elite. The linguistic differences between the local elite and other locals exacerbated class stratification, and also increased inequality in access to education, industry and civic society in postcolonial states.
Sport
Various traditional games played in different countries were displaced by Western sports during the colonial era. This effect was notable in British colonies, as the British invented many of what later became the world's most popular sports during the colonial era and propagated these sports in part because they allowed for the perpetuation of class and racial divides beneficial to them, and in part due to the belief that they would help spread Britain's civilising values. Towards the end of the colonial era, colonizers' sports often played a significant role in the colonies' independence movements, as sport became an avenue for colonized peoples to work together and prove their equality. After the colonial era, Western sports often became an important part of nation-building and international relations for former colonies; for example, cricket played a significant role in bringing Indian people together and allowed India to engage in "cricket diplomacy" with Pakistan, a country with which it has had significant tensions. Western sport has also played a role in fighting racism, as when South Africa was banned from most international sports during the apartheid era.
Ecological impacts of colonialism
European colonialism spread contagious diseases between Europeans and subjugated peoples.
Countering disease
The Spanish Crown organised a mission (the Balmis expedition) to transport the smallpox vaccine and establish mass vaccination programs in its colonies in 1803. By 1832, the federal government of the United States had established a smallpox vaccination program for Native Americans. Under the direction of Mountstuart Elphinstone, a program was launched to increase smallpox vaccination in India.
From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a necessity for all colonial powers. The sleeping sickness epidemic in Africa was arrested due to mobile teams systematically screening millions of people at risk. The biggest population increases in human history occurred during the 20th century due to the decreasing mortality rate in many countries due to medical advances.
Colonial policies contributing to indigenous deaths from disease
In his book A National Crime: The Canadian Government and the Residential School System, 1879 to 1986 (1999), John S. Milloy published evidence indicating that Canadian authorities had intentionally concealed information on the spread of disease. According to Milloy, the Government of Canada was aware of the origins of many diseases but maintained a policy of secrecy. Medical professionals knew of this policy and, further, knew it was causing a higher death rate among indigenous people, yet the policy continued.
Evidence suggests that government policy was not to treat natives infected with tuberculosis or smallpox, and that native children infected with smallpox and tuberculosis were deliberately sent back to their homes and into native villages by residential school administrators. Within the residential schools there was no segregation of sick students from healthy ones, and students infected with deadly illnesses were frequently admitted to the schools, where infections spread among the healthy students and resulted in deaths; death rates were at least 24% and as high as 69%.
Tuberculosis was the leading cause of death in Europe and North America in the 19th century, accounting for about 40% of working-class deaths in cities, and by 1918 one in six deaths in France were still caused by tuberculosis. European governments, and medical professionals in Canada, were well aware that tuberculosis and smallpox were highly contagious, and that deaths could be prevented by taking measures to quarantine patients and inhibit the spread of the disease. They failed to do this, however, and imposed laws that in fact ensured that these deadly diseases spread quickly among the indigenous population. Despite the high death rate among students from contagious disease, in 1920 the Canadian government made attendance at residential schools mandatory for native children, threatening non-compliant parents with fines and imprisonment. John S. Milloy argued that these policies regarding disease were not conventional genocide, but rather policies of neglect aimed at assimilating natives.
Some historians, such as Roland Chrisjohn, director of Native Studies at St. Thomas University, have argued that some European colonists, having discovered that indigenous populations were not immune to certain diseases, deliberately spread diseases to gain military advantages and subjugate local peoples. In his book The Circle Game: Shadows and Substance in the Indian Residential School Experience in Canada, Chrisjohn argues that the Canadian government followed a deliberate policy amounting to genocide against native populations. During the siege of British-held Fort Pitt in Pontiac's War, the fort's commander, Simeon Ecuyer and his subordinate William Trent distributed blankets infected with smallpox to a Lenape delegation outside the fort. During the conflict, Colonel Henry Bouquet discussed plans to deliberately infect hostile Native American tribes with his superior, General Sir Jeffery Amherst, who wrote back approvingly of Bouquet's suggestion. Historians have been divided on the effectiveness of this particular incident in causing a smallpox outbreak among Native Americans in the region, though it has been recognized as one of the first instances of biological warfare. During the 1837 Great Plains smallpox epidemic, some scholars argued that the U.S. Army intentionally spread smallpox to Native American tribes, with scholar Ann F. Ramenofsky stating that "in the nineteenth century, the U.S. Army sent contaminated blankets to Native Americans, especially Plains groups, to control the Indian problem."
Historic debates surrounding colonialism
Bartolomé de Las Casas (1484–1566) was the first Protector of the Indians appointed by the Spanish Crown. During his time in the Spanish West Indies, he witnessed many of the atrocities committed by Spanish colonists against the natives. After this experience, he reformed his view on colonialism and determined the Spanish people would suffer divine punishment if the gross mistreatment in the Indies continued. De Las Casas detailed his opinion in his book The Destruction of the Indies: A Brief Account (1552).
During the sixteenth century, Spanish priest and philosopher Francisco Suarez (1548–1617) expressed his objections to colonialism in his work De Bello et de Indis (On War and the Indies). In this text and others, Suarez supported natural law and conveyed his beliefs that all humans had rights to life and liberty. Along these lines, he argued for the limitation of the imperial powers of Charles V, Holy Roman Emperor by underscoring the natural rights of indigenous people. Accordingly, native inhabitants of the colonial Spanish West Indies deserved independence and each island should be considered a sovereign state with all the legal powers of Spain.
French writer Denis Diderot was openly critical of ethnocentrism and European colonialism in Tahiti. In a series of philosophical dialogues entitled Supplément au voyage de Bougainville (1772), Diderot imagines several conversations between Tahitians and Europeans. The two speakers discuss their cultural differences, which acts as a critique of European culture.
Modern theories of colonialism
The effects of European colonialism have consistently drawn academic attention in the decades since decolonization. New theories continue to emerge. The field of colonial and postcolonial studies is now offered as a major at universities around the globe.
Dependency theory
Dependency theory is an economic theory which postulates that advanced, industrialized "metropolitan" or "core" nations have been able to develop because of the existence of less-developed "satellite" or "periphery" states. Satellite nations are anchored to, and subordinate to, metropolitan countries because of the international division of labor. Satellite countries are thus dependent on metropolitan states and incapable of charting their own economic paths.
The theory was introduced in the 1950s by Raúl Prebisch, Director of the United Nations Economic Commission for Latin America, after he observed that economic growth in wealthy countries did not translate into economic growth in poor countries. Dependency theorists believe this is due to the import-export relationship between rich and poor countries. Walter Rodney, in his book How Europe Underdeveloped Africa, used this framework when observing the relationship between European trading companies and African peasants living in postcolonial states. Through the labor of peasants, African countries gathered large quantities of raw materials. Rather than exporting these materials directly to Europe, states had to work with a number of trading companies, which collaborated to keep purchase prices low. The trading companies then sold the materials to European manufacturers at inflated prices. Finally, the manufactured goods were returned to Africa at prices so high that laborers were unable to afford them. This led to a situation in which the individuals who labored extensively to gather the raw materials were unable to benefit from the finished goods.
Neocolonialism
Neocolonialism is the continued economic and cultural control of countries that have been decolonized. The first documented use of the term was by Kwame Nkrumah, the first President of Ghana, in the preamble of the 1963 charter of the Organization of African Unity. Nkrumah expanded the concept of neocolonialism in the book Neo-Colonialism, the Last Stage of Imperialism (1965). In Nkrumah's estimation, traditional forms of colonialism have ended, but many African states are still subject to external political and economic control by Europeans. Neocolonialism is related to dependency theory in that both acknowledge the financial exploitation of poor countries by the rich, but neocolonialism also includes aspects of cultural imperialism. Rejection of cultural neocolonialism formed the basis of négritude philosophy, which sought to eliminate colonial and racist attitudes by affirming the values of "the black world" and embracing "blackness".
Benign colonialism
Benign colonialism is a theory of colonialism in which the benefits allegedly outweigh the negatives for indigenous populations whose lands, resources, rights and freedoms come under the control of a colonising nation-state. The historical source for the concept of benign colonialism resides with John Stuart Mill (1806–1873), who served as chief examiner of the British East India Company, dealing with British interests in India, in the 1820s and 1830s. Mill's best-known essays on benign colonialism appear in "Essays on Some Unsettled Questions of Political Economy."
Mill's view contrasted with Burkean orientalists. Mill promoted the training of a corps of bureaucrats indigenous to India who could adopt the modern liberal perspective and values of 19th-century Britain. Mill predicted this group's eventual governance of India would be based on British values and perspectives.
Advocates of the concept of benign colonialism cite improved standards in health and education, in employment opportunities, in liberal markets, in the development of natural resources and in introduced governance. The first wave of benign colonialism lasted from c. 1790–1960, according to Mill's concept. The second wave included neocolonial policies exemplified in Hong Kong, where unfettered expansion of the market created a new form of benign colonialism. Political interference and military intervention in independent nation-states, such as Iraq, is also discussed under the rubric of benign colonialism, in which a foreign power preempts national governance to protect a higher concept of freedom. The term is also used in the 21st century to refer to US, French and Chinese market activities in African countries with massive quantities of underdeveloped nonrenewable natural resources.
These views have support from some academics. Economic historian Niall Ferguson (born 1964) argues that empires can be a good thing provided that they are "liberal empires". He cites the British Empire as being the only example of a "liberal empire" and argues that it maintained the rule of law, benign government, free trade and, with the abolition of slavery, free labor. Historian Rudolf von Albertini agrees that, on balance, colonialism can be good. He argues that colonialism was a mechanism for modernisation in the colonies and imposed a peace by putting an end to tribal warfare.
Historians L. H. Gann and Peter Duignan have also argued that Africa probably benefited from colonialism on balance. Although it had its faults, colonialism was probably "one of the most efficacious engines for cultural diffusion in world history". The economic historian David Kenneth Fieldhouse has taken a kind of middle position, arguing that the effects of colonialism were actually limited and their main weakness was not in deliberate underdevelopment but in what it failed to do. Niall Ferguson agrees with his last point, arguing that colonialism's main weaknesses were sins of omission. Marxist historian Bill Warren has argued that whilst colonialism may be bad because it relies on force, he views it as being the genesis of Third World development.
However, history records few cases in which two or more peoples have met and mingled without generating some sort of friction. The clearest cases of "benign" colonialism occur where the colonized land is minimally populated (as with Iceland in the 9th century) or entirely terra nullius (such as the Falkland Islands).
See also
References
Further reading
Albertini, Rudolf von. European Colonial Rule, 1880–1940: The Impact of the West on India, Southeast Asia, and Africa (1982) 581pp
Betts, Raymond F. The False Dawn: European Imperialism in the Nineteenth Century (1975)
Betts, Raymond F. Uncertain Dimensions: Western Overseas Empires in the Twentieth Century (1985)
Black, Jeremy. European International Relations, 1648–1815 (2002)
Burbank, Jane, and Frederick Cooper. Empires in World History: Power and the Politics of Difference (2011), Very wide-ranging coverage from Rome to the 1980s; 511pp
Cotterell, Arthur. Western Power in Asia: Its Slow Rise and Swift Fall, 1415–1999 (2009), popular history
Dodge, Ernest S. Islands and Empires: Western Impact on the Pacific and East Asia (1976)
Furber, Holden. Rival Empires of Trade in the Orient, 1600–1800 (1976)
Hodge, Carl Cavanagh, ed. Encyclopedia of the Age of Imperialism, 1800–1914 (2 vol. 2007), Focus on European leaders
Langer, William. An Encyclopedia of World History (5th ed. 1973), very detailed outline; 6th edition ed. by Peter Stearns (2001) has more detail on Third World
McAlister, Lyle N. Spain and Portugal in the New World, 1492–1700 (1984)
Ness, Immanuel and Zak Cope, eds. The Palgrave Encyclopedia of Imperialism and Anti-Imperialism (2 vol 2015), 1456pp
Osterhammel, Jürgen: Colonialism: A Theoretical Overview, Princeton, NJ: M. Wiener, 1997.
Page, Melvin E. ed. Colonialism: An International Social, Cultural, and Political Encyclopedia (3 vol. 2003); vol. 3 consists of primary documents; vol. 2 pages 647–831 has a detailed chronology
Porter, Andrew. European Imperialism, 1860–1914 (1996), Brief survey focuses on historiography
Roberts, Stephen H. History of French Colonial Policy (1870–1925) (2 vol 1929); comprehensive scholarly history
Savelle, Max. Empires to Nations: Expansion in America, 1713–1824 (1975)
Smith, Tony. The Pattern of Imperialism: The United States, Great Britain and the Late-Industrializing World Since 1815 (1981)
Townsend, Mary Evelyn. European colonial expansion since 1871 (1941).
Wilson, Henry. The Imperial Experience in Sub-Saharan Africa since 1870 (1977)
Socialization
In sociology, socialization (also spelled socialisation) is the process of internalizing the norms and ideologies of society. Socialization encompasses both learning and teaching and is thus "the means by which social and cultural continuity are attained".
Socialization is strongly connected to developmental psychology. Humans need social experiences to learn their culture and to survive.
Socialization essentially represents the whole process of learning throughout the life course and is a central influence on the behavior, beliefs, and actions of adults as well as of children.
Socialization may lead to desirable outcomes—sometimes labeled "moral"—as regards the society where it occurs. Individual views are influenced by the society's consensus and usually tend toward what that society finds acceptable or "normal". However, socialization provides only a partial explanation for human beliefs and behaviors, as agents are not blank slates predetermined by their environment; scientific research provides evidence that people are shaped by both social influences and genes.
Genetic studies have shown that a person's environment interacts with their genotype to influence behavioral outcomes.
It is the process by which individuals learn their own society's culture.
History
Notions of society and the state of nature have existed for centuries. In its earliest usages, socialization was simply the act of socializing or another word for socialism. Socialization as a concept originated concurrently with sociology, as sociology was defined as the treatment of "the specifically social, the process and forms of socialization, as such, in contrast to the interests and contents which find expression in socialization". In particular, socialization consisted of the formation and development of social groups, and also the development of a social state of mind in the individuals who associate. Socialization is thus both a cause and an effect of association. The term was relatively uncommon before 1940, but became popular after World War II, appearing in dictionaries and scholarly works such as the theory of Talcott Parsons.
Stages of moral development
Lawrence Kohlberg studied moral reasoning and developed a theory of how individuals come to judge situations as right or wrong. The first stage is the pre-conventional stage, in which a person (typically a child) experiences the world in terms of pain and pleasure, with moral decisions solely reflecting this experience. Second, the conventional stage (typical for adolescents and adults) is characterized by an acceptance of society's conventions concerning right and wrong, even when there are no consequences for obedience or disobedience. Finally, the post-conventional stage (more rarely achieved) occurs if a person moves beyond society's norms to consider abstract ethical principles when making moral decisions.
Stages of psychosocial development
Erik H. Erikson (1902–1994) explained the challenges encountered throughout the life course. The first stage in the life course is infancy, where babies learn trust and mistrust. The second stage is toddlerhood, where children around the age of two struggle with the challenge of autonomy versus doubt. In stage three, preschool, children struggle to understand the difference between initiative and guilt. In stage four, pre-adolescence, children learn about industriousness and inferiority. In the fifth stage, adolescence, teenagers experience the challenge of gaining identity versus confusion. The sixth stage, young adulthood, is when young people gain insight into life when dealing with the challenge of intimacy and isolation. In stage seven, or middle adulthood, people experience the challenge of trying to make a difference (versus self-absorption). In the final stage, stage eight or old age, people are still learning about the challenge of integrity and despair. This concept has been further developed by Klaus Hurrelmann and Gudrun Quenzel using the dynamic model of "developmental tasks".
Behaviorism
George Herbert Mead (1863–1931) developed a theory of social behaviorism to explain how social experience develops an individual's self-concept. Mead's central concept is the self: it is composed of self-awareness and self-image. Mead claimed that the self is not there at birth; rather, it is developed through social experience. Since social experience is the exchange of symbols, people tend to find meaning in every action. Seeking meaning leads us to imagine the intention of others. Understanding intention requires imagining the situation from the other's point of view. In effect, others are a mirror in which we can see ourselves. Charles Horton Cooley (1864–1929) coined the term looking-glass self, meaning a self-image based on how we think others see us. According to Mead, the key to developing the self is learning to take the role of the other. With limited social experience, infants can only develop a sense of identity through imitation. Gradually children learn to take the roles of several others. The final stage is the generalized other, which refers to the widespread cultural norms and values we use as a reference for evaluating others.
Contradictory evidence to behaviorism
Behaviorism claims that infants are born lacking social experience or a self. The social pre-wiring hypothesis, by contrast, offers evidence from scientific studies that social behavior is partly inherited and can be observed in infants and even in foetuses. Being "wired to be social" means that infants are not taught that they are social beings, but are born as prepared social beings.
The social pre-wiring hypothesis, also informally referred to as being "wired to be social", concerns the ontogeny of social interaction. The theory asks whether a propensity for socially oriented action is already present before birth. Research on the hypothesis concludes that newborns are born into the world with a unique genetic wiring to be social.
Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed when examining newborns' behavior. Newborns, not even hours after birth, have been found to display a preparedness for social interaction. This preparedness is expressed in ways such as their imitation of facial gestures. This observed behavior cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit, to some extent, social behavior and identity through genetics.
Principal evidence for this theory is uncovered by examining twin pregnancies. The main argument is that, if there are social behaviors that are inherited and developed before birth, then one should expect twin foetuses to engage in some form of social interaction before they are born. Thus, ten foetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the experiment found that the twin foetuses interacted with each other for longer periods and more often as the pregnancies went on. Researchers were able to conclude that the movements performed between the co-twins were not accidental but specifically aimed.
The researchers concluded that their findings supported the social pre-wiring hypothesis: "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin foetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behavior: when the context enables it, as in the case of twin foetuses, other-directed actions are not only possible but predominant over self-directed actions."
Types
Primary socialization
Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. Primary socialization for a child is very important because it sets the groundwork for all future socialization. It is mainly influenced by immediate family and friends. For example, if a child's mother expresses a discriminatory opinion about a minority or majority group, then that child may think this behavior is acceptable and could continue to have this opinion about that minority or majority group.
Secondary socialization
Secondary socialization refers to the process of learning what is the appropriate behavior as a member of a smaller group within the larger society. Basically, it involves the behavioral patterns reinforced by socializing agents of society. Secondary socialization takes place outside the home. It is where children and adults learn how to act in a way that is appropriate for the situations they are in. Schools require very different behavior from the home, and children must act according to new rules. New teachers have to act in a way that is different from pupils and learn the new rules from people around them. Secondary socialization is usually associated with teenagers and adults and involves smaller changes than those occurring in primary socialization. Examples of secondary socialization may include entering a new profession or relocating to a new environment or society.
Anticipatory socialization
Anticipatory socialization refers to the processes of socialization in which a person "rehearses" for future positions, occupations, and social relationships. For example, a couple might move in together before getting married in order to try out, or anticipate, what living together will be like. Research by Kenneth J. Levine and Cynthia A. Hoffner identifies parents as the main source of anticipatory socialization in regard to jobs and careers.
Resocialization
Resocialization refers to the process of discarding former behavior-patterns and reflexes while accepting new ones as part of a life transition. This can occur throughout the human life-span. Resocialization can be an intense experience, with individuals experiencing a sharp break with their past, as well as a need to learn and be exposed to radically different norms and values. One common example involves resocialization through a total institution, or "a setting in which people are isolated from the rest of society and manipulated by an administrative staff". Resocialization via total institutions involves a two-step process: 1) the staff work to root out a new inmate's individual identity; and 2) the staff attempt to create for the inmate a new identity.
Other examples include the experiences of a young person leaving home to join the military, or of a religious convert internalizing the beliefs and rituals of a new faith. Another example would be the process by which a transsexual person learns to function socially in a dramatically altered gender-role.
Organizational socialization
Organizational socialization is the process whereby an employee learns the knowledge and skills necessary to assume his or her role in an organization. As newcomers become socialized, they learn about the organization and its history, values, jargon, culture, and procedures. Acquired knowledge about new employees' future work-environment affects the way they are able to apply their skills and abilities to their jobs. How actively engaged the employees are in pursuing knowledge affects their socialization process. New employees also learn about their work group, the specific people they will work with on a daily basis, their own role in the organization, the skills needed to do their job, and both formal procedures and informal norms. Socialization functions as a control system in that newcomers learn to internalize and obey organizational values and practices.
Group socialization
Group socialization is the theory that an individual's peer groups, rather than parental figures, become the primary influence on personality and behavior in adulthood. Parental behavior and the home environment have either no effect on the social development of children, or an effect that varies significantly between children. Adolescents spend more time with peers than with parents; therefore, peer groups show stronger correlations with personality development than parental figures do. For example, twin brothers with an identical genetic heritage will differ in personality because they have different groups of friends, not necessarily because their parents raised them differently. Behavioral genetics suggests that up to fifty percent of the variance in adult personality is due to genetic differences. The environment in which a child is raised accounts for only approximately ten percent of the variance in an adult's personality, and as much as twenty percent of the variance is due to measurement error. This suggests that only a very small part of an adult's personality is influenced by factors that parents control (i.e., the home environment). Judith Rich Harris, who proposed the theory, grants that while siblings do not have identical experiences in the home environment (making it difficult to associate a definite figure with the variance of personality due to home environments), the variance found by current methods is so low that researchers should look elsewhere to try to account for the remaining variance. Harris also states that developing long-term personality characteristics away from the home environment would be evolutionarily beneficial because future success is more likely to depend on interactions with peers than on interactions with parents and siblings. Also, because of already existing genetic similarities with parents, developing personalities outside of childhood home environments would further diversify individuals, increasing their evolutionary success.
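As a rough illustration of the arithmetic behind these figures, the claim can be read as a variance decomposition of adult personality; the exact partition below, and the attribution of the residual term to peer influence, are assumptions made for this sketch rather than figures taken directly from the theory:

\[
\underbrace{\sigma^2_{\text{personality}}}_{100\%} \;\approx\; \underbrace{\sigma^2_{\text{genetic}}}_{\le 50\%} \;+\; \underbrace{\sigma^2_{\text{home environment}}}_{\approx 10\%} \;+\; \underbrace{\sigma^2_{\text{measurement error}}}_{\le 20\%} \;+\; \underbrace{\sigma^2_{\text{residual}}}_{\approx 20\text{--}40\%}
\]

On this reading, the residual term is the portion that group socialization theory attributes chiefly to peer-group and other non-parental influences rather than to parenting.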
Stages
Individuals and groups change their evaluations of and commitments to each other over time. There is a predictable sequence of stages that occur as an individual transitions through a group: investigation, socialization, maintenance, resocialization, and remembrance. During each stage, the individual and the group evaluate each other, which leads to an increase or decrease in commitment to socialization. This socialization pushes the individual from prospective to new, full, marginal, and ex member.
Stage 1: Investigation
This stage is marked by a cautious search for information. The individual compares groups in order to determine which one will fulfill their needs (reconnaissance), while the group estimates the value of the potential member (recruitment). The end of this stage is marked by entry to the group, whereby the group asks the individual to join and they accept the offer.
Stage 2: Socialization
Now that the individual has moved from a prospective member to a new member, the recruit must accept the group's culture. At this stage, the individual accepts the group's norms, values, and perspectives (assimilation), and the group may adapt to fit the new member's needs (accommodation). The acceptance transition-point is then reached and the individual becomes a full member. However, this transition can be delayed if the individual or the group reacts negatively. For example, the individual may react cautiously or misinterpret other members' reactions in the belief that they will be treated differently as a newcomer.
Stage 3: Maintenance
During this stage, the individual and the group negotiate what contribution is expected of members (role negotiation). While many members remain in this stage until the end of their membership, some individuals may become dissatisfied with their role in the group or fail to meet the group's expectations (divergence).
Stage 4: Resocialization
If the divergence point is reached, the former full member takes on the role of a marginal member and must be resocialized. There are two possible outcomes of resocialization: the parties resolve their differences and the individual becomes a full member again (convergence), or the group and the individual part ways via expulsion or voluntary exit.
Stage 5: Remembrance
In this stage, former members reminisce about their memories of the group and make sense of their recent departure. If the group reaches a consensus on their reasons for departure, conclusions about the overall experience of the group become part of the group's tradition.
Gender socialization
Henslin contends that "an important part of socialization is the learning of culturally defined gender roles". Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex: boys learn to be boys and girls learn to be girls. This "learning" happens by way of many different agents of socialization. The behavior that is seen to be appropriate for each gender is largely determined by societal, cultural, and economic values in a given society. Gender socialization can therefore vary considerably among societies with different values. The family is certainly important in reinforcing gender roles, but so are groups - including friends, peers, school, work, and the mass media. Social groups reinforce gender roles through "countless subtle and not so subtle ways". In peer-group activities, stereotypic gender-roles may also be rejected, renegotiated, or artfully exploited for a variety of purposes.
Carol Gilligan compared the moral development of girls and boys in her theory of gender and moral development. She claimed that boys have a justice perspective - meaning that they rely on formal rules to define right and wrong. Girls, on the other hand, have a care-and-responsibility perspective, where personal relationships are considered when judging a situation. Gilligan also studied the effect of gender on self-esteem. She claimed that society's socialization of females is the reason why girls' self-esteem diminishes as they grow older. Girls struggle to regain their personal strength when moving through adolescence as they have fewer female teachers and most authority figures are men.
As parents are present in a child's development from the beginning, their influence on a child's early socialization is very important, especially in regard to gender roles. Sociologists have identified four ways in which parents socialize gender roles in their children: shaping gender-related attributes through toys and activities, varying their interactions with children based on the sex of the child, serving as primary gender models, and communicating gender ideals and expectations.
Sociologist of gender R. W. Connell contends that socialization theory is "inadequate" for explaining gender because it presumes a largely consensual process except for a few "deviants", when in fact most children revolt against pressures to be conventionally gendered; because it cannot explain the contradictory "scripts" that come from different socialization agents in the same society; and because it does not account for conflict between the different levels of an individual's gender (and general) identity.
Racial socialization
Racial socialization, or racial-ethnic socialization, has been defined as "the developmental processes by which children acquire the behaviors, perceptions, values, and attitudes of an ethnic group, and come to see themselves and others as members of the group". The existing literature conceptualizes racial socialization as having multiple dimensions. Researchers have identified five dimensions that commonly appear in the racial socialization literature: cultural socialization, preparation for bias, promotion of mistrust, egalitarianism, and other. Cultural socialization, sometimes referred to as "pride development", refers to parenting practices that teach children about their racial history or heritage.
Preparation for bias refers to parenting practices focused on preparing children to be aware of, and cope with, discrimination. Promotion of mistrust refers to the parenting practices of socializing children to be wary of people from other races. Egalitarianism refers to socializing children with the belief that all people are equal and should be treated with common humanity. In the United States, white people are socialized to perceive race as a zero-sum game and a black-white binary.
Oppression socialization
Oppression socialization refers to the process by which "individuals develop understandings of power and political structure, particularly as these inform perceptions of identity, power, and opportunity relative to gender, racialized group membership, and sexuality". This action is a form of political socialization in its relation to power and the persistent compliance of the disadvantaged with their oppression using limited "overt coercion".
Language socialization
Based on comparative research in different societies, and focusing on the role of language in child development, linguistic anthropologists Elinor Ochs and Bambi Schieffelin have developed the theory of language socialization.
They discovered that the processes of enculturation and socialization do not occur apart from the process of language acquisition, but that children acquire language and culture together in what amounts to an integrated process. Members of all societies socialize children both to and through the use of language; acquiring competence in a language, the novice is by the same token socialized into the categories and norms of the culture, while the culture, in turn, provides the norms of the use of language.
Planned socialization
Planned socialization occurs when other people take actions designed to teach or train others. This type of socialization can take on many forms and can occur at any point from infancy onward.
Natural socialization
Natural socialization occurs when infants and youngsters explore, play and discover the social world around them. Natural socialization is easily seen when looking at the young of almost any mammalian species (and some birds).
On the other hand, planned socialization is mostly a human phenomenon; all through history, people have made plans for teaching or training others. Both natural and planned socialization can have good and bad qualities: it is useful to learn the best features of both natural and planned socialization in order to incorporate them into life in a meaningful way.
Political socialization
Socialization shapes the economic, social, and political development of a country, and the balance struck between nature and nurture also influences whether its effects on society are beneficial or harmful. Political socialization is described as "the long developmental process by which an infant (even an adult) citizen learns, imbibes and ultimately internalizes the political culture (core political values, beliefs, norms and ideology) of his political system in order to make him a more informed and effective political participant."
A society's political culture is inculcated in its citizens and passed down from one generation to the next as part of the political socialization process. Agents of socialization are thus the people, organizations, or institutions that influence how people perceive themselves, how they behave, and their broader orientations. In contemporary democratic government, political parties are the main forces behind political socialization.
Socialization also facilitates business, trade, and foreign investment globally, and the development of technology is eased and improved by the connections it creates across services and media. Citizens must cultivate sound morals, ethics, and values, preserve human rights, and exercise sound judgment in order to lead a country to a higher level of development and to construct a decent, democratic society for nation-building. Through socialization, developing nations can also acquire agricultural technology and machinery, such as tractors, harvesters, and agrochemicals, to strengthen the agricultural sector of the economy.
Positive socialization
Positive socialization is the type of social learning that is based on pleasurable and exciting experiences. Individual humans tend to like the people who fill their social learning processes with positive motivation, loving care, and rewarding opportunities. Positive socialization occurs when desired behaviors are reinforced with a reward, encouraging the individual to continue exhibiting similar behaviors in the future.
Negative socialization
Negative socialization occurs when socialization agents use punishment, harsh criticism, or anger to try to "teach us a lesson"; often we come to dislike both negative socialization and the people who impose it on us. There are all sorts of mixes of positive and negative socialization, and the more positive social learning experiences we have, the happier we tend to be—especially if we are able to learn useful information that helps us cope well with the challenges of life. A high ratio of negative to positive socialization can make a person unhappy, leading to defeated or pessimistic feelings about life.
Bullying can exemplify negative socialization.
Institutions
In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of individuals within a given human collectivity. Institutions are identified with a social purpose and permanence, transcending individual human lives and intentions, and with the making and enforcing of rules governing cooperative human behavior.
Productive processing of reality
From the late 1980s, sociological and psychological theories have been connected with the term socialization. One example of this connection is the theory of Klaus Hurrelmann. In his book Social Structure and Personality Development, he develops the model of productive processing of reality. The core idea is that socialization refers to an individual's personality development. It is the result of the productive processing of interior and exterior realities. Bodily and mental qualities and traits constitute a person's inner reality; the circumstances of the social and physical environment embody the external reality. Reality processing is productive because human beings actively grapple with their lives and attempt to cope with the attendant developmental tasks. The success of such a process depends on the personal and social resources available. Incorporated within all developmental tasks is the necessity to reconcile personal individuation and social integration and so secure the "I-dentity". The process of productive processing of reality is an enduring process throughout the life course.
Oversocialization
The problem of order, or Hobbesian problem, questions the existence of social orders and asks if it is possible to oppose them. Émile Durkheim viewed society as an external force controlling individuals through the imposition of sanctions and codes of law. However, constraints and sanctions also arise internally as feelings of guilt or anxiety.
See also
References
Autoethnography
Autoethnography is a form of ethnographic research in which a researcher connects personal experiences to wider cultural, political, and social meanings and understandings. It is considered a form of qualitative and/or arts-based research.
Autoethnography has been used across various disciplines, including anthropology, arts education, communication studies, education, educational administration, English literature, ethnic studies, gender studies, history, human resource development, marketing, music therapy, nursing, organizational behavior, paramedicine, performance studies, physiotherapy, psychology, social work, sociology, and theology and religious studies.
Definitions
Historically, researchers have had trouble reaching a consensus regarding the definition of autoethnography. Whereas some scholars situate autoethnography within the family of narrative methods, others place it within the ethnographic tradition. However, it generally refers to research that involves critical observation of the researcher's lived experiences and connects those experiences to broader cultural, political, and social concepts.
Autoethnography can refer to research in which a researcher reflexively studies a group they belong to or their subjective experience. In the 1970s, autoethnography was more narrowly defined as "insider ethnography," referring to studies of the (culture of) a group of which the researcher is a member.
According to Adams et al., autoethnography
uses a researcher's personal experience to describe and critique cultural beliefs, practices, and experiences;
acknowledges and values a researcher's relationships with others;
uses deep and careful self-reflection—typically referred to as “reflexivity”—to name and interrogate the intersections between self and society, the particular and the general, the personal and the political;
shows people in the process of figuring out what to do, how to live, and the meaning of their struggles;
balances intellectual and methodological rigor, emotion, and creativity; and
strives for social justice and to make life better.
Bochner and Ellis have also defined autoethnography as "an autobiographical genre of writing and research that displays multiple layers of consciousness, connecting the personal to the cultural." They further indicate that autoethnography is typically written in first-person and can "appear in a variety of forms," such as "short stories, poetry, fiction, novels, photographic essays, personal essays, journals, fragmented and layered writing, and social science prose."
History
Mid-1800s
Anthropologists began conducting ethnographic research in the mid-1800s to study the cultures of people they deemed "exotic" and/or "primitive." Typically, these early ethnographers aimed merely to observe and write "objective" accounts of these groups to provide others a better understanding of various cultures. They also "recognized and wrestled with questions of how to render textual accounts that would provide clear, accurate, rich descriptions of cultural practices of others" and "were concerned with offering valid, reliable, and objective interpretations in their writings."
Early- to mid-1900s
In the early to mid 1900s, it became clear that observation and fieldwork interfered with the cultural groups' natural and typical behaviors. Additionally, researchers realized the role they play in analyzing others' behaviors. As such, "serious questions arose about the possibility and legitimacy of offering purely objective accounts of cultural practices, traditions, symbols, meanings, premises, rituals, rules, and other social engagements."
To help combat potential issues of validity, ethnographers began using what Gilbert Ryle refers to as thick description: a description of human social behavior in which the writer-researcher describes the behavior and provides "commentary on, context for, and interpretation of these behaviors into the text." By doing so, the researcher aims to "evoke a cultural scene vividly, in detail, and with care," so readers can understand and attempt to interpret the scene for themselves, much like in more traditional research methods.
A few ethnographers, especially those related to the Chicago school, began incorporating aspects of autoethnography into their work, such as narrated life histories. While they created more lifelike representations of their subjects than their predecessors, these researchers often "romanticized the subject" by creating narratives with "the three stages of the classic morality tale: being in a state of grace, being seduced by evil and falling from grace, and finally achieving redemption through suffering." Such researchers include Robert Park, Nels Anderson, Everett Hughes, and Fred Davis.
During this time period, new theoretical constructs, such as feminism, began to emerge and with it, grew qualitative research. However, researchers were trying to "fit the classical traditional model of internal and external validity to constructionist and interactionist conceptions of the research act."
1970s
With the growth of qualitative research from the mid-1900s, "a few scholars were urging thicker descriptions, giving more attention to concrete details of everyday life, renouncing the ethics and artificiality of experimental studies, and complaining about the obscurity of jargon and technical language, ... but social scientists, for the most part, weren't all that concerned about the researcher's location in the text, the capacity of language to accurately represent reality, or the need for researcher reflexivity."
The term autoethnography was first used in 1975, when Heider connected individuals' personal experiences to larger, cultural beliefs and traditions. In Heider's case, the individual self referred to the people he was studying rather than himself. Because the people he studied were providing their personal accounts and experiences, Heider considered the work autoethnographic.
Later in the 1970s, researchers began more clearly stating their positionality and indicating how their mere presence altered the behaviors of the groups they studied. Further, researchers distinguished between people who researched groups of which they were a part (i.e., cultural insiders) and those who researched groups of which they were not a part (i.e., cultural outsiders). At this point, the term autoethnography began to refer to forms of ethnography in which the researcher is a cultural insider.
Walter Goldschmidt proposed that all ethnography is, in some way, autobiographical, because "ethnographic representations privilege personal beliefs, perspectives, and observations." As an anthropologist, David Hayano was interested in the role that an individual's own identity had in their research. Unlike more traditional research methods, Hayano believed there was value in a researcher "conducting and writing ethnographies of their own people."
While researchers recognized the part they played in understanding a group of people, none focused explicitly on the "inclusion and importance of personal experience in research."
1980s
More generally in the 1980s, researchers began questioning and critiquing the role of the researcher, especially in the social sciences. Multiple researchers aimed to make "research and writing more reflexive and called into question the issues of gender, class, and race." As a result of these concerns, researchers purposefully inserted themselves as characters in the ethnographic narrative as a way of navigating the problem of researcher interference. Additionally, some of the predominant ways of understanding truth were eroded, and "[i]ssues such as validity, reliability, and objectivity ... were once again problematic. Pattern and interpretive theories, as opposed to causal linear theories, were now more common as writers continued to challenge older models of truth and meaning."
In addition to and perhaps because of the above, researchers became interested in the importance of culture and storytelling as they gradually became more engaged with the personal aspects of ethnographic practice.
In 1988, John Van Maanen noted three predominant ways ethnographers write about culture:
Realist Tales, in which the researcher uses a "dispassionate, third-person voice" and attempts to provide an "accurate" and "objective" account of the group studied without providing much researcher response
Confessional Tales, which include the researchers' "highly personalized styles" and responses to the observed data
Impressionist Tales, in which the researcher uses first-person to craft a "tightly focused, vibrant, exact, but necessarily imaginative rendering of fieldwork"
At the end of the 1980s, scholars began to apply the term autoethnography to work that used confessional and impressionist forms as they recognized that "the richness of cultural lives and life practices of others cannot be fully captured or evoked in purely objective or descriptive language."
1990s to present
In the early- to mid-1990s, researchers aimed to address the concerns raised in the previous decades regarding questions of legitimacy and reliability of ethnographic approaches. One way to do that was to directly place oneself into the research narrative, noting the positionality of the researcher. Here, the researcher could insert themselves into the research narrative and/or increase participants' involvement in the research project, such as through participatory action research.
Autoethnography became more popular in the 1990s for ethnographers who aimed to use "personal experience and reflexivity to examine cultural experiences." Series such as Ethnographic Alternatives and the first Handbook of Qualitative Research were published to better explain the importance of autoethnography, and key texts focused specifically on autoethnography were published, including Carolyn Ellis's Investigating Subjectivity, Final Negotiations, The Ethnographic I, and Revision, as well as Art Bochner's Coming to Narrative. In 2013, Tony Adams, Stacy Holman Jones, and Carolyn Ellis co-edited the first edition of the Handbook of Autoethnography. They published Autoethnography in 2015 and the second edition of the Handbook of Autoethnography in 2022. In 2020, Adams and Andrew Herrmann started the Journal of Autoethnography with the University of California Press. In 2021, Marlen Harrison started The Autoethnographer, a Literary & Arts Magazine.
In the 2000s, major conferences began to regularly accept autoethnographic work, starting primarily with the International Congress of Qualitative Inquiry (2005). Other conferences that foreground autoethnographic research include the International Symposium on Autoethnography and Narrative (formerly Doing Autoethnography), the International Conference of Autoethnography (formerly British Autoethnography), and Critical Autoethnography.
Today, ethnographers typically use a "kind of hybrid form of confessional-impressionist tale" that includes "performative, poetic, impressionistic, symbolic, and lyrical language" while also "focusing closely on the self-data inherent in confessional writing."
Epistemological and theoretical basis
Autoethnography differs from ethnography in that autoethnography embraces and foregrounds the researcher's subjectivity rather than attempting to limit it, as in empirical research. As Carolyn Ellis explains, "autoethnography overlaps art and science; it is part auto or self and part ethno or culture." Importantly, it is also "something different from both of them, greater than its parts." In other words, as Ellingson and Ellis put it, "whether we call a work an autoethnography or an ethnography depends as much on the claims made by authors as anything else."
In embracing personal thoughts, feelings, stories, and observations as a way of understanding the social context they are studying, autoethnographers are also shedding light on their total interaction with that setting by making their every emotion and thought visible to the reader. This is much the opposite of theory-driven, hypothesis-testing research methods that are based on the positivist epistemology. In this sense, Ellingson and Ellis see autoethnography as a social constructionist project that rejects the deep-rooted binary oppositions between the researcher and the researched, objectivity and subjectivity, process and product, self and others, art and science, and the personal and the political.
Autoethnographers, therefore, tend to reject the concept of social research as an objective and neutral knowledge produced by scientific methods, which can be characterized and achieved by detachment of the researcher from the researched. Autoethnography, in this regard, is a critical "response to the alienating effects on both researchers and audiences of impersonal, passionless, abstract claims of truth generated by such research practices and clothed in exclusionary scientific discourse." Deborah Reed-Danahay (1997) also argues that autoethnography is a postmodernist construct:
The concept of autoethnography...synthesizes both a postmodern ethnography, in which the realist conventions and objective observer position of standard ethnography have been called into question, and a postmodern autobiography, in which the notion of the coherent, individual self has been similarly called into question. The term has a double sense - referring either to the ethnography of one's own group or to autobiographical writing that has ethnographic interest. Thus, either a self- (auto-) ethnography or an autobiographical (auto-) ethnography can be signaled by "autoethnography."
Process
As a method, autoethnography combines characteristics of autobiography and ethnography.
To form the autobiographical aspects of the autoethnography, the author will write retroactively and selectively about past experiences. Unlike other forms of research, the author typically did not live through such experiences solely to create a publishable document; rather, the experiences are assembled using hindsight. Additionally, authors may conduct formal or informal interviews and/or consult relevant texts (e.g., diaries or photographs) to help with recall. The experiences are tied together using literary elements "to create evocative and specific representations of the culture/cultural experience and to give audiences a sense of how being there in the experience feels."
Ethnography, on the other hand, involves observing and writing about culture. During the first stage, researchers will observe and interview individuals of the selected cultural group and take detailed fieldnotes. Ethnographers discover their findings through induction. That is, ethnographers don't go into the field looking for specific answers; rather, their observations, writing, and fieldnotes yield the findings. Such findings are conveyed to others through thick description so that readers may come to their own conclusions regarding the situation described.
Autoethnography uses aspects of autobiography (e.g., personal experiences and recall) and ethnography (e.g., interviews, observations, and fieldnotes) to create vivid descriptions that connect the personal to the cultural.
Types of autoethnography
Because autoethnography is a broad and ambiguous "category that encompasses a wide array of practices," autoethnographies "vary in their emphasis on the writing and research process (graphy), on culture (ethnos), and on self (auto)." More recently, autoethnography has been separated into two distinct subtypes: analytic and evocative. According to Ellingson and Ellis, "Analytic autoethnographers focus on developing theoretical explanations of broader social phenomena, whereas evocative autoethnographers focus on narrative presentations that open up conversations and evoke emotional responses." Scholars also discuss visual autoethnography, which incorporates imagery along with written analysis.
Analytic autoethnography
Analytic autoethnography focuses on "developing theoretical explanations of broader social phenomena" and aligns with more traditional forms of research that value "generalization, distanced analysis, and theory-building."
This form has five key features:
complete member researcher (CMR) status
analytic reflexivity
narrative visibility of the researcher's self
dialogue with informants beyond the self
commitment to theoretical analysis
First, in all forms of autoethnography, the researcher must be a member of the cultural group they study and thus have CMR status. This cultural group may be loosely connected without knowledge of one another (e.g., people with disabilities) or tightly connected (e.g., members of a small church). CMR status helps the researcher "approximate the emotional stance of the people they study," thereby addressing some criticisms of ethnography. Like the evocative autoethnographer, the analytic autoethnographer "is personally engaged in a social group, setting, or culture as a full member and active participant." However, the analytic autoethnographer also "retains a distinct and highly visible identity as a self-aware scholar and social actor within the ethnographic text."
Two CMR status types are recognized: opportunistic and convert. Opportunistic CMRs exist as part of the cultural group they aim to study prior to deciding to research the group. To receive this insider status, the researcher "may be born into a group, thrown into a group by chance circumstance (e.g., illness), or have acquired intimate familiarity through occupational, recreational, or lifestyle participation." Conversely, convert CMRs "begin with a purely data-oriented research interest in the setting but become converted to complete immersion and membership during the course of the research." Here, a researcher will opt to study a cultural group, then become ingrained into that culture throughout the research process.
Second, when conducting analytic autoethnography, the researcher must utilize analytic reflexivity. That is, they must express their "awareness of their necessary connection to the research situation and hence their effects upon it," making themselves "visible, active, and reflexively engaged in the text."
Thirdly and similarly, the researcher should be visibly present throughout the narrative and "should illustrate analytic insights through recounting their own experiences and thoughts as well as those of others." Beyond this, analytic autoethnographers "should openly discuss changes in their beliefs and relationships over the course of fieldwork, thus vividly revealing themselves as people grappling with issues relevant to membership and participation in fluid rather than static social worlds."
Conversely, the fourth concept aims to prevent the text from "author saturation," which centers the author more than the culture being observed. While "analytic autoethnography is grounded in self-experience," it should "[reach] beyond it as well," perhaps including interviews with and/or observations of others who are members of the culture studied. This connection to the culture moves the autoethnography beyond a mere autobiography or memoir.
Lastly, analytic autoethnography should commit to an analytic agenda. That is, the analytic autoethnography should not merely "document personal experience," "provide an 'insider's perspective,'" or "evoke emotional resonance with the reader." Rather, it should "use empirical data to gain insight into some broader set of social phenomena than those provided by the data themselves."
Although Leon Anderson conceptualized analytic autoethnography alongside evocative autoethnography, Anderson critiques the false dichotomy between analytic and evocative autoethnography in his chapter, "I Learn by Going: Autoethnographic Modes of Inquiry" (co-authored with Bonnie Glass-Coffin), the lead chapter in the first edition of the Handbook of Autoethnography.
Evocative autoethnography
Evocative autoethnography "focus[es] on narrative presentations that open up conversations and evoke emotional responses." According to Bochner and Ellis, the goal is for the readers to see themselves in the autoethnographer so they transform private troubles into public plight, making it powerful, comforting, dangerous, and culturally essential. Accounts are presented like novels or biographies and thus, fracture the boundaries that normally separate literature from social science.
Symbiotic autoethnography
Symbiotic Autoethnography (Beattie, 2022) proposes a way of reconciling the differences among various types of autoethnography through an innovative symbiotic approach. The author uses the concept of 'symbiosis' in its broader sense to denote close interdependence and interrelation among its seven suggested attributes: temporality, researcher's omnipresence, evocative storytelling, interpretative analysis, political (transformative) focus, reflexivity, and polyvocality.
Auto-ethnographic Design
Auto-ethnographic design is a materially-oriented practice that ties design research with expression. According to Schouwenberg and Kaethler, "There is a break here between the autoethnographic tradition and how it is taken up in design, where for the ‘graphy’ the act of reporting and reflection is replaced by creative production; design activates the knowledge component by directly engaging and altering the very world it seeks to make sense of". In contrast to other forms of design, auto-ethnographic designs are deeply personal and tend towards the artistic, using materiality as a way of understanding the self and communicating it. The hyphen that separates auto and ethnography represents the materiality that is needed to understand the self. The approach has been critiqued for being excessively navel-gazing.
Minor Literature Autoethnography
Minor Literature Autoethnography (MLA) draws on the concept of 'minor literature' as developed by Deleuze and Guattari, which refers to the use of a major language from a minoritarian perspective to challenge dominant cultural narratives. According to De Jong this type of autoethnography focuses on the experiences of marginalized groups and individuals who use the language of the majority to articulate their unique cultural positions and create new forms of expression. By doing so, minor literature autoethnography aims to reveal and critique power structures and give voice to perspectives that are often silenced or overlooked.
Goals of autoethnography
Adams, Ellis, and Jones recognize two primary purposes for practicing autoethnographic research. Given the complicated history of ethnography, "autoethnographers speak against, or provide alternatives to, dominant, taken-for-granted, and harmful cultural scripts, stories, and stereotypes" and "offer accounts of personal experience to complement, or fill gaps in, existing research." As with other forms of qualitative research, autoethnographic "accounts may show how the desire for, and practice of, generalization in research can mask important nuances of cultural issues."
In addition to providing nuanced accounts of cultural phenomena, Adams, Ellis, and Jones argue that the goal of autoethnography "is to articulate insider knowledge of cultural experience." Underlying this argument is the assumption that "the writer can inform readers about aspects of cultural life that other researchers may not be able to know." Importantly, "[i]nsider knowledge does not suggest that an autoethnographer can articulate more truthful or more accurate knowledge as compared to outsiders, but rather that as authors we can tell our stories in novel ways when compared to how others may be able to tell them."
Uses of autoethnography
Autoethnography is utilized across a variety of disciplines and can be presented in many forms, including but not limited to "short stories, poetry, fiction, novels, photographic essays, personal essays, journals, fragmented and layered writing, and social science prose."
Symbolic interactionists are particularly interested in autoethnography, and examples can be found in a number of scholarly journals, such as Qualitative Inquiry, the Journal of the Society for the Study of Symbolic Interactionism, the Journal of Contemporary Ethnography, and the Journal of Humanistic Ethnography.
In performance studies, autoethnography acknowledges the researcher and the audience as having equal weight. Portraying the performed "self" through writing then becomes an aim to create an embodied experience for the writer and the reader. This area acknowledges the inward and outward experience of ethnography in experiencing the subjectivity of the author. Audience members may experience the work of ethnography through reading/hearing/feeling (inward) and then have a reaction to it (outward), perhaps an emotional one. Ethnography and performance work together to invoke emotion in the reader.
Autoethnography is also used in film as a variant of the standard documentary film. It differs from the traditional documentary film in that its subject is the filmmaker. An autoethnographical film typically relates the life experiences and thoughts, views, and beliefs of the filmmaker, and as such, it is often considered to be rife with bias and image manipulation. Unlike other documentaries, autoethnographies do not usually make a claim of objectivity.
Storyteller/narrator
In different academic disciplines (particularly communication studies and performance studies), the term autoethnography itself is contested and is sometimes used interchangeably with or referred to as personal narrative or autobiography. Autoethnographic methods include journaling, looking at archival records - whether institutional or personal, interviewing one's own self, and using writing to generate self-cultural understandings. Reporting an autoethnography might take the form of a traditional journal article or scholarly book, be performed on stage, or appear in the popular press. Autoethnography can include direct (and participant) observation of daily behavior; unearthing of local beliefs and perception and recording of life history (e.g. kinship, education, etc.); and in-depth interviewing: "The analysis of data involves interpretation on the part of the researcher" (Hammersley in Genzuk). However, rather than a portrait of the Other (person, group, culture), the difference is that the researcher is constructing a portrait of the self.
Autoethnography can also be "associated with narrative inquiry and autobiography" in that it foregrounds experience and story as a meaning-making enterprise. Maréchal argues that "narrative inquiry can provoke identification, feelings, emotions, and dialogue." Furthermore, the increased focus on incorporating autoethnography and narrative inquiry into qualitative research indicates a growing concern for how the style of academic writing informs the types of claims made. As Laurel Richardson articulates, "I consider writing as a method of inquiry, a way of finding out about a topic...form and content are inseparable." For many researchers, experimenting with alternative forms of writing and reporting, including autoethnography, personal narrative, performative writing, layered accounts and writing stories, provides a way to create multiple layered accounts of a research study, offering not only the opportunity to make new and provocative claims but also the ability to do so in a compelling manner. Ellis (2004) says that autoethnographers advocate "the conventions of literary writing and expression" in that "autoethnographic forms feature concrete action, emotion, embodiment, self-consciousness, and introspection portrayed in dialogue, scenes, characterization, and plot" (p. xix).
According to Bochner and Ellis (2006), an autoethnographer is "first and foremost a communicator and a storyteller." In other words, autoethnography "depicts people struggling to overcome adversity" and shows "people in the process of figuring out what to do, how to live, and the meaning of their struggles" (p. 111). Therefore, according to them, autoethnography is "ethical practice" and "gifts" that have a caregiving function (p. 111). In essence, autoethnography is a story that re-enacts an experience through which people find meaning and, through that meaning, are able to come to terms with that experience.
In Mayukh Dewan's opinion, this can be a problem because many readers may see autoethnographers as too self-indulgent; however, the stories and experiences shared are not solely the authors' own but also represent the group being autoethnographically represented.
In this storytelling process, the researcher seeks to make meaning of a disorienting experience. A life example in which autoethnography could be applied is the death of a family member or someone close. In this painful experience, people often wonder how they will go about living without this person and what it will be like. In this scenario, especially in religious homes, one often asks "Why, God?", thinking that an answer as to why the person died will allow them to go on living. Others, wanting to offer an explanation to make the bereaved feel better, generally say things such as "At least they are in a better place" or "God wanted him/her home." People who are never really given an explanation generally fall back on the reason that "it was their time to go," and through this partial explanation find themselves able to move on and keep living. Over time, when looking back at the experience of losing someone close, one may find that through this hardship they became a stronger, more independent person, or that they grew closer to other family members. With these realizations, the person has made sense of and come to terms with the tragic experience. It is through such meaning-making that autoethnography is performed.
Evaluation
The main critique of autoethnography — and qualitative research in general — comes from the traditional social science methods that emphasize the objectivity of social research. In this critique, qualitative researchers are often called "journalists, or soft scientists," and their work, including autoethnography, is "termed unscientific, or only exploratory, or entirely personal and full of bias." Many quantitative researchers regard the materials produced by narrative as "the means by which a narrating subject, autonomous and independent...can achieve authenticity...This represents an almost total failure to use narrative to achieve serious social analysis."
According to Maréchal, the early criticism of autobiographical methods in anthropology was about "their validity on grounds of being unrepresentative and lacking objectivity." She also points out that evocative and emotional genres of autoethnography have been criticized by mostly analytic proponents for their "lack of ethnographic relevance as a result of being too personal." As she writes, they are criticized "for being biased, navel-gazing, self-absorbed, or emotionally incontinent, and for hijacking traditional ethnographic purposes and scholarly contribution."
The reluctance to accept narrative work as serious extends far beyond the realm of academia. In 1994, Arlene Croce refused to evaluate or even attend Bill T. Jones' Still/Here performance. She echoed a quantitative stance towards narrative research by explaining:
I can't review someone I feel sorry or hopeless about...I'm forced to feel sorry because of the way they present themselves as: dissed blacks, abused women, or disenfranchised homosexuals - as performers, in short, who make victimhood victim art
Croce illustrates what Adams, Jones, and Ellis refer to as "illusory boundaries and borders between scholarship and criticism." These "borders" are seen to hide or take away from the idea that autoethnographic evaluation and criticism present another personal story about the experience of an experience. Or as Craig Gingrich-Philbrook wrote, "any evaluation of autoethnography...is simply another story from a highly situated, privileged, empowered subject about something he or she experienced."
Rethinking traditional criteria
In her book's tenth chapter, titled "Evaluating and Publishing Autoethnography" (pp. 252–255), Ellis (2004) discusses how to evaluate an autoethnographic project, based on other authors' ideas about evaluating alternative modes of qualitative research. (See the special section in Qualitative Inquiry on "Assessing Alternative Modes of Qualitative and Ethnographic Research: How Do We Judge? Who Judges?") She presents several criteria for "good autoethnography," and indicates how these ideas resonate with each other.
First, Ellis mentions Richardson, who described five factors she uses when reviewing personal narrative papers, which include analysis of both evaluative and constructive validity techniques. The criteria are:
(a) Substantive contribution. Does the piece contribute to our understanding of social life?
(b) Aesthetic merit. Does this piece succeed aesthetically? Is the text artistically shaped, satisfyingly complex, and not boring?
(c) Reflexivity. How did the author come to write this text? How has the author's subjectivity been both a producer and a product of this text?
(d) Impactfulness. Does this affect me emotionally and/or intellectually? Does it generate new questions or move me to action?
(e) Expresses a reality. Does this text embody a fleshed out sense of lived experience?
Autoethnographic manuscripts might include dramatic recall, unusual phrasing, and strong metaphors to invite the reader to "relive" events with the author. These guidelines may provide a framework for directing investigators and reviewers alike.
Further, Ellis suggests how Richardson's criteria mesh with criteria mentioned by Bochner, who describes what makes him understand and feel with a story (Bochner, 2000, pp. 264–266). He looks for concrete details (similar to Richardson's expression of lived experience), structurally complex narratives (Richardson's aesthetic merit), the author's attempt to dig under the superficial to get to vulnerability and honesty (Richardson's reflexivity), a standard of ethical self-consciousness (Richardson's substantive contribution), and a moving story (Richardson's impact) (Ellis, 2004, pp. 253–254).
In 2015, Adams, Jones, and Ellis collaborated to bring about a similar list of Goals for Assessing Autoethnography. The list encompasses descriptive, prescriptive, practical, and theoretical goals for evaluating autoethnographic work (2015, pp. 102–104).
Make contributions to knowledge
Value the personal and experiential
Demonstrate the power, craft, and responsibilities of stories and storytelling
Take a relationally responsible approach to research practice and representation
Contributions to knowledge
Adams, Jones, and Ellis define the first goal of autoethnography as a conscious effort to "extend existing knowledge and research while recognizing that knowledge is both situated and contested." As Adams explains in his critique of his work Narrating the Closet,
I knew I had to contribute to knowledge about coming out by saying something new about the experience...I also needed a new angle toward coming out; my experience, alone, of coming out was not sufficient to justify a narrative.
With critics' general decree of narrative as narcissism, Adams, Jones, and Ellis use the first goal of assessing autoethnography to explain the importance of striving to combine personal experience and existing theory while remaining mindful of the "insider insight that autoethnography offers researchers, participants, and readers/audiences." Ellis' Maternal Connections can be considered a successful incorporation of the first goal in that she "questions the idea of care-giving as a burden, instead portraying caregiving as a loving and meaning-making relationship."
Value the personal and experiential
Adams, Jones, and Ellis define the second goal for assessing autoethnography with four elements which include featuring the perspective of the self in context and culture, exploring experience as a means of insight about social life, embracing the risks of presenting vulnerable selves in research, and using emotions and bodily experience as means and modes of understanding. This goal fully recognizes and commends the "I" in academic writing and calls for analysis of the subjective experience. In Jones' Lost and Found essay she writes,
I convey the sadness and the joy I feel about my relationships with my adopted child, the child I chose not to adopt, and my grandmother. I focus on the emotions and bodily experiences of both losing and memorializing my grandmother.
The careful and deliberate incorporation of auto (the "I," the self) into research is considered one of the most crucial aspects of the autoethnography process. The exploration of the ethics and care of presenting vulnerable selves is addressed at length by Adams in A Review of Narrative Ethics.
Stories and storytelling
Autoethnography showcases stories as the means in which sensemaking and researcher reflexivity create descriptions and critiques of culture. Adams, Jones, and Ellis write:
Reflexivity includes both acknowledging and critiquing our place and privilege in society and using the stories we tell to break long-held silences on power, relationships, cultural taboos, and forgotten and/or suppressed experiences.
A focus is placed on a writer's ability to develop writing and representation skills alongside other analytic abilities. Adams switches between first-person and second-person narration in Living (In) the Closet: The Time of Being Closeted as a way to "bring readers into my story, inviting them to live my experiences alongside me, feeling how I felt and suggesting how they might, under similar circumstances, act as I did." Similarly, Ellis in Maternal Connections chose to steer away from the inclusion of references to the research literature or theory, instead opting to "call on sensory details, movements, emotions, dialogue, and scene setting to convey an experience of taking care of a parent."
The examples included above are not exhaustive. Autoethnographers exploring different narrative structures can be seen in Andrew Herrmann's use of layered accounts, Ellis' use of haibun, and the use of autoethnographic film by Rebecca Long and Anne Harris.
Addressing veracity and the art of story telling in his 2019 autoethnographic monograph Going All City: Struggle and Survival in LA's Graffiti Subculture, Stefano Bloch writes "I do rely on artful rendering, but not artistic license."
Relationally responsible approach
Among the concepts in qualitative research is "relational responsibility." Researchers should work to make research relationships as collaborative, committed, and reciprocal as possible while taking care to safeguard identities and privacy of participants. Included under this concept is the accessibility of the work to a variety of readers which allows for the "opportunity to engage and improve the lives of our selves, participants, and readers/audiences."
Autoethnographers struggle with relational responsibility as in Adams' critique of his work on coming out and recognizing:
...how others can perceive my ideas as relationally irresponsible concessions to homophobic others and to insidious heteronormative cultural structures; by not being aggressively critical, my work does not do enough to engage and improve the lives of others.
In the critique he also questions how relationally irresponsible he was in including several brief conversations in his work without consent and in exploiting others' experiences for his own benefit. Similar sentiments are echoed throughout Adams, Jones, and Ellis' critiques of their own writing.
From "validity" to "truth"
As an idea that emerged from the tradition of social constructionism and the interpretive paradigm, autoethnography challenges the traditional social scientific methodology that emphasizes the criteria for quality in social research developed in terms of validity. Carolyn Ellis writes, "In autoethnographic work, I look at validity in terms of what happens to readers as well as to research participants and researchers. To me, validity means that our work seeks verisimilitude; it evokes in readers a feeling that the experience described is lifelike, believable, and possible. You also can judge validity by whether it helps readers communicate with others different from themselves or offers a way to improve the lives of participants and readers - or even your own." In this sense, Ellis emphasizes the "narrative truth" for autoethnographic writings.
I believe you should try to construct the story as close to the experience as you can remember it, especially in the initial version. If you do, it will help you work through the meaning and purpose of the story. But it's not so important that narratives represent lives accurately – only, as Art (Arthur Bochner) argues, "that narrators believe they are doing so" (Bochner, 2002, p. 86). Art believes that we can judge one narrative interpretation of events against another, but we cannot measure a narrative against the events themselves because the meaning of the events comes clear only in their narrative expression.
Instead, Ellis suggests judging autoethnographic writings by the usefulness of the story rather than by accuracy alone. She quotes Art Bochner, who argues
that the real question is what narratives do, what consequences they have, to what uses they can be put. Narrative is the way we remember the past, turn life into language, and disclose to ourselves and others the truth of our experiences. In moving from concern with the inner veridicality to outer pragmatics of evaluating stories, Plummer [2001, p. 401] also looks at uses, functions, and roles of stories, and adds that they "need to have rhetorical power enhanced by aesthetic delight" (Ellis, 2004, pp. 126–127).
Similarly,
Laurel Richardson [1997, p. 92] uses the metaphor of a crystal to deconstruct traditional validity. A crystal has an infinite number of shapes, dimensions and angles. It acts as a prism and changes shape, but still has structure. Another writer, Patti Lather [1993, p. 674], proposes counter-practices of authority that rupture validity as a "regime of truth" and lead to a critical political agenda [Cf. Olesen, 2000, p. 231]. She mentions the four subtypes [pp. 685–686]: "ironic validity, concerning the problems of representation; paralogical validity, which honors differences and uncertainties; rhizomatic validity, which seeks out multiplicity; and voluptuous validity, which seeks out ethics through practices of engagement and self-reflexivity" (Ellis, 2004, pp. 124–125).
From "generalizability" to "resonance"
With regard to the term "generalizability," Ellis points out that autoethnographic research seeks generalizability not just from the respondents but also from the readers. Ellis says:
I would argue that a story's generalizability is always being tested – not in the traditional way through random samples of respondents, but by readers as they determine if a story speaks to them about their experience or about the lives of others they know. Readers provide theoretical validation by comparing their lives to ours, by thinking about how our lives are similar and different and the reasons why. Some stories inform readers about unfamiliar people or lives. We can ask, after Stake [1994], "does the story have 'naturalistic generalization'?" meaning that it brings "felt" news from one world to another and provides opportunities for the reader to have vicarious experience of the things told. The focus of generalizability moves from respondents to readers. (p. 195)
This generalizability, achieved through the resonance of readers' lives and "lived experience" (Richardson, 1997) with autoethnographic work, is intended to open up rather than close down conversation (Ellis, 2004, p. 22).
Benefits and concerns
Denzin's criterion is whether the work has the possibility to change the world and make it a better place (Denzin, 2000, p. 256). This position fits with Clough, who argues that good autoethnographic writing should motivate cultural criticism. Autoethnographic writing should be closely aligned with theoretical reflection, says Clough, so that it can serve as a vehicle for thinking "new sociological subjects" and forming "new parameters of the social" (Clough, 2000, p. 290). Though Richardson and Bochner are less overtly political than Denzin and Clough, they indicate that good personal narratives should contribute to positive social change and move us to action (Bochner, 2000, p. 271).
In addition to helping the researcher make sense of his or her individual experience, autoethnographies are political in nature, as they engage their readers in political issues and often ask us to consider things differently or to do things differently. Chang argues that autoethnography offers a research method friendly to researchers and readers because autoethnographic texts are engaging and enable researchers to gain a cultural understanding of self in relation to others, on which cross-cultural coalitions can be built between self and others.
Also, autoethnography as a genre frees us to move beyond traditional methods of writing, promoting narrative and poetic forms, displays of artifacts, photographs, drawings, and live performances (Cons, p. 449). Denzin says autoethnography must be literary, present cultural and political issues, and articulate a politics of hope. The literary criteria he mentions are covered in what Richardson advocates: aesthetic value. Ellis elaborates on this idea, describing good autoethnographic writing as writing in which, through plot, dramatic tension, coherence, and verisimilitude, the author shows rather than tells, develops characters and scenes fully, and paints vivid sensory experiences.
While advocating autoethnography for its value, some researchers argue that there are also several concerns about autoethnography. Chang warns autoethnographers of pitfalls that they should avoid in doing autoethnography: (1) excessive focus on self in isolation from others; (2) overemphasis on narration rather than analysis and cultural interpretation; (3) exclusive reliance on personal memory and recall as a data source; (4) negligence of ethical standards regarding others in self-narratives; and (5) inappropriate application of the label autoethnography. Also, some qualitative researchers have expressed their concerns about the worth and validity of autoethnography. Robert Krizek (2003) contributed a chapter titled "Ethnography as the Excavation of Personal Narrative" (pp. 141–152) to the book Expressions of Ethnography in which he expresses concern about the possibility for autoethnography to devolve into narcissism. Krizek goes on to suggest that autoethnography, no matter how personal, should always connect to some larger element of life.
One of the main advantages of personal narratives is that they give us access into learners' private worlds and provide rich data (Pavlenko, 2002, 2007). Another advantage is the ease of access to data, since the researcher calls on his or her own experiences as the source from which to investigate a particular phenomenon. It is this advantage that also entails a limitation, as, by subscribing analysis to a personal narrative, the research is also limited in its conclusions. However, Bochner and Ellis (1996) consider that this limitation on the self is not valid, since, "If culture circulates through all of us, how can autoethnography be free of connection to a world beyond the self?"
Criticisms and concerns
Similar to other forms of qualitative and art-based research, autoethnography has faced many criticisms. As Sparkes stated, "The emergence of autoethnography and narratives of self…has not been trouble-free, and their status as proper research remains problematic."
The most recurrent criticism of autoethnography is of its strong emphasis on self, which is at the core of the resistance to accepting autoethnography as a valuable research method. Thus, autoethnographies have been criticised for being self-indulgent, narcissistic, introspective and individualised.
Another criticism is of the reality personal narratives or autoethnographies represent. As Geoffrey Walford states, "If people wish to write fiction, they have every right to do so, but not every right to call it research." This criticism originates from a statement by Ellis and Bochner (2000), conceiving autoethnography as a narrative that "is always a story about the past and not the past itself." To this, Walford asserts that "the aim of research is surely to reduce the distortion as much as possible." Walford's concerns are focused on how much of the accounts presented as autoethnographies represent real conversations or events as they happened and how much they are just inventions of the authors.
Evaluation
Several critiques exist regarding the evaluation of autoethnographical works grounded in the interpretive paradigm.
From within qualitative research, some researchers have posited that autoethnographers, along with others, fail to meet positivist standards of validity and reliability. Schwandt, for instance, argues that some social researchers have "come to equate being rational in social science with being procedural and criteriological." Building on quantitative foundations, Lincoln and Guba translate quantitative indicators into qualitative quality indicators, namely: credibility (parallels internal validity), transferability (parallels external validity), dependability (parallels reliability), and confirmability (parallels objectivity and seeks to critically examine whether the researcher has acted in good faith during the course of the research). Smith, and Smith and Heshusius, critique these qualitative translations and warn that the claim of compatibility (between qualitative and quantitative criteria) cannot be sustained, and that by making such claims, researchers are in effect closing down the conversation. Smith points out that "the assumptions of interpretive inquiry are incompatible with the desire for foundational criteria. How we are to work out this problem, one way or another, would seem to merit serious attention."
Secondly, some other researchers question the need for specific criteria itself. Bochner and Clough are both concerned that too much emphasis on criteria will move us back to methodological policing and will take us away from a focus on imagination, ethical issues in autoethnographic work, and creating better ways of living. The autoethnographer internally judges the quality of the work. Evidence is tacit, individualistic, and subjective (see Ellis & Bochner, 2003). Practice-based quality is based in the lived research experience itself rather than in its formal evidencing per se. Bochner says:
Self-narratives... are not so much academic as they are existential, reflecting a desire to grasp or seize the possibilities of meaning, which is what gives life its imaginative and poetic qualities... a poetic social science does not beg the question of how to separate good narrativization from bad... [but] the good ones help the reader or listener to understand and feel the phenomena under scrutiny.
Finally, in addition to this anti-criteria stance of some researchers, some scholars have suggested that the criteria used to judge autoethnography should not necessarily be the same as traditional criteria used to judge other qualitative research investigations (Garratt & Hodkinson, 1999). They argue that autoethnography has been received with a significant degree of academic suspicion because it contravenes certain qualitative research traditions. The controversy surrounding autoethnography is in part related to the problematic exclusive use of the self to produce research (Denzin & Lincoln, 1994). This use of self as the only data source in autoethnography has been questioned (see, for example, Denzin & Lincoln, 1994; Sparkes, 2000; Beattie, 2022). Accordingly, autoethnographies have been criticized for being too self-indulgent and narcissistic. Sparkes (2000) suggested that autoethnography is at the boundaries of academic research because such accounts do not sit comfortably with traditional criteria used to judge qualitative inquiries.
Holt associates this problem with two crucial issues in "the fourth moment of qualitative research" that Denzin and Lincoln (2000) presented: the dual crises of representation and legitimation. The crisis of representation refers to the writing practices (i.e., how researchers write and represent the social world). Additionally, verification issues relating to methods and representation are (re)considered as problematic (Marcus & Fischer, 1986). The crisis of legitimation questions traditional criteria used for evaluating and interpreting qualitative research, involving a rethinking of terms such as validity, reliability, and objectivity. Holt says:
Much like the autoethnographic texts themselves, the boundaries of research and their maintenance are socially constructed (Sparkes, 2000). In justifying autoethnography as proper research... ethnographers have acted autobiographically before, but in the past they may not have been aware of doing so, and taken their genre for granted (Coffey, 1999). Autoethnographies may leave reviewers in a perilous position.... the reviewers were not sure if the account was proper research (because of the style of representation), and the verification criteria they wished to judge this research by appeared to be inappropriate. Whereas the use of autoethnographic methods may be increasing, knowledge of how to evaluate and provide feedback to improve such accounts appears to be lagging. As reviewers begin to develop ways in which to judge autoethnography, they must resist the temptation to "seek universal foundational criteria lest one form of dogma simply replaces another" (Sparkes, 2002b, p. 223). However, criteria for evaluating personal writing have barely begun to develop.
Notable autoethnographers
Leon Anderson
Liana Beattie
Arthur P. Bochner
Jesse Cornplanter
Kimberly Dark
Norman K. Denzin
Carolyn Ellis
Maaike de Jong
Peter Pitseolak
Ernest Spybuck
Aleksandr Solzhenitsyn
Johnny Saldaña
See also
Layered account
References
Beattie, L. (2022). Symbiotic Autoethnography: Moving Beyond the Boundaries of Qualitative Methodologies. London: Bloomsbury Publishing.
Additional references
Ellis, C. (2001). With Mother/With Child: A True Story. Qualitative Inquiry, 7 (5), 598–616.
Ellis, C. (2009). Revision: Autoethnographic Reflections on Life and Work. Walnut Creek, CA: Left Coast Press.
Herrmann, A. F., & Di Fate, K. (Eds.) (2014). The new ethnography: Goodall, Trujillo, and the necessity of storytelling. Storytelling Self Society: An Interdisciplinary Journal of Storytelling Studies, 10.
Hodges, N. (2015). The Chemical Life. Health Communication, 30, 627–634.
Hodges, N. (2015). The American Dental Dream. Health Communication, 30, 943–950.
Holman Jones, S. (2005). Autoethnography: Making the personal political. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 763–791). Thousand Oaks, CA: Sage Publications.
Holman Jones, S., Adams, T. & Ellis, C. (2013). Handbook of Autoethnography. Walnut Creek CA: Left Coast Press
Krizek, R. (2003). Ethnography as the Excavation of Personal Narrative. In R. P. Clair (Ed.), Expressions of ethnography: novel approaches to qualitative methods (pp. 141–152). New York: SUNY Press.
Plummer, K. (2001). The call of life stories in ethnographic research. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, and L. Lofland (Eds.), Handbook of Ethnography (pp. 395–406). London: Sage.
Richardson, L. (1997). Fields of play: Constructing an academic life. New Brunswick, N. J.: Rutgers University Press.
Richardson, L. (2007). Writing: A method of inquiry. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 923–948). Thousand Oaks, CA: Sage Publications.
Stake, R. E. (1994). Case studies. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 236–247). Thousand Oaks, CA: Sage Publications.
Prehistoric archaeology
Prehistoric archaeology is a subfield of archaeology which deals specifically with artefacts, civilisations and other materials from societies that existed before any form of writing system or historical record. The field often focuses on ages such as the Stone Age, Bronze Age and Iron Age, and it also encompasses subdivisions of these, such as the Neolithic. The study of prehistoric archaeology reflects the cultural concerns of modern society, as its interpretations of the distant past shift with periods of economic growth and political stability. It is related to other disciplines such as geology, biology, anthropology, historiography and palaeontology, although there are noticeable differences in what they broadly study: the past, whether organic or inorganic, or the lives of humans. Prehistoric archaeology is also sometimes termed anthropological archaeology because it works from indirect traces of complex cultural patterns.
Due to the unique nature of prehistoric archaeology, in that written records cannot be drawn upon to aid the study of the societies it focuses on, the subject matter investigated is entirely material remains, as they are the only traceable evidence available. Material evidence includes pottery, burial goods, the remains of individuals and animals such as bones, jewellery and decorative items, as well as many other artefacts. The subfield has existed since at least the late 1820s or early 1830s and is now a fully recognised and separate field of archaeology. Other fields of archaeology include Classical archaeology, Near Eastern archaeology - also known as Biblical archaeology, Historical archaeology, Underwater archaeology and many more, each working to reconstruct our understanding of everything from the ancient past right up until modern times. Unlike continent- and area-specific fields such as Classical archaeology - which studies the Mediterranean region and the civilisations of Ancient Greece and Ancient Rome in antiquity - the field of prehistoric archaeology is not confined to one continent. As such, many excavations attributed to this field have occurred and are occurring all over the world to uncover all different types of settlements and civilisations.
Without history to provide evidence for names, places and motivations, prehistoric archaeologists speak in terms of cultures which can only be given arbitrary modern names relating to the locations of known occupation sites or the artifacts used. It is naturally much easier to discuss societies rather than individuals as these past people are completely anonymous in the archaeological record. Such a lack of concrete information means that prehistoric archaeology is a contentious field and the arguments that range over it have done much to inform archaeological theory.
Origins
Prehistoric archaeology, as with many other fields of archaeology, is attributed to multiple different individuals, so an accurate and definitive timeline for its creation is difficult to present. The origins of prehistoric archaeology were marked by treasure hunting, also known as antiquarianism, and the individuals involved were not focused on scientific enquiry; as such, the point at which the field moved from gathering artefacts to legitimate study is difficult to define. The first mention and use of the word prehistoric in archaeological terms was in the work of Daniel Wilson in 1851, within his book ‘The Archaeology and Prehistoric Annals of Scotland’; another early instance of prehistoric being used archaeologically is within Paul Tournal’s work of 1833, where he uses préhistoire to describe his work in Bize-Minervois in the south of France. The phrase prehistoric archaeology did not officially become an English archaeological term until 1836, and the common three-age system coined by Christian Jürgensen Thomsen, which classifies European prehistoric chronology, was also invented during this year.
Beyond that, there are three other individuals, each representing a different field school, who are thought to have begun the first excavations dedicated solely to the study of prehistoric archaeology at roughly the same time, between the 1820s and 1830s: Boucher de Perthes – a French archaeologist leading the Abbeville field school who uncovered handaxes in the Somme River; Jens Jacob Worsaae – a Danish archaeologist leading the Copenhagen field school who studied stratified assemblages in Denmark; and Giuseppe Scarabelli – an Italian archaeologist leading the Imola field school who studied stratigraphy in Italy.
Beyond these individuals, each continent began to host excavations and field schools exploring its own prehistory across a wide range of dates. Several examples, though not a complete list of every country, are given below by continental area. Within Europe, exploration began in England with several individuals such as amateur archaeologist William Pengelly, who studied Kents Cavern from approximately 1846, while in Turkey excavation of specifically prehistoric archaeological sites did not begin until much later, between the 1980s and 1990s, in the region of Anatolia. In Asia and Australasia, prehistoric archaeology began in China in the 1920s with the work of amateur Swedish archaeologist Johan Gunnar Andersson, who uncovered Homo erectus fossils in the Zhoukoudian cave in southwestern Beijing and excavated Yangshao in Henan; in India excavation began with the work of Captain H. Congreve in 1847; in Japan prehistoric archaeology as defined by European standards began with the work of the German collector Heinrich von Siebold in 1869, although there had been internal interest in prehistoric archaeology since the 1700s; in Vietnam excavation began in 1960; and in Australia the discipline of archaeology was solidified in the 1960s and 70s, which allowed it to expand and include Indigenous peoples and their heritage. In North America, exploration began in the United States in the mid-1800s, with collective excavation interests led by the American Philosophical Society, the American Antiquarian Society, and the Smithsonian Institution; in Canada excavations started in 1935, though it was not until 1965 that the subject fully took off as an important field of archaeology. In South America, excavations in Argentina began between 1880 and 1910. In Africa, investigations into prehistoric civilisations began in roughly the 1960s and 70s, and in the Middle East prehistoric archaeology began explorations in areas such as Iran in 1884, with excavations by the French at Susa.
Purpose
The purpose of prehistoric archaeology is to explore and understand civilisations that existed before writing systems, including the Stone, Bronze and Iron Age societies across the world. In its early origins, prehistoric archaeology focused on gathering artefacts and treasures to display in museums and private collections, often to bring wealth back to the individual. Later, the field allowed for the more legitimate study of material evidence and data in order to ascertain important details such as when a site may have been occupied, who was living there, what activities they may have engaged in, and what happened to them to either cause them to leave or make the area uninhabitable.
Notable Archaeologists
There are multitudes of credible past and emerging prehistoric archaeologists who have dedicated time and effort to honing their craft in order to accurately understand the lives of the individuals and societies they study. Some of the important foundational archaeologists are: Daniel Wilson – a Scottish-born Canadian archaeologist who first brought the term prehistoric into an archaeological context; Paul Tournal – a French amateur archaeologist; Christian Jürgensen Thomsen – a Danish antiquarian whose three-age system aided early European archaeological work; Boucher de Perthes – a French archaeologist who uncovered handaxes in the gravels of the Somme River; Jens Jacob Worsaae – a Danish archaeologist who studied stratified assemblages in Denmark; Giuseppe Scarabelli – an Italian archaeologist who studied stratigraphy in Italy; and William Pengelly – an English archaeologist. Each contributed fundamentally to establishing the subfield by defining how prehistoric archaeology differs from archaeology more broadly.
More modern archaeologists, including Ali Umut Türkcan – who directed the excavations at Çatalhöyük as of 2022 – have helped to further these developments by directing excavations and studying the materials found within sites. Other modern archaeologists include: Paul Bahn – a British archaeologist who studies prehistoric rock art; Richard Bradley – a British archaeologist who specialises in European prehistory with a particular focus on Prehistoric Britain; E. C. L. During Caspers (also known as Elizabeth Christina Louisa During Caspers) – a Dutch archaeologist who studied prehistoric Mesopotamia; Ufuk Esin – a Turkish archaeologist who specialised in prehistoric Anatolia; Pere Bosch-Gimpera – a Spanish-born Mexican archaeologist who studied prehistoric Spain; Lynne Goldstein – an American archaeologist specialising in prehistoric eastern North America; Jakob Heierli – a Swiss archaeologist who studied prehistoric Switzerland; Louise Steel – a British archaeologist who focused on prehistoric Cyprus; and Joyce White – an American archaeologist who specialises in prehistoric Southeast Asia. There are many more individuals whose importance to the continuation of this field is not acknowledged here.
Main types and locations of sites
Some of the main types of sites include early proto-cities and proto city-states, settlements, temples and sanctuaries of worship, and cave sites. To define these terms archaeologically: a proto city-state is a large town or village that existed in the Neolithic era, characterised by its lack of central rule or deliberate organisation of infrastructure. A settlement is an area where individuals lived either permanently or semi-permanently, while temples or sanctuaries were areas of cult practice or worship of the gods or beings associated with a specific people; areas of worship at early prehistoric sites can also include spiritual areas of prominent religious importance without a directly associated deity. Cave sites are sheltered areas, usually in rock formations, where members of a society may have gathered on either a semi-permanent or permanent basis to create art, to dwell, or to prepare food. All prehistoric archaeological sites must contain evidence of humans – even if humans did not actively live on the site but only visited it occasionally – and a lack of historical record within the society. Sites that show the inhabitants' ability to record information about themselves are not considered prehistoric.
Prehistoric settlements are scattered all over the world and vary in age and size. The period that prehistoric archaeology covers most often spans the Stone, Bronze and Iron Ages, and within each of these ages periods such as the Neolithic within the Stone Age are explored in depth. In Britain the prehistoric period ends with the Roman conquest in 43 AD, while in some non-Romanised areas of Europe it does not end until as late as the 5th century AD. In many other places, notably Egypt (at the end of the Third Intermediate Period), it finishes much earlier, and in others, such as Australia, much later. Some of the main sites being studied are Çatalhöyük in Turkey, the Chauvet Cave in southern France, the Bouldnor Cliff Mesolithic Village in the United Kingdom and Franchthi in Greece, among many others. New sites are uncovered regularly, and their importance to understanding prehistoric peoples continues to further our knowledge of the past.
Methods of investigation and analysis
Archaeologists use many different methods to research the artefacts and materials found during an excavation. In early excavations dedicated to treasure hunting, little to no care was taken when removing the soil covering artefacts, or when removing the artefacts and materials themselves from sites, which may have led to the destruction of materials and has placed some recovered artefacts at risk. Often the digs were conducted by amateurs and treasure hunters who did not have today's knowledge of how to remove artefacts safely. Some of the main techniques used to recover artefacts gently, or to ensure the integrity and preservation of a site for future exploration, are: aerial photography, used to survey the site; stratigraphic measurements, which document the layers of soil to aid in dating the site; fieldwalking, which involves walking the ground looking for objects; and the making of site plans to record the locations of objects and remains on a site. Aerial photography can reveal several important markers of archaeological sites, such as shadow marks, crop marks and soil marks. Archaeologists also use a range of techniques when excavating a site to uncover materials, including digging test pits, opening trenches and using the box-grid or quadrant method to keep track of different areas of the site.
Some of the technology used to assist in looking for and uncovering materials at prehistoric archaeological sites includes: electrical resistivity meters, which help locate buried objects and features non-disruptively; laser theodolites, used to help map the site layout; satellite survey, which uses satellites to obtain an aerial view of the site; lidar, which uses laser pulses to map the ground surface in fine detail and can reveal earthworks hidden under vegetation; and sonar, which uses sound waves to locate submerged sites and objects under water. None of these techniques is practised solely within prehistoric archaeology; most archaeological techniques may be used across many of the different subfields of archaeology.
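The role that the stratigraphic measurements described above play in relative dating can be illustrated with a small computational sketch. The following Python snippet is purely illustrative – it is not a tool used by any of the projects or archaeologists mentioned in this article, and the layer names and "lies above" observations are invented – but it shows how, under the law of superposition, recorded layer relationships can be sorted into a relative chronology from oldest to youngest.

    from graphlib import TopologicalSorter

    # Hypothetical field records: each key is a layer (context) and the value
    # is the set of layers observed lying directly above it. Under the law of
    # superposition, a layer is older than every layer deposited above it.
    observed_above = {
        "natural_subsoil": {"occupation_floor"},
        "occupation_floor": {"hearth_ash", "collapse_rubble"},
        "hearth_ash": {"collapse_rubble"},
        "collapse_rubble": {"topsoil"},
        "topsoil": set(),
    }

    # Invert the observations: for each layer, collect the layers beneath it,
    # i.e. the layers that must come earlier in the relative chronology.
    below = {layer: set() for layer in observed_above}
    for lower, uppers in observed_above.items():
        for upper in uppers:
            below[upper].add(lower)

    # A topological sort of the "below" relation gives the relative chronology,
    # oldest deposit first.
    chronology = list(TopologicalSorter(below).static_order())
    print(" -> ".join(chronology))
    # natural_subsoil -> occupation_floor -> hearth_ash -> collapse_rubble -> topsoil

Real stratigraphic analysis, for example the building of a Harris matrix, works on the same principle, though with many more contexts and with additional kinds of relationships between them.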
Issues in the field
There are a vast number of difficulties faced within prehistoric archaeology, including site degradation, which makes understanding a site very difficult because it erases evidence that may have been useful for gaining insight into the civilisation. Degradation of a site may be exacerbated by climate change, as prehistoric sites are often delicate due to their age; for example, material evidence such as textile fabric, which may survive from antiquity in rare contexts because it was buried, may be lost when the top layers of the site are uncovered. However, as the cosmologist Martin Rees and the astronomer Carl Sagan have said, 'absence of evidence is not evidence of absence', which is highly relevant within prehistoric archaeology, where archaeologists must often work with missing pieces and theorise in order to understand a site. Because of the substantial gaps in the prehistoric record, there are many periods of time, inventions and materials – particularly those made of perishable materials such as wood or textile fabrics – that have been lost to time or are incredibly difficult to locate, given the conditions that must exist for them to survive. It is because of these gaps that, in order to understand the physical evidence that has been recovered, prehistoric archaeologists, like their counterparts in the sibling fields of archaeology, must make educated guesses as to what is there and what should be there based on the finds that they have. The variety of theories regarding the purpose of objects or sites, for example, obliges archaeologists to adopt a critical approach to all evidence and to examine their own constructs of the past. Structural functionalism and processualism are two schools of archaeological thought which have made a great contribution to prehistoric archaeology.
Another issue within prehistoric archaeology, which also affects every other branch of archaeology, is the ethics of removing artefacts and of storing finds in museums. This moral question is a delicate balance within prehistoric archaeology, as all finds and human remains must be treated with respect, yet archaeologists also wish to study them in order to further their understanding of the different origins of humanity.
See also
Prehistory
Stone tool
Pit alignments
The Collection of Pre- and Protohistoric Artifacts at the University of Jena
Turkification
Turkification, Turkization, or Turkicization describes a shift whereby populations or places receive or adopt Turkic attributes such as culture, language, history, or ethnicity. However, often this term is more narrowly applied to mean specifically Turkish rather than merely Turkic, meaning that it refers more frequently to the Ottoman Empire's policies or the Turkish nationalist policies of the Republic of Turkey toward ethnic minorities in Turkey. As the Turkic states developed and grew, there were many instances of this cultural shift.
The earliest instance of Turkification took place in Central Asia, when by the 6th century AD migration of Turkic tribes from Inner Asia caused a language shift among the Iranian peoples of the area. By the 8th century AD, the Turkification of Kashgar was completed by Qarluq Turks, who also Islamized the population.
The Turkification of Anatolia occurred in the time of the Seljuk Empire and Sultanate of Rum, when Anatolia had been a diverse and largely Greek-speaking region after previously being Hellenized.
Etymology
Prior to the 20th century, Anatolian, Balkan, Caucasian, and Middle Eastern regions were said to undergo Ottomanization. "Turkification" started being used interchangeably with "Ottomanization" after the rise of Turkish nationalism in the 20th century.
The term has been used in the Greek language since the 1300s or late-Byzantine era as "εκτουρκισμός", or "τούρκεμα". It literally translates to "becoming a Turk". Apart from people, it may refer also to cities that were conquered by Turks or churches that were converted to mosques. It is more frequently used in the verb form "τουρκεύω" (to Turkify, to become Muslim or Turk).
History
Early examples of Turkification
By 750, the Turkification of Kashgar by the Qarluq Turks was underway. The Qarluqs were ancestors of the Karakhanids, who also Islamized the population. The Iranian language of Khwarezm, a Central Asian oasis region, eventually died out as a result of Turkification.
Turkification of Central Asia
The current population of Central Asia is the result of the long and complex process that started at least 1,400 years ago. Today this region consists of mainly Turkic ethnic groups, barring Persian-speaking Tajiks, although centuries ago its native inhabitants were Iranian peoples. Turkification of the native Iranian population of Central Asia began by the 6th century A.D. partly due to migration of Turkic tribes from Inner Asia. The process of Turkification of Central Asia, besides those parts that constitute the territory of present-day Tajikistan and parts of Uzbekistan with a majority Tajik population, accelerated with the Mongol conquest of Central Asia. Mahmud al-Kashgari writes that the people who lived between Bukhara and Samarkand were Turkified Sogdians, whom he refers to as “Sogdak”.
Tajiks are considered to be the only ethnic group to have survived the process of Turkification in Central Asia. Despite their clear Iranian ethnicity, there are arguments that attempt to dispute Tajiks' Iranian identity and instead link them with the descendants of Arabs raised in Iran, or with Turks who lost their language under the influence of Persian civilization.
Turkification of Azerbaijan
Turkification of the non-Turkic population derives from the Turkic settlements in the area now known as Azerbaijan, which began and accelerated during the Seljuq period. The migration of Oghuz Turks from present-day Turkmenistan, which is attested by linguistic similarity, remained high through the Mongol period, since the bulk of the Ilkhanate troops were Turkic. By the Safavid period, the Turkic nature of Azerbaijan increased with the influence of the Qizilbash, an association of the Turkmen nomadic tribes that was the backbone of the Safavid Empire.
According to Soviet scholars, the Turkification of Azerbaijan was largely completed during the Ilkhanate period. Turkish scholar Faruk Sumer notes three distinct periods in which Turkification took place: Seljuq, Mongol and Post-Mongol (Qara Qoyunlu, Aq Qoyunlu and Safavid). In the first two, Oghuz Turkic tribes advanced or were driven to Anatolia and Arran. In the last period, the Turkic elements in Iran (Oghuz, with lesser admixtures of Uyghur, Qipchaq, Qarluq as well as Turkified Mongols) were joined now by Anatolian Turks migrating back to Iran. This marked the final stage of Turkification.
Turkification of Anatolia
Anatolia was home to many different peoples in ancient times who were either natives or settlers and invaders. These different people included the Armenians, Anatolian peoples, Persians, Hurrians, Greeks, Cimmerians, Galatians, Colchians, Iberians, Arabs, Arameans, Assyrians, Corduenes, and scores of others. During the Mycenaean and Classical periods of Greek history, Greeks colonised the Western, Northern and Southern Coasts of Anatolia. Over the course of many centuries a process of Hellenization occurred throughout the interior of Anatolia, aided by the fact that Koine Greek was the lingua franca in political circles and later became the primary liturgical language, and by the similarity of some of the native languages of Anatolia to Greek (cf. Phrygian). By the 5th century the native people of Asia Minor were overwhelmingly Greek in their language and Christian in religion. These Greek-speaking Christian inhabitants of Asia Minor are known as Byzantine Greeks, although at the time they would have considered themselves to be Romans (Rhomaioi), and they formed the bulk of the Byzantine Empire's Greek-speaking population for one thousand years, from the 5th century until the fall of the Byzantine state in the 15th century. In the northeast along the Black Sea these peoples eventually formed their own state known as the Empire of Trebizond, which gave rise to the modern Pontic Greek population. In the east, near the borderlands with the Persian Empire, other native languages remained, specifically Armenian, Assyrian Aramaic, and Kurdish. Byzantine authorities routinely conducted large-scale population transfers in an effort to impose religious uniformity and quell rebellions. After the subordination of the First Bulgarian Empire in 1018, for instance, much of its army was resettled in Eastern Anatolia. The Byzantines were particularly keen to assimilate the large Armenian population. To that end, in the eleventh century, the Armenian nobility were removed from their lands and resettled throughout western Anatolia, with prominent families subsumed into the Byzantine nobility, leading to numerous Byzantine generals and emperors of Armenian extraction. These resettlements spread the Armenian-speaking community deep into Asia Minor, but an unintended consequence was the loss of local military leadership along the eastern Byzantine frontier, opening the path for the inroads of Turkish invaders.
Beginning in the eleventh century, war between the Turks and Byzantines led to the deaths of many in Asia Minor, while others were enslaved and removed. As areas became depopulated, Turkic nomads moved in with their herds. However, despite the suffering of the local Christian populations at the hands of the Turks and in particular the Turkoman tribesmen, they were still an overwhelming majority of the population 50 years after the Battle of Manzikert. The Turks seem to have been aware of their numerical inferiority during this time period as evidenced by the fact many Turkish rulers went to lengths to disarm their Christian subjects. There is also evidence that the Turks resorted to kidnapping Christian children and raising them as Turks, as attested by contemporary chronicler Matthew of Edessa. Intermarriage between Turks and Greek, Armenian and Georgian natives of Anatolia was not unheard of, although the majority of these unions were between Turkish men and Christian women. The children of these unions, known as 'Mixovarvaroi', were raised as Turks and were of the Muslim faith (although there were some cases of Mixovarvaroi defecting to the Byzantines). It is likely that these unions played a role in the eventual diminishment of the Christian population in Anatolia and its transition from Greek/Christian to Turkish/Muslim.
Number of pastoralists of Turkic origin in Anatolia
The number of nomads of Turkic origin that migrated to Anatolia is a matter of discussion. According to Ibn Sa'id al-Maghribi, there were 200,000 Turkmen tents in Denizli and its surrounding areas, 30,000 in Bolu and its surrounding areas, and 100,000 in Kastamonu and its surrounding areas. According to a Latin source, at the end of the 12th century, there were 100,000 nomadic tents in the regions of Denizli and Isparta.
According to Ottoman tax archives, in the provinces of Anatolia, Karaman, Dulkadir and Rûm in modern-day Anatolia, there were about 872,610 households in the 1520s and 1530s; 160,564 of those households were nomadic, and the remainder were sedentary. Of the four provinces, Anatolia (which does not cover the whole of geographic Anatolia but only its western and some of its northwestern parts) had the largest nomadic population, with 77,268 households. Between 1570 and 1580, 220,217 of the overall 1,360,474 households in the four provinces were nomadic, meaning that nomads still made up roughly one in six recorded households in late 16th-century Anatolia. The province of Anatolia again had the largest nomadic population, which grew to 116,219 households in those years.
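As a rough check on these figures, using only the household counts quoted above and saying nothing about the number of people per household, the implied nomadic shares are:

\frac{160{,}564}{872{,}610} \approx 18.4\%\ \text{(1520s–1530s)}, \qquad \frac{220{,}217}{1{,}360{,}474} \approx 16.2\%\ \text{(1570–1580)}.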
Devşirme
Devşirme (literally "collecting" in Turkish), also known as the blood tax, was chiefly an annual levy by which the Ottoman Empire pressed second or third sons of its Christian subjects (the Rum millet) in the villages of the Balkans into military training. They were taught to speak Turkish and converted to Islam, the primary objective being to select and train the ablest children of the Empire for military or civil service, mostly in the ranks of the Janissaries.
Started by Murad I as a means to counteract the growing power of the Turkish nobility, the practice itself violated Islamic law. By 1648 the practice had largely drawn to an end. An attempt to re-institute it in 1703 was resisted by its Ottoman members, who coveted its military and civilian posts, and in the early part of Ahmet III's reign the practice was abolished.
Late Ottoman era
The late Ottoman government sought to create "a core identity with a single Turkish religion, language, history, tradition, culture and set of customs", replacing earlier Ottoman traditions that had not sought to assimilate different religions or ethnic groups. The Ottoman Empire had an ethnically diverse population that included Turks, Arabs, Albanians, Bosniaks, Greeks, Persians, Bulgarians, Serbs, Armenians, Kurds, Zazas, Circassians, Assyrians, Jews and Laz people. Turkish nationalists claimed that only Turks were loyal to the state. Ideological support for Turkification was not widespread in the Ottoman Empire.
One of its main supporters was the sociologist and political activist Ziya Gökalp, who believed that a modern state must become homogeneous in terms of culture, religion, and national identity. This conception of national identity was augmented by his belief in the primacy of Turkishness as a unifying virtue. As part of this belief, it was held necessary to purge from the territories of the state those national groups who could threaten the integrity of a modern Turkish nation state. The 18th article of the Ottoman Constitution of 1876 declared Turkish the sole official language and stipulated that only Turkish-speaking people could be employed in the government.
After the Young Turks assumed power in 1909, the policy of Turkification gained several new layers, and the government sought to impose Turkish in the administration, the courts, and education in areas where the Arabic-speaking population was the majority. Another aim was to loosen ties between the Empire's Turkish and ethnically non-Turkish populations through efforts to purify the Turkish language of Arabic influences. In this nationalist vision of Turkish identity, language was supreme, and religion was relegated to a subordinate role. Arabs responded by asserting the superiority of the Arabic language, describing Turkish as a "mongrel" language that had borrowed heavily from the Persian and Arabic languages. Through the policy of Turkification, the Young Turk government suppressed the Arabic language. Turkish teachers were hired to replace Arabic teachers at schools. The Ottoman postal service was administered in Turkish.
Those who supported Turkification were accused of harming Islam. Rashid Rida was one advocate who championed Arabic against Turkish. Even before the Young Turk Revolution of 1908, the Syrian reformer Tahir al-Jazairi had convinced Midhat Pasha to adopt Arabic as the official language of instruction at state schools; the language of instruction was only changed to Turkish in 1885 under Sultan Abdulhamid. Though writers like Ernest Dawn have noted that the foundations of Second Constitutional Era "Arabism" predate 1908, the prevailing view still holds that Arab nationalism emerged as a response to the Ottoman Empire's Turkification policies. One historian of Arab nationalism wrote that "the Unionists introduced a grave provocation by opposing the Arab language and adopting a policy of Turkification", but not all scholars agree about the contribution of Turkification policies to Arab nationalism.
European critics who accused the CUP of depriving non-Turks of their rights through Turkification saw Turk, Ottoman and Muslim as synonymous, and believed Young Turk "Ottomanism" posed a threat to Ottoman Christians. The British ambassador Gerard Lowther said it was like "pounding non-Turkish elements in a Turkish mortar", while another contemporary European source complained that the CUP plan would reduce "the various races and regions of the empire to one dead level of Turkish uniformity." Rifa'at 'Ali Abou-El-Haj has written that "some Ottoman cultural elements and Islamic elements were abandoned in favor of Turkism, a more potent device based on ethnic identity and dependent on a language based nationalism".
The Young Turk government launched a series of initiatives that included forced assimilation. Uğur Üngör writes that "Muslim Kurds and Sephardi Jews were considered slightly more 'Turkifiable' than others", noting that many of these nationalist era "social engineering" policies perpetuated persecution "with little regard for proclaimed and real loyalties." These policies culminated in the Armenian and Assyrian genocides.
During World War I, the Ottoman government established orphanages throughout the empire which included Armenian, Kurdish, and Turkish children. Armenian orphans were given Arabic and Turkish names. In 1916 a Turkification campaign began in which whole Kurdish tribes were to be resettled in areas where they would not exceed 10% of the local population. Talaat Pasha ordered that Kurds in the eastern areas be relocated to western areas. He also demanded information on whether the Kurds were Turkifying in their new settlements and whether they were getting along with the Turkish population there. Additionally, non-Kurdish immigrants from Greece, Albania, Bosnia and Bulgaria were to be settled in the Diyarbakır province, where the deported Kurds had lived before. By October 1918, with the Ottoman army retreating from Lebanon, a Father Sarlout sent the Turkish and Kurdish orphans to Damascus while keeping the Armenian orphans in Antoura, and he began reversing the Turkification process by having the Armenian orphans recall their original names. It is believed by various scholars that at least two million Turks have at least one Armenian grandparent.
Around 1.5 million Ottoman Greeks remained in the Ottoman Empire after losses of 550,000 during World War I. Almost all of them – about 1,250,000, excepting those in Constantinople – had fled beforehand or were forced to go to Greece in 1923 in the population exchange mandated by the League of Nations after the Greco-Turkish War (1919–1922). The linguistic Turkification of Greek-speakers in 19th-century Anatolia is well documented; according to Speros Vryonis, the Karamanlides are the result of partial Turkification that occurred earlier, during the Ottoman period. Fewer than 300,000 Armenians remained of the 1.2 million before the war, and fewer than 100,000 of 400,000 Assyrians.
Modern Turkey
After the Great Thessaloniki Fire of 1917, which displaced many of the Salonikan Jews, and the burning of Smyrna, the rebuilding of these places by the post-Ottoman Turkish and Greek nation-states devastated and erased the past of non-Turkish (and non-Hellenic) habitation. According to historian Talin Suciyan, for non-Muslims in the Republic of Turkey, Turkification resulted in "de-identification, in which a person loses all references to his or her own grandparents, socialisation, culture and history, but cannot fully become part of the society, culture, and politics of the imposed system". There continues to be state-organized discrimination, such as the keeping of files on citizens of non-Muslim descent.
Ottoman Turkish classical music was banned from the school curriculum. Ottoman archival documents were sold to Bulgaria as recycled paper. Sunday was made the official rest day instead of Friday (the traditional rest day in the Muslim religion).
Political elites in the early Republic were divided between modernists, whose agenda promoted radical transformation and the erasure of all vestiges of the Ottoman past, and moderate nationalists, who preferred a softer transition that retained some elements of Ottoman heritage.
Ethnonational identity
When the modern Republic of Turkey was founded in 1923, nationalism and secularism were two of its founding principles. Mustafa Kemal Atatürk, the leader of the early years of the Republic, aimed to create a nation-state from the Turkish remnants of the Ottoman Empire. In 2008 the Turkish Ministry of National Education defined the "Turkish People" as "those who protect and promote the moral, spiritual, cultural and humanistic values of the Turkish Nation." One of the goals of the establishment of the new Turkish state was to ensure "the domination of Turkish ethnic identity in every aspect of social life from the language that people speak in the streets to the language to be taught at schools, from the education to the industrial life, from the trade to the cadres of state officials, from the civil law to the settlement of citizens to particular regions." In 2008, the then Defense Minister of Turkey, Vecdi Gönül, remarked in defence of Mustafa Kemal Atatürk's actions regarding the Turkification of Anatolia: "Could Turkey be the same national country had the Greek community still lived in the Aegean or Armenians lived in many parts of Turkey?"
The process of unification through Turkification continued within modern Turkey with such policies as:
According to Art. 12 of the Turkish Constitution of 1924, citizens who could not speak and read Turkish were not allowed to become members of parliament.
A law from December 1925 demanded that clothes worn by employees of all companies be of Turkish production.
A Report for Reform in the East was released in September 1925, according to which non-Turkish languages were to be forbidden.
On 18 March 1926 a Civil Servants Law came into effect, which allowed only Turks to become civil servants and explicitly excluded Armenians and Greeks from such positions.
On 28 May 1927 it was decided that business correspondence must be conducted in Turkish, and that foreign assurance companies must employ Turks, except for the director and the deputy director.
Law 1164 of September 1927 enabled the creation of regional administrative areas called Inspectorates-General, where extensive policies of Turkification were applied. The Inspectorates-General existed until 1952.
Citizen, speak Turkish! (Turkish: Vatandaş Türkçe konuş!) – An initiative created by law students but sponsored by the Turkish government which aimed to put pressure on non-Turkish speakers to speak Turkish in public in the 1930s. In some municipalities, fines were given to those speaking in any language other than Turkish.
Law 2007 of 11 June 1932 reserved a wide range of professions – such as lawyer, construction worker, artisan, hairdresser and messenger – for Turkish citizens, and also forbade foreigners from opening shops in rural areas. The Greeks were most affected by the law.
1934 Resettlement Law (also known as Law no. 2510) – A policy adopted by the Turkish government which set forth the basic principles of immigration. The law was issued to impose a policy of forceful assimilation of non-Turkish minorities through a forced and collective resettlement.
Surname Law – The surname law forbade certain surnames that contained connotations of foreign cultures, nations, tribes, and religions. As a result, many ethnic Armenians, Greeks, and Kurds were forced to adopt last names of Turkish rendition. Names ending with "yan, of, ef, viç, is, dis, poulos, aki, zade, shvili, madumu, veled, bin" (names that denote Armenian, Slavic, Greek, Albanian, Arabic, Georgian, Kurdish, and other origins) could not be registered, and they had to be replaced by "-oğlu."
From 1932 on, the Diyanet required that the Adhan and the Salah be delivered in Turkish. Imams who delivered the Adhan in Arabic were prosecuted under Article 526 of the Turkish Criminal Code for "being opposed to the command of officials maintaining public order and safety". In 1941 a new paragraph was added to Article 526, and from then on imams who refused to deliver the Adhan in Turkish could be imprisoned for up to three months or fined between 10 and 300 Turkish lira. After the Democrat Party won the elections in 1950, it was decided on 17 June 1950 that the Adhan could be delivered in Arabic again.
The conscription of the 20 Classes working battalions in the years 1941–1942. Only non-Muslims, mainly Jews, Greeks and Armenians were conscripted to work under difficult conditions.
Varlık Vergisi ("Wealth tax" or "Capital tax") – A Turkish tax levied on the wealthy citizens of Turkey in 1942, with the stated aim of raising funds for the country's defense in case of an eventual entry into World War II. Those who suffered most severely were non-Muslims such as the Jews, Greeks, Armenians, and Levantines, who controlled a large portion of the economy, with the Armenians taxed most heavily. According to Klaus Kreiser, for President İnönü the aim of the tax was to remove the foreigners who controlled the Turkish economy and to transfer the economy to the Turks.
Article 16 of the Population Law of 1972 prohibited giving newborns names that were contrary to the national culture.
Animal name changes in Turkey – An initiative by the Turkish government to remove any reference to Armenia and Kurdistan in the Latin names of animals.
Confiscated Armenian properties in Turkey – An initiative by the Ottoman and Turkish governments which involved seizure of the assets, properties and land of the Armenian community of Turkey. The policy is considered a nationalization and Turkification of the country's economy by eliminating ownership of non-Turkish minorities which in this case would be of the Armenian community.
Geographical name changes in Turkey – An initiative by the Turkish government to replace non-Turkish geographical and topographic names within the Turkish Republic or the Ottoman Empire, with Turkish names, as part of a policy of Turkification. The main proponent of the initiative has been a Turkish homogenization social-engineering campaign which aimed to assimilate or obliterate geographical or topographical names that were deemed foreign and divisive against Turkish unity. The names that were considered foreign were usually of Armenian, Greek, Laz, Slavic, Kurdish, Assyrian, or Arabic origin. For example, words such as Armenia were banned in 1880 from use in the press, schoolbooks, and governmental establishments and was subsequently replaced with words like Anatolia or Kurdistan. Assyrians have increased their protest regarding the forced Turkification of historically Aramaic-named cities and localities and they see this process as continuing the cultural genocide of their identity and history (as part of the wider erasure of Assyrian, Kurdish and Armenian cultures).
Article 301 (Turkish Penal Code) – An article of the Turkish Penal Code which makes it illegal to insult Turkey, the Turkish nation, or Turkish government institutions. It took effect on 1 June 2005, and was introduced as part of a package of penal-law reform in the process preceding the opening of negotiations for Turkish membership of the European Union (EU), in order to bring Turkey up to the Union standards.
Turkification was also prevalent in the educational system of Turkey. Measures were adopted making Turkish classes mandatory in minority schools and making use of the Turkish language mandatory in economic institutions.
Imprecise meaning of Türk
The Ottoman elite identified themselves as Ottomans, not as Turks, due to the term being associated mainly with Turkmens. Ottomans, like Central Asian Turkic peoples, firstly identified themselves via tribal descent and secondly viewed the various peoples under their dynastic rule (devlet) as part of a unique civilization, while viewing other Turkic peoples as more alien; seeing as they claimed Kayi ancestry through the House of Osman, the modern notion of "Turk" as a uniquely inter-ethnic label would not be communicable.
In the late 19th century, while "Turk" was still a pejorative for poor Yörük-Turkoman farmers and pastoralists of ignoble origins, European ideas of nationalism were adopted by the Ottoman elite, and when it became clear that the local Turkish-speakers of Anatolia were the most loyal supporters of Ottoman rule, the term Türk took on a much more positive connotation.
The imprecision of the appellation Türk can also be seen with other ethnic names, such as Kürt, which is often applied by western Anatolians to anyone east of Adana, even those who speak only Turkish.
Thus, the category Türk, like other ethnic categories popularly used in Turkey, does not have a uniform usage. In recent years, centrist Turkish politicians have attempted to redefine this category in a more multicultural way, emphasizing that a Türk is anyone who is a citizen of the Republic of Turkey. After 1982, article 66 of the Turkish Constitution defines a "Turk" as anyone who is "bound to the Turkish state through the bond of citizenship".
Genetic testing
The population of Asia Minor (Anatolia) and the Balkans, including Greece, was estimated at 10.7 million in 600 AD, whereas Asia Minor alone was probably around 8 million during the early part of the Middle Ages (950 to 1348 AD). The estimated population for Asia Minor around 1204 AD was 6 million, including 3 million in Seljuk territory. Turkish genomic variation, along with that of several other Western Asian populations, looks most similar to the genomic variation of southern European populations such as southern Italians.
Data from ancient DNA – covering the Paleolithic, the Neolithic, and the Bronze Age periods – showed that Western Asian genomes, including Turkish ones, have been greatly influenced by early agricultural populations in the area; later population movements, such as those of Turkic speakers, also contributed. The first and only (as of 2017) whole genome sequencing study in Turkey was done in 2014. Moreover, the genetic variation of various populations in Central Asia "has been poorly characterized"; Western Asian populations may also be "closely related to populations in the east".
An earlier 2011 review had suggested that "small-scale, irregular punctuated migration events" caused changes in language and culture "among Anatolia's diverse autochthonous inhabitants," which explains Anatolian populations' profile today.
See also
1925 Report for Reform in the East
Cultural assimilation
Demographics of Turkey
Genetic history of Europe
Genetic origins of the Turkish people
Hellenisation
History of Anatolia
Kurdification
Pan-Turkism
Russification
Sun Language Theory
Turkification and Islamification of Xinjiang
Turkish History Thesis
Turkish settlers in Northern Cyprus
Uzbekisation
Turco-Mongol tradition
Turco-Persian tradition
Primitivism
In the arts of the Western World, Primitivism is a mode of aesthetic idealization that means to recreate the experience of the primitive time, place, and person, either by emulation or by re-creation. In Western philosophy, Primitivism proposes that the people of a primitive society possess a morality and an ethics that are superior to the urban value system of civilized people.
In European art, the aesthetics of primitivism included techniques, motifs, and styles copied from the arts of Asian, African, and Australasian peoples perceived as primitive in relation to the urban civilization of western Europe. In that light, the painter Paul Gauguin's inclusion of Tahitian imagery in his oil paintings was a characteristic borrowing of technique, motif, and style that was important for the development of Modern art (1860s–1970s) in the late 19th century. As a genre of Western art, Primitivism reproduced and perpetuated racist stereotypes, such as the "noble savage", with which colonialists justified white colonial rule over the non-white Other in Asia, Africa, and Australasia.
Moreover, the term primitivism also identifies the techniques, motifs, and styles of painting that predominated representational painting before the emergence of the Avant-garde; and also identifies the styles of naïve art and of folk art produced by amateur artists, such as Henri Rousseau, who painted for personal pleasure.
Philosophy
Primitivism is a utopian style of art that means to represent the physical world of Nature and humanity's original state of nature with two styles: (i) chronological primitivism and (ii) cultural primitivism. In Europe, chronological primitivism proposes the moral superiority of a primitive way of life represented by the myth of a golden age of pre-societal harmony with Nature, as depicted in the Pastoral genres of European representational art and poetry.
Notable examples of European cultural primitivism are the music of Igor Stravinsky, the Tahitian paintings of Paul Gauguin, and the African period artworks of Pablo Picasso. Stravinsky's The Rite of Spring (1913) is primitivist program music about the subject of Paganism, specifically the rite of human sacrifice in pre-christian Russia. Foregoing the aesthetic and technical restraints of Western musical composition, in The Rite of Spring the composer employs harsh consonance and dissonance and loud, repetitive rhythms as a mode of Dionysian spontaneity in musical modernism. The critic Malcolm Cook said that "with its folk-music motifs and the infamous 1913 Paris riot securing its avant-garde credentials, Stravinsky's The Rite of Spring engaged in Primitivism in both form and practice" while remaining within the technical praxes of Western classical music.
17th century
During the Age of Enlightenment, intellectuals rhetorically used the idealization of indigenous peoples as political criticism of European culture; however, as part of the Quarrel of the Ancients and the Moderns, the Italian intellectual Giambattista Vico said that the lives of primitive non-Europeans were more attuned to Nature's aesthetic inspirations for poetry than the arts of civilized, modern man. From that perspective, Vico compared the artistic merits of the epic poetry of Homer and of the Bible against the modern literature written in vernacular language.
18th century
In the Prolegomena to Homer (1795), the scholar Friedrich August Wolf identified the language of Homer's poetry and the language of The Bible as examples of folk art communicated and transmitted by oral tradition. Later, the ideas of Vico and Wolf were developed at the beginning of the 19th century by Johann Gottfried Herder; nevertheless, although influential in literature, the ideas of Vico and Wolf slightly influenced the visual arts.
19th century
The emergence of historicism – judging and evaluating different eras according to their own historical context and criteria – resulted in new schools of visual art dedicated to historical fidelity of setting and costume, such as Neoclassicism and the Romantic art of the Nazarene movement in Germany, which was inspired by the primitive school of Italian devotional paintings, i.e. those before Raphael and the discovery of oil painting.
Whereas academic painting (after Raphael) used dark glazes, idealized forms, and suppression of detail, the artists of the Nazarene movement used clear outlines, bright colors, and much detail. The artistic styles of the Nazarene movement were similar to the artistic styles of the Pre-Raphaelites, who were inspired by the critical writings of John Ruskin, who admired the painters before Raphael (e.g. Sandro Botticelli) and recommended that artists paint outdoors.
In the mid-19th century, the photographic camera and non-Euclidean geometry changed the visual arts; photography impelled the development of artistic Realism and non-Euclidean geometry voided the mathematic absolutes of Euclidean geometry, and so challenged the conventional perspective of Renaissance art by suggesting the existence of multiple worlds in which things are different from the human world.
Modernist Primitivism
The three-hundred-year Age of Discovery (15th c.–17th c.) exposed western European explorers to the peoples and cultures of Asia and the Americas, and of Africa and Australasia, but the explorers' perspective of cultural difference led to colonialism. During the Age of Enlightenment, the explorers' encounters with the non-European Other provoked philosophers to question the Mediaeval assumptions about the fixed nature of Man, of society, and of Nature, to doubt the social-class organization of society and the mental, moral, and intellectual strictures of Christianity, and to compare the civilization of Europe against the way of life of the uncivilized natural man living in harmony with Nature.
In the 18th century, Western artists and intellectuals participated in "the conscious search in history for a more deeply expressive, permanent human nature and cultural structure in contrast to the nascent modern realities", by studying the cultures of the primitive peoples encountered by explorers. The spoils of European colonialism included the works of art of the colonized natives, which featured primitive styles of expression and execution, especially the absence of linear perspective, a simple outline, the presence of hieroglyphs, distortions of the figure, and the meaning communicated with repeated patterns of ornamentation. The African and Australasian cultures provided artists an answer to their "white, Western, and preponderantly male quest" for the ideal of the primitive, "whose very condition of desirability resides in some form of distance and difference."
Paul Gauguin
The painter Paul Gauguin departed urban Europe to reside in the French colony of Tahiti, where he adopted a primitive style of life much unlike the way of life in urban France. Gauguin's search for the primitive was a search for sexual freedom from the Christian constrictions of private life, evident in the paintings Spirit of the Dead Watching (1892), Parau na te Varua ino (1892), and Anna the Javanerin (1893), Te Tamari No Atua (1896) and Cruel Tales (1902).
Gauguin's European perspective of Tahiti as a sexual utopia free of the religious sexual prohibitions is in line with the perspective of pastoral art, which idealizes rural life as better than city life. The similarities between Pastoralism and Primitivism are evident in the paintings Tahitian Pastoral (1892) and Where Do We Come From? What Are We? Where Are We Going? (1897–1898).
The artist Gauguin said that his paintings celebrated Tahitian society, and that he was defending Tahiti against French colonialism; nonetheless, from the postcolonial perspective of the 20th century, feminist art critics said that Gauguin's taking adolescent mistresses voids his claim of being an anti-colonialist. As a European man, his sexual freedom derived from the male gaze of the colonist, because Gauguin's artistic primitivism is part of the "dense interweave of racial and sexual fantasies and power, both colonial and patriarchal", which French colonialists invented about Tahiti and the Tahitians; European fantasies invented in "effort to essentialize notions of primitiveness", by Othering non-European peoples into colonial subordinates.
Fauves and Pablo Picasso
In 1905–1906 period, a group of artists studied the arts from Sub-Saharan Africa and from Oceania, because of the popularity of the Gauguin paintings of Tahiti and the Tahitians. Two posthumous, retrospective exhibitions of Gauguin's works of art in Paris, one at the Salon d'Automne in 1903, and the other in 1906, influenced fauve movement artists such as Maurice de Vlaminck, André Derain and Henri Matisse, but also Pablo Picasso. In particular, Picasso studied Iberian sculpture, African sculpture, and African traditional masks, and historical works such as the Mannerist paintings of El Greco, from which aesthetic study Picasso painted Les Demoiselles d'Avignon (1907), and invented Cubism.
Anti-colonial primitivism
Primitivism in art is usually regarded as a cultural phenomenon of Western art, yet the structure of primitivist idealism is in the art works of non-Western and anti-colonial artists. The nostalgia for an idealized past when humans lived in harmony with Nature is related to critiques of the negative cultural impact of Western modernity upon colonized peoples. The primitivist works of anti-colonial artists are critiques of the Western stereotypes about colonized peoples, while also yearning for the pre-colonial way of life. The processes of decolonization fuse with the reverse teleology of Primitivism to produce native works of art distinct from the primitivist artworks by Western artists, which reinforce colonial stereotypes as true.
As a type of artistic primitivism, the artworks of the Négritude movement tend toward nostalgia for a lost golden age. Begun in the 1930s by francophone artists and intellectuals on both sides of the Atlantic Ocean, the Négritude movement was readily adopted throughout continental Africa and by the African diaspora. In rejection of Western rationalism and European colonialism, the Négritude artists idealized pre-colonial Africa with works of art that represent it as composed of societies that were more culturally united before the Europeans arrived in Africa.
Notable among the artists of the Négritude movement is the Cuban artist Wifredo Lam who was associated with Picasso and the surrealists in Paris, in the 1930s. On returning to Cuba in 1941, Lam was emboldened to create dynamic tableaux that integrated human beings, animals, and Nature. In The Jungle (1943), Lam's polymorphism creates a fantastical jungle scene featuring African motifs among the stalks of sugar cane to represent the connection between the neo-African idealism of Négritude and the history of plantation slavery for the production of table sugar.
Neo-primitivism
Neo-primitivism was a Russian art movement that took its name from the 31-page pamphlet Neo-primitivizm by Aleksandr Shevchenko. It is considered a type of avant-garde movement that proposed a new style of modern painting fusing elements of Cézanne, Cubism, and Futurism with traditional Russian 'folk art' conventions and motifs, notably the Russian icon and the lubok.
Neo-primitivism replaced the symbolist art of the Blue Rose movement. The nascent movement was embraced because its predecessor's backward-looking tendency meant that it had passed its creative zenith. One conceptualization describes neo-primitivism as an anti-primitivist Primitivism, since it questions the primitivist's Eurocentric universalism; this view presents neo-primitivism as a contemporary version that repudiates previous primitivist discourses. Some characteristics of neo-primitivist art include the use of bold colors, original designs, and expressiveness. These are demonstrated in the works of Paul Gauguin, which feature vivid hues and flat forms instead of a three-dimensional perspective. Igor Stravinsky was another neo-primitivist, known for his children's pieces based on Russian folklore. Several neo-primitivist artists had previously been members of the Blue Rose group.
Neo-primitive artists
Russian artists associated with Neo-primitivism include:
David Burlyuk
Marc Chagall
Pavel Filonov
Natalia Goncharova
Mikhail Larionov
Kasimir Malevich
Aleksandr Shevchenko
Igor Stravinsky
Museum exhibitions on primitivism in modern art
In November 1910, Roger Fry organized the exhibition titled Manet and the Post-Impressionists, held at the Grafton Galleries in London. This exhibition showcased works by Paul Cézanne, Paul Gauguin, Henri Matisse, Édouard Manet, Pablo Picasso, and Vincent van Gogh, among others. The exhibition was meant to show how French art had developed over the previous three decades; however, art critics in London were shocked by what they saw, and some called Fry "mad" and "crazy" for publicly displaying such artwork. Fry's exhibition called attention to primitivism in modern art even if he did not intend it to, leading the American scholar Marianna Torgovnick to term the exhibition the "debut" of primitivism on the London art scene.
In 1984, the Museum of Modern Art in New York mounted a new exhibition focusing on primitivism in modern art. Instead of pointing out the obvious issues, the exhibition celebrated the use of non-Western objects as inspiration for modern artists. The director of the exhibition, William Rubin, took Roger Fry's exhibition one step further by displaying the modern works of art juxtaposed with the non-Western objects themselves. Rubin stated that he was not so much interested in the pieces of 'tribal' art in themselves, but instead wanted to focus on the ways in which modern artists 'discovered' this art; he was trying to show there was an 'affinity' between the two types of art. The scholar Jean-Hubert Martin argued that this attitude effectively meant that the 'tribal' art objects were "given the status of not much more than footnotes or addenda to the Modernist avant-garde." Rubin's exhibition was divided into four parts: Concepts, History, Affinities, and Contemporary Explorations. Each section was meant to serve a different purpose in showing the connections between modern art and non-Western 'art'.
In 2017, the Musée du Quai Branly – Jacques Chirac in collaboration with the Musée National Picasso – Paris, put on the exhibition Picasso Primitif. Yves Le Fur, the director, stated he wanted this exhibition to invite a dialogue between "the works of Picasso – not only the major works but also the experiments with aesthetic concepts – with those, no less rich, by non-Western artists." Picasso Primitif meant to offer a comparative view of the artist's works with those of non-Western artists. The resulting confrontation was supposed to reveal the similar issues those artists have had to address such as nudity, sexuality, impulses and loss through parallel plastic solutions.
In 2018, the Montreal Museum of Fine Arts had an exhibition titled From Africa to the Americas: Face-to-Face Picasso, Past and Present. The MMFA adapted and expanded on Picasso Primitif by bringing in 300 works and documents from the Musée du Quai Branly – Jacques Chirac and the Musée National Picasso – Paris. Nathalie Bondil saw the issues with the ways in which Yves Le Fur presented Picasso's work juxtaposed to non-Western art and objects and found a way to respond to it. The headline of this exhibition was, "A major exhibition offering a new perspective and inspiring a rereading of art history." The exhibition looked at the transformation in our view of the arts of Africa, Oceania, and the Americas from the end of the 19th century to the present day. Bondil wanted to explore the question about how ethnographic objects come to be viewed as art. She also asked, "How can a Picasso and an anonymous mask be exhibited in the same plane?"
References
Antliff, Mark and Patricia Leighten, "Primitive" in Critical Terms for Art History, R. Nelson and R. Shiff (Eds.). Chicago: University of Chicago Press, 1996 (rev. ed. 2003).
Blunt, Anthony & Pool, Phoebe. Picasso, the Formative Years: A Study of His Sources. Graphic Society, 1962.
Connelly, S. Frances. The Sleep of Reason: Primitivism in Modern European Art and Aesthetics, 1725–1907. University Park: Pennsylvania State University Press, 1999.
Cooper, Douglas The Cubist Epoch, Phaidon in association with the Los Angeles County Museum of Art & the Metropolitan Museum of Art, London, 1970,
Diamond, Stanley. In Search of the Primitive: A Critique of Civilization. New Brunswick: Transaction Publishers, 1974.
Etherington, Ben. Literary Primitivism. Stanford: Stanford University Press, 2018.
Flam, Jack and Miriam Deutch, eds. Primitivism and Twentieth-Century Art Documentary History. University of California Press, 2003.
Goldwater, Robert. Primitivism in Modern Art. Belnap Press. 2002.
Lovejoy, A. O. and George Boas. Primitivism and Related Ideas in Antiquity. Baltimore: Johns Hopkins Press, 1935 (With supplementary essays by W. F. Albright and P. E. Dumont, Baltimore and London, Johns Hopkins U. Press. 1997).
Redfield, Robert. "Art and Icon" in Anthropology and Art, C. Otten (Ed.). New York: Natural History Press, 1971.
Rhodes, Colin. Primitivism and Modern Art. London: Thames and Hudson, 1994.
Solomon-Godeau, Abigail. "Going Native: Paul Gauguin and the Invention of Primitivist Modernism" in The Expanded Discourse: Feminism and Art History, N. Broude and M. Garrard (Eds.). New York: Harper Collins, 1986.
External links
John Zerzan, Telos 124, Why Primitivism?. New York: Telos Press Ltd., Summer 2002. (Telos Press).
Articles on Primitivism
"Primitivism meaning and methods""Primitivism, or anarcho-primitivism, is an anarchist critique of the origins and progress of civilization. Primitivists argue that the shift from hunter-gatherer to agricultural subsistence gave rise to social stratification, coercion, and alienation. "
Research Group in Primitive Art and Primitivism (CIAP-UPF)
Ben Etherington, "The New Primitives", Los Angeles Review of Books, May 24, 2018.
Further reading on Neo-primitivism
Cowell, Henry. 1933. "Towards Neo-Primitivism". Modern Music 10, no. 3 (March–April): 149–53. Reprinted in Essential Cowell: Selected writings on Music by Henry Cowell, 1921–1964, edited by Richard Carter (Dick) Higgins and Bruce McPherson, with a preface by Kyle Gann, 299–303. Kingston, NY: Documentext, 2002. .
Doherty, Allison. 1983. "Neo-Primitivism". MFA diss. Syracuse: Syracuse University.
Floirat, Anetta. 2015a. "Chagall and Stravinsky: Parallels Between a Painter and a Musician Convergence of Interests", Academia.edu (April).
Floirat, Anetta. 2015b. "Chagall and Stravinsky, Different Arts and Similar Solutions to Twentieth-Century Challenges". Academia.edu (April).
Floirat, Anetta. 2016. "The Scythian Element of the Russian Primitivism, in Music and Visual arts. Based on the Work of Three Painters (Goncharova, Malevich and Roerich) and Two Composers (Stravinsky and Prokofiev)". Academia.edu.
Garafola, Lynn. 1989. "The Making of Ballet Modernism". Dance Research Journal 20, no. 2 (Winter: Russian Issue): 23–32.
Hicken, Adrian. 1995. "The Quest for Authenticity: Folkloric Iconography and Jewish Revivalism in Early Orphic Art of Marc Chagall (c. 1909–1914)". In Fourth International Symposium Folklore–Music–Work of Art, edited by Sonja Marinković and Mirjana Veselinović-Hofman, 47–66. Belgrade: Fakultet Muzičke Umetnosti.
Nemirovskaâ, Izol'da Abramovna [Немировская, Изольда Абрамовна]. 2011. "Музыка для детей И.Стравинского в контексте художественной культуры рубежа XIX-ХХ веков" [Stravinsky's Music for Children and Art Culture at the Turn of the Twentieth Century]. In Вопросы музыкознания: Теория, история, методика. IV [Problems in Musicology: Theory, History, Methodology. IV], edited by Ûrij Nikolaevic Byckov [Юрий Николаевич Бычков] and Izol'da Abramovna Nemirovskaâ [Изольда Абрамовна Немировская], 37–51. Moscow: Gosudarstvennyj Institut Muzyki im. A.G. Snitke. .
Sharp, Jane Ashton. 1992. "Primitivism, 'Neoprimitivism', and the Art of Natal'ia Gonchrova, 1907–1914". Ph.D. diss. New Haven: Yale University.
Art movements
Anthropology
Modern art
Folk art
Criticism of rationalism
| 0.762689 | 0.995163 | 0.758999 |
Matrix of domination
The matrix of domination or matrix of oppression is a sociological paradigm that explains how oppressions based on race, class, and gender, though recognized as distinct social classifications, are interconnected. Other forms of classification, such as sexual orientation, religion, or age, apply to this theory as well. Patricia Hill Collins is credited with introducing the theory in her work Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment.
As the term implies, there are many different ways one might experience domination, facing many different challenges in which one obstacle, such as race, may overlap with others. Characteristics such as race, age, and sex may affect an individual in very different ways depending on, for example, geography, socioeconomic status, or historical period. Other scholars, such as Kimberlé Crenshaw in "Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color", are credited with expanding Collins' work. The matrix of domination is a way for people to acknowledge their privileges in society: how one is able to interact, what social groups one is in, and the networks one establishes are all based on different interconnected classifications.
Theory applied
Though Collins' main application of the matrix of domination was to African-American women, many other examples can be used to illustrate the theory, including Log Cabin Republicans, female criminality, and African-American Muslim women. One of the key problems the matrix of domination addresses is that categories such as race and gender are often treated as separate groups rather than as a combination. This problem can be seen in the law as well, where courts often fail to view discrimination through the overarching lens of intersectionality.
One way the matrix of domination operates with regard to privilege is when two people share the same classifications except for educational attainment. Their gender, race, sexuality, and education all intersect to identify who they are; however, compared with the other person, one classification can confer privilege and, in turn, open more opportunities for one individual over the other.
One of the main aspects of the matrix of domination is that a person may be privileged in one area yet oppressed in another aspect of their identity. Some people believe that racial discrimination is on its way to being eradicated from the United States when they look at people like Colin Powell, a very successful, African-American, middle-aged man. Although Powell possesses the characteristics of a person who may not face oppression (upper-class, middle-aged, male), he is still discriminated against because of his race. This illustrates one of the key components of the matrix of domination: the idea that one cannot look at the individual facets of someone's identity in isolation, because they are all interconnected.
Matrix of domination compared to intersectionality
Historical background on the matrix of domination
In Collins' Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment, she first describes the concept of matrix thinking within the context of how black women in America encounter institutional discrimination based upon their race and gender. A prominent example of this in the 1990s was racial segregation, especially as it related to housing, education, and employment. At the time, there was very little encouraged interaction between whites and blacks in these common sectors of society. Collins argues that this demonstrates how being black and female in America continues to perpetuate certain common experiences for African-American women. As such, African-American women live in a different world than those who are not black and female. Collins notes how this shared social struggle can actually result in the formation of a group-based collective effort, citing how the high concentration of African-American women in the domestic labor sector in combination with racial segregation in housing and schooling contributed directly to the organization of the black feminist movement. The collective wisdom shared by black women that held these specific experiences constituted a distinct viewpoint for African-American women concerning correlations between their race and gender and the resulting economic consequences.
Moolman points out that the main issue in matrix thinking is how one accounts for the power dynamics between the various identity categories ingrained in both oppression and domination, rather than taking the traditional approach of reducing experiences to a single identity. For instance, black women's experiences with society illustrate how even white scholars who have attempted to use intersectionality in their research may still default to single-identity thinking that fails to address all aspects of black women's experiences, thus ignoring the organization that the matrix offers.
The matrix of domination in the colonial era and in white society has also been carefully examined. The societal hierarchy determined by race and implemented under apartheid located different racial populations according to their privilege, with African Americans usually at the bottom of the ladder. Dhamoon argues that on a global scale, the position occupied by African Americans in this context is interchangeable with that of indigenous populations, as marginalized peoples systematically work both within and across a matrix of interrelated axes of "penalty and privilege". The interconnectivity of different identities with regard to power in the racial structures of post-colonial societies helps illustrate which changes make a difference. The framework of the matrix of domination connects its originating theory to existing struggles in the political and social spheres of society. A closer look at both specific and broader aspects of matrix thought sheds more light on the inner workings and mechanisms that determine how different relationship dynamics influence matrix categorizations.
May notes that an important implication of matrix thinking is that it runs directly against what is often described as the socially inclusive 'add and stir' approach, which is often used when one or more identity groups are simply added to existing epistemological approaches, political strategies, or research methodologies. Matrix thinking instead accounts for the proper weighing of power dynamics and their impact on different groups of people. Intersectionality centers power in a multi-pronged way, as shifting across different sites and scales at the same time. It is therefore not neutral but evolved out of histories of struggle that pursue multidimensional forms of justice.
Historical background on intersectionality
Kimberlé Crenshaw, who coined the term intersectionality, brought national and scholarly attention to it through the paper "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics" in The University of Chicago Legal Forum. In the paper, she uses intersectionality to reveal how feminist movements and antiracist movements exclude women of color. Focusing on the experiences of Black women, she dissects several court cases, influential pieces of literature, personal experiences, and doctrinal manifestations as evidence for the way Black women are oppressed across many different experiences, systems, and groups.
Though the specifics differ, the basic argument is the same: Black women are oppressed in a multitude of situations because people are unable to see how their identities intersect and influence each other. Feminism has been crafted for white middle-class women, only considering problems that affect this group of people. Unfortunately, this only captures a small facet of the oppression women face. By catering to the most privileged women and addressing only the problems they face, feminism alienates women of color and lower-class women by refusing to accept the way other forms of oppression feed into the sexism they face. Not only does feminism completely disregard the experiences of women of color, it also solidifies the connection between womanhood and whiteness when feminists speak for "all women". (Crenshaw:154) Oppression cannot be detangled or separated easily in the same way identities cannot be separated easily. It is impossible to address the problem of sexism without addressing racism, as many women experience both racism and sexism. This theory can also be applied to the antiracist movement, which rarely addresses the problem of sexism, even though it is thoroughly intertwined with the problem of racism. Feminism remains white, and antiracism remains male. In essence, any theory that tries to measure the extent and manner of oppression Black women face will be wholly incorrect without using intersectionality.
Patricia Hill Collins wrote a book entitled Black Feminist Thought: Knowledge, Consciousness and the Politics of Empowerment, which articulated "Black Feminist Thought" in relation to intersectionality, with a focus on the plight of Black women in the face of the world, the white feminist movement, and the male antiracism movement. Collins references Crenshaw's concept of intersectionality and relates it to the matrix of domination: "The term matrix of domination describes this overall social organization within which intersecting oppressions originate, develop, and are contained."
Intersectionality and the matrix of domination
Both intersectionality and the matrix of domination help sociologists understand power relationships and systems of oppression in society. The matrix of domination looks at the overall organization of power in society while intersectionality is used to understand a specific social location of an identity using mutually constructing features of oppression.
The concept of intersectionality is today used to move away from one-dimensional thinking in the matrix of domination approach by allowing for the different power dynamics of different identity categories at the same time. Researchers in public health are using the Intersectionality-Based Policy Analysis (IBPA) framework to show how social categories intersect to identify health disparities that evolve from factors beyond an individual's personal health. Ferlatte applied an IBPA framework and used structured interviews to identify barriers to the allocation of HIV prevention funding for gay men, highlighting policy more likely to cause harm than to reduce the epidemic because policy makers missed the 'intersections of oppression, sex panic, and medicalization'.
Intersectionality can also be used to correct for the over-attribution of traits to groups and be used to emphasize unique experiences within a group. As a result, the field of social work is introducing intersectional approaches in their research and client interactions. At the University of Arkansas, the curriculum for a Master of Social Work (MSW) is being amended to include the Multi-Systems Life Course (MSLC) approach. Christy and Valandra apply an MSLC approach to intimate partner violence and economic abuse against poor women of color to explain that symbols of safety (such as police) in one population can be symbols of oppression in another. By teaching this approach to future social workers, the default recommendation for these women to file a police report is amended and an intervention rooted in the individual case can emerge.
Implications of the matrix of domination
Many approaches have been used that consider the concepts of identity, societal structures, and representation to be mutually exclusive, but the introduction of Patricia Hill Collins' matrix of domination addresses the interlocking patterns of privilege and marginalization along the lines of race, class, and gender inside social institutions as well as at the community level. With this work has come greater recognition of the various effects that each identity holds in different societal contexts, in both the micro- and macro-level structures within the systems of oppression that exist.
In female criminality
In April Bernard's article, "The Intersectional Alternative: Explaining Female Criminality", Bernard applies Patricia Hill Collins' work to the study of feminist criminology as a means of explaining the cumulative effects of identity, within a system of oppression, on women's decisions to commit crime. Bernard employs an intersectional approach to dissect the complexities that act as determinant factors in a woman's decision to partake in criminal activities, and more specifically the limiting pressures of a patriarchal society. In particular, the article is framed in response to Robert Merton's claims about deviance as a response to a lack of adequate resources to achieve cultural goals, as Bernard employs an intersectional paradigm model that explores female criminality as an expression of constraint and circumscription rather than a "strained reality". With this alternative framework, Bernard suggests that societal goals are not unanimous but are instead shaped by individuals' experiences in economic, political, and social spaces; for marginalized women, access to the means through which they build success is impacted by micro- and macro-level norms and histories that have created indicators of class (e.g. racial, economic, political, sexual) and subjugated them to limited networks. Thus, identity makes women with marginalized identities more vulnerable in the legal system, subjects them to oppressive conditions within multiple institutions, and creates a need for policies that move toward an equitable reality for them.
Nancy A. Heitzeg's article, "'Whiteness', criminality, and double standards of deviance/social control", focuses on the construct of white racial framing and its role in associating Blackness with criminality. In doing so, Heitzeg discusses the methods of social control placed on those who deviate from the norms of society; mechanisms of social control help to categorize those who are not cis-gendered and white as the "Other". Using Patricia Hill Collins' matrix of domination, Heitzeg explores how this framing shapes access to social control as well as opportunity. Deviating from the social-control baseline that sits at the intersections of race, gender, and class, among other differences, helps to solidify who is categorized as the "Other".
Heitzeg argues that markers associated with race, class, gender, and so on allow for stereotypes that permit the mitigation of a "redeemable" white middle class and the criminalization of poor Black people and other people of color. White racial framing creates space for storylines of white deviance while simultaneously creating storylines of Black criminality. This extends to the "medicalization" of whiteness, in which racial framing around whiteness allows for associations of purity and redeemability, while the opposite is imposed upon Black people. The identifiers associated with whiteness and Blackness thus create a framework which subjects Blackness to being treated as criminal.
In the context of criminality, the matrix of domination may best present itself in the statistics:
Currently the number of women imprisoned in the United States is more than one million, making them the fastest-growing population in the prison industrial complex. The number of women in prison has increased massively since the 1980s: more than eight times as many women are now reported to be either in prison or under the control of the criminal justice system.
Within these numbers, Black and brown women are an overrepresented population. Black women represented roughly thirty percent of the prison population while only representing thirteen percent of the female population in the United States. In addition, Hispanic women currently make up roughly sixteen percent of the prison population, while only making up eleven percent of the female population in the United States.
The facts surrounding the cases of Black and brown women who are incarcerated show a pattern of these women being from urban areas, with their alleged crimes tending to be ones of involvement or association rather than sole perpetration. Scholars have attributed these numbers to the over-policing of neighborhoods which house an almost exclusively minority population. Other contributing factors include variations in arrest and sentencing policies and practices, and prison expansion, especially with for-profit prisons on the rise.
In the welfare state
In the United States especially, the matrix of domination has implications within the welfare state. Several sociological studies of the welfare state take note of state-market relations while ignoring the salient roles held by other identities such as gender, race, class, language, and age, among others. Due to the nature of the welfare state, there has not been much effort to explore the existence of multiple axes of oppression, which has led categories of race, class, and gender to be treated as separate lines of analysis. In Politics, Gender, and Concepts, Gary Goertz and Amy Mazur assert that literature about the welfare state should focus on the relationship between social positions and social policies, as well as provide a framework for investigating the causal effects of class, gender, and race. As such, using the idea of a matrix of domination in these kinds of studies provides a basis for empirical research on the relationship between social positions and policies, and also for a comparison between the outcomes of social policies for marginalized and privileged women.
Intersectionality of gender and class
Benefits among class
The benefits that upper-class citizens receive from their employer are far different from that of working-class employees. This is due to the upper class taking jobs that give them a higher status or position, whereas the working class take jobs with lower status such as retail and blue-collar jobs. The most obvious benefit that differs between classes is the amount of money made. Upper-class workers receive significantly more pay than the working class, and while the upper class receive salaries, the lower class typically receive their pay based on hourly wages. Moreover, the chance of getting a raise is greater for the higher-ups. More benefits that the upper class enjoy over the working class are vacation time, flexible hours, retirement savings, and greater coverage of healthcare and insurance.
Benefits among gender
When it comes to workplace benefits such as health insurance coverage, pensions, sick leave, and disability plans, there are gender differences in whether or not these benefits are offered. Women are less likely to be offered pensions, health coverage, and disability plans; in fact, high poverty rates among elderly women have been linked to lack of pension coverage. Additionally, many female heads of household remain on welfare because they cannot find jobs with adequate health insurance coverage. When it comes to union contracts, men are also twice as likely to be covered. This gender gap in benefits coverage may be due to the fact that women tend to have higher medical expenditures than males of the same age. As a result, some of the observed gap in wages between males and females in the United States could be the result of employers compensating for the higher cost of employer-sponsored health insurance. This further perpetuates gender discrimination because it means that firms offering ESI (Employer Sponsored Insurance) will prefer to hire males. Another effect of women generally having greater healthcare expenses than men is that women are likely to place a higher value on insurance and be more inclined to pass up jobs for insurance-related reasons. This directly lowers the probability of obtaining jobs that pay higher wages and decreases a woman's bargaining power with her current employer. Indeed, health insurance has a larger (negative) effect on the job mobility of women, an effect researchers attribute to women's elevated healthcare expenses.
Wage gap among class
In the United States there is an unequal distribution of income between social classes for multiple reasons. Level of education has a great influence on average salaries. The higher the socioeconomic status (SES) of an individual, the more likely they are to graduate from high school and potentially obtain a college degree, which in turn increases their chances of a larger salary. The average salary of an individual with a high school diploma is about $35,000, rising to about $60,000 for those who go on to obtain a bachelor's degree. The gap in salary increases with each additional level of education received. Those in the lower class face more obstacles and have fewer opportunities to pursue additional education due to their lack of resources. The wage gap is even larger for individuals affected by poverty and racial barriers. Whites have a median income of about $71,000 while blacks have a median income of about $43,000. Statistics show that blacks make up 16% of public high school graduates, 14% of those enrolling in college, and only 9% of those receiving a bachelor's degree. At the same time, whites make up 59%, 58%, and 69%, respectively. That is a 61% difference between blacks not obtaining a bachelor's degree and whites graduating with one. Individuals in poverty already face a disadvantage in obtaining the same level of income as their upper-class coworkers, but when also affected by racial barriers the chances of reaching the same income are even fewer.
Wage gap among gender
There is clear intersectionality and inequality between women and men when it comes to wage gaps. Careers that pay well are often male-dominated and intolerant of women's personal needs. There has been a stable "pay gap" between men and women, which has remained at a 10–20% difference in their average earnings (Women, careers and work life preferences). When discussing wage gaps between genders, researchers take into account two questions: first, "is there differential access to jobs on the basis of gender?", and second, "is women's work perceived to have less value than comparable work done by men?". When women begin to increase their numbers in certain job positions, the status or value of the job decreases. Conceptualizing intersectionality through class, gender, and race, and then identifying the barriers that create inequality in work organizations, is found in the idea of "inequality regimes". Workplaces are prominent locations for analyzing the persistence of inequalities, because many societal inequality issues stem from such settings. In Inequality Regimes: Gender, Class, and Race in Organizations, inequality in gender, race, and class is examined through intersectionality in organizations. Joan Acker discussed Inequality Regimes: Gender, Class, and Race in Organizations in the Sociologists for Women in Society Feminist Lecture, drawing on studies conducted at a Swedish bank. Studies conducted in the 1980s showed that wage gaps between genders were increasing: men were awarded the higher-paying positions, such as local manager, and it was believed that fair wages for men should be higher than fair wages for women.
Representation among class
Social class plays a large role in people's everyday lives, yet its representation in media is not always fair. In television and popular culture, those who fall into the lower class are often portrayed differently depending on whether they are a woman or a man. Women are often portrayed as more intelligent and responsible than their husbands, almost acting as their mothers, while the male head of the household is typically portrayed as less intelligent, with some redeeming qualities, but generally not respected. Together they may be shown in a light that makes them seem lazy or dishonest. The upper class, however, does not face these same issues of representation in the media: the man of the household takes on stereotypical male qualities, while the woman takes on stereotypical female qualities, and the children provide the entertainment value, rather than the focus being on unintelligent and disorganized adults as in the lower-class model. Overall, the upper-class family unit is portrayed as organized and put together, while the lower-class family is portrayed as lazy and disorganized.
Representation among gender
Whether one is the manager of a fast food restaurant or the CEO of a Fortune 500 company, authority is power and power is advantage. But just as there is a widespread power struggle, there is a widespread equality struggle. One of the largest workplace and societal inequalities is the inequality between genders. A prime example of this is the wage gap: women in 2016 earned, on average, 82 cents to a man's dollar. This unequal pay is part of the reason that many women are the ones to leave the workforce when it is determined that a stay-at-home parent is required; if women contribute less to the household income, their quitting makes less of an impact. Women are also not granted the same opportunities for employment as men. A clear example is the U.S. military: women were banned from all combat roles until recently, and in 2011 only 14 percent of the armed forces were female, as were only 14 percent of officers. Another example is the U.S. Congress. In 2015, 80 percent of the Senate was male and only 20 percent was female; the numbers were similar for the House, at 80.6 percent male and 19.4 percent female. The gender composition of the military and the government, along with the wage gap, shines a light on the gender inequality experienced in the United States, but this inequality is felt even more strongly abroad. Some countries place strict limitations on women, not allowing them to vote, work, or even drive a car. While the U.S. is seen as a country of dreams and opportunity, this is far easier to see when it is compared with an even more unequal country. The United States has been trending toward gender equality in recent years, but it has a while to go.
Research contributions
An article found in the November 1998 issue of Social Problems details the conflict involving racial domination by identifying the complexity African-Americans face. In many cases, sociologists and laypersons alike are often limited in their approach to the problem. Michelle Byng, in "Mediating Discrimination: Oppression among African-American Muslim Women"—the 1998 article—brings to focus new approaches to understanding discrimination, but also, she writes to illustrate the many overlooked opportunities in which the discriminated are able to empower themselves in certain situations.
Intersectionality in court cases
There are certain cases that are widely cited in discussing intersectional discrimination.
In DeGraffenreid v. General Motors, Emma DeGraffenreid and four other Black female production workers who had been laid off took the company to court, claiming that it was violating Title VII of the Civil Rights Act of 1964 because "it perpetrated past discriminatory practices of not hiring Black females." The court looked at each of the categories, race and gender, separately, and therefore missed the discrimination faced by a person who is both African-American and female. "It was argued, black women can expect little protection as long as approaches, such as that in DeGraffenreid, which completely obscure problems of intersectionality prevail."
Another case in which intersectionality was at the core of the problem is Maivan Lam v. University of Hawai'i. Maivan Lam was twice passed over when she applied to be the Director of the Law School's Pacific Asian Legal Studies Program. The first time, Professor Lam made it to the final round, but the university cancelled the search before offering her the job, even though she was the best candidate available. The second time, the position was offered to another candidate; when that candidate refused to accept, the search was simply cancelled without the position being offered to Professor Lam. When Lam brought this situation up with the Dean, he suggested reopening the search so that male candidates who had not applied on time could submit their applications. It is stated, "Early in the 1989-90 academic year, the new appointments committee reviewed applications for a commercial law position. At one meeting, a male committee member stated that the Law School should not have two women teaching commercial law. This comment was reported to the Dean, who said that he recognized that the professor had difficulty dealing with women but took no action to remove him from the committee or otherwise to remedy the problem." There was clear intersectionality here, as Professor Lam argued not only with regard to her race but also with regard to how her gender affected her position.
In the case Jefferies v. Harris County Community Action Association (April 21, 1980), Dafro M. Jefferies claimed that her former employer failed to promote her to a higher position because of her race and sex. In 1967 she was employed by the Harris County Community Action Association as a Secretary to the Director of Programs, and she was later promoted to Personal Interviewer in 1970. Everything seemed to be moving in a positive direction for her; however, between 1971 and April 1974, Jefferies applied for promotions in various positions and departments without any luck. She realized that her employer was discriminating against her when two Field Representative positions opened and she immediately applied, only to find that the positions had already been filled by a white woman and a Black man on the same day she was told about the vacancies. The company had purposefully told her about the open positions knowing that they were already filled by other staff members. After several complaints to the company, Jefferies was placed on probation on April 23, 1974, and in June 1974 she was terminated because she had called the company out for discriminating against her on the basis of her race and sex. She argued that there was clear intersectionality in this case: she was not promoted to a higher position because she was both Black and female. However, the court ultimately disagreed with her, finding that there was no concrete evidence to support her case.
See also
Black feminism
Intersectionality
Multiple jeopardy
Triple oppression
Further reading
Collins, Patricia Hill (2000). Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. New York: Routledge.
References
External links
Patricia Hill Collins, Black Feminist Thought in the Matrix of Domination
The Matrix of Domination, Prof Pat's World of Women's Studies
Feminist terminology
Majority–minority relations
Sociological theories
Intersectionality
The Dawn of Everything
The Dawn of Everything: A New History of Humanity is a 2021 book by anthropologist and activist David Graeber and archaeologist David Wengrow. It was first published in the United Kingdom on 19 October 2021 by Allen Lane (an imprint of Penguin Books).
Graeber and Wengrow finished the book around August 2020. Its American edition is 704 pages long, including a 63-page bibliography. It was a finalist for the Orwell Prize for Political Writing (2022).
Describing the diversity of early human societies, the book critiques traditional narratives of history's linear development from primitivism to civilization. Instead, The Dawn of Everything posits that humans lived in large, complex, but decentralized polities for millennia.
The Dawn of Everything became an international bestseller, translated into more than thirty languages. It was widely reviewed in the popular press and in leading academic journals, as well as in activist circles, with divided opinions being expressed across the board. Both favorable and critical reviewers noted its challenge to existing paradigms in the study of human history.
Summary
The authors open the book by suggesting that current popular views on the progress of western civilization, as presented by Francis Fukuyama, Jared Diamond, Yuval Noah Harari, Charles C. Mann, Steven Pinker, and Ian Morris, are not supported by anthropological or archaeological evidence, but owe more to philosophical dogmas inherited unthinkingly from the Age of Enlightenment. The authors refute the Hobbesian and Rousseauian view on the origin of the social contract, stating that there is no single original form of human society. Moreover, they argue that the transition from foraging to agriculture was not a civilization trap that laid the ground for social inequality, and that throughout history, large-scale societies have often developed in the absence of ruling elites and top-down systems of management.
Rejecting the "origins of inequality" as a framework for understanding human history, the authors consider where this question originated, and find the answers in a series of encounters between European settlers and the Indigenous populations of North America. They argue that the latter provided a powerful counter-model to European civilisation and a sustained critique of its hierarchy, patriarchy, punitive law, and profit-motivated behaviour, which entered European thinking in the 18th century through travellers' accounts and missionary relations, to be widely imitated by the thinkers of the Enlightenment. They illustrate this process through the historical example of the Wendat leader Kondiaronk, and his depiction in the best-selling works of the Baron Lahontan, who had spent ten years in the colonies of New France. The authors further argue that the standard narrative of social evolution, including the framing of history as modes of production and a progression from hunter-gatherer to farmer to commercial civilisation, originated partly as a way of silencing this Indigenous critique, and recasting human freedoms as naive or primitive features of social development.
Subsequent chapters develop these initial claims with archaeological and anthropological evidence. The authors describe ancient and modern communities that self-consciously abandoned agricultural living, employed seasonal political regimes (switching back and forth between authoritarian and communal systems), and constructed urban infrastructure with egalitarian social programs. The authors then present extensive evidence for the diversity and complexity of political life among non-agricultural societies on different continents, from Japan to the Americas, including cases of monumental architecture, slavery, and the self-conscious rejection of slavery through a process of cultural schismogenesis. They then examine archaeological evidence for processes that eventually led to the adoption and spread of agriculture, concluding that there was no Agricultural Revolution, but a process of slow change, taking thousands of years to unfold on each of the world's continents, and sometimes ending in demographic collapse (e.g. in prehistoric Europe). They conclude that ecological flexibility and sustained biodiversity were key to the successful establishment and spread of early agriculture.
The authors then go on to explore the issue of scale in human history, with archaeological case studies from early China, Mesoamerica, Europe (Ukraine), the Middle East, South Asia, and Africa (Egypt). They conclude that contrary to standard accounts, the concentration of people in urban settlements did not lead mechanistically to the loss of social freedoms or the rise of ruling elites. While acknowledging that in some cases, social stratification was a defining feature of urban life from the beginning, they also document cases of early cities that present little or no evidence of social hierarchies, lacking such elements as temples, palaces, central storage facilities, or written administration, as well as examples of cities like Teotihuacan, that began as hierarchical settlements, but reversed course to follow more egalitarian trajectories, providing high quality housing for the majority of citizens. They also discuss at some length the case of Tlaxcala as an example of Indigenous urban democracy in the Americas, before the arrival of Europeans, and the existence of democratic institutions such as municipal councils and popular assemblies in ancient Mesopotamia.
Synthesizing these findings, the authors move to discovering underlying factors for the rigid, hierarchical, and highly bureaucratized political system of contemporary civilization. Rejecting the category of "the State" as a trans-historical reality, they instead define three basic sources of domination in human societies: control over violence (sovereignty), control over information (bureaucracy), and charismatic competition (politics). They explore the utility of this new approach by comparing examples of early centralised societies that elude definition as states, such as the Olmec and Chavín de Huántar, as well as the Inca, China in the Shang dynasty, the Maya Civilization, and Ancient Egypt. From this they go on to argue that these civilisations were not direct precursors to our modern states, but operated on very different principles. The origins of modern states, they conclude, are shallow rather than deep, and owe more to colonial violence than to social evolution. Returning to North America, the authors then bring the story of the Indigenous critique and Kondiaronk full circle, showing how the values of freedom and democracy encountered by Europeans among the Wendat and neighbouring peoples had historical roots in the rejection of an earlier system of hierarchy, with its focus at the urban center of Cahokia on the Mississippi.
Based on their accumulated discussions, the authors conclude by proposing a reframing of the central questions of human history. Instead of the origins of inequality, they suggest that our central dilemma is the question of how modern societies have lost the qualities of flexibility and political creativity that were once more common. They ask how we have apparently "got stuck" on a single trajectory of development, and how violence and domination became normalised within this dominant system. Without offering definitive answers, the authors end the book by suggesting lines of further investigation. These focus on the loss of three basic forms of social freedom, which they argue were once common:
the freedom to escape one's surroundings and move away,
the freedom to disobey arbitrary authority, and
the freedom to reimagine and reconstruct one's society in a different form.
They emphasize the loss of women's autonomy, and the insertion of principles of violence into basic notions of social care at the level of domestic and family relations, as crucial factors in establishing more rigid political systems.
The book ends by suggesting that narratives of social development in which western civilization is self-appointed to be the highest point of achievement to date in a linear progression are largely myths, and that possibilities for social emancipation can be found in a more accurate understanding of human history, based on scientific evidence that has come to light only in the last few decades, with the assistance of the field of anthropology and archaeology.
Reception
According to Book Marks, the book received "positive" reviews based on 16 critic reviews: 5 "rave", 6 "positive", and 5 "mixed".
The book entered The New York Times best-seller list at No. 2 for the week of November 28, 2021, while its German translation entered the Der Spiegel bestseller list at No. 1. It was named a Sunday Times, Observer and BBC History Book of the Year. The book was shortlisted for the Orwell Prize for Political Writing. Historian David Edgerton, who chaired the judges panel, praised the book, saying it "genuinely is a new history of humanity" and a "celebration of human freedom and possibility, based on a reexamination of prehistory, opening up the past to make new futures possible." Writing for The Hindu, G. Sampath noted that two strands run through the book: "the consolidation of a corpus of archaeological evidence, and a history of ideas." Inspired by "the rediscovery of an unknown past," he asks, "can humanity imagine a future that's more worthy of itself?"
Gideon Lewis-Kraus said in The New Yorker that the book "aspires to enlarge our political imagination by revitalizing the possibilities of the distant past". In The Atlantic, William Deresiewicz described the book as "brilliant" and "inspiring", stating that it "upends bedrock assumptions about 30,000 years of change." The anthropologist Giulio Ongaro stated in Jacobin and Tribune that "Graeber and Wengrow do to human history what [Galileo and Darwin] did to astronomy and biology respectively". In Bookforum, Michael Robbins called the book both "maddening" and "wonderful." Historian of science Emily Kern, writing in the Boston Review, called the book "erudite" and "funny", suggesting that "once you start thinking like Graeber and Wengrow, it's difficult to stop." Kirkus Reviews described the book as "an ingenious new look at 'the broad sweep of human history' and many of its 'foundational' stories" and as "a fascinating, intellectually challenging big book about big ideas." Andrew Anthony in The Observer said the authors persuasively replace "the idea of humanity being forced along through evolutionary stages with a picture of prehistoric communities making their own conscious decisions of how to live".
Historian David Priestland argued in The Guardian that Peter Kropotkin had more powerfully addressed the sorts of questions that a persuasive case for modern-day anarchism should address, but lauded the authors' historical "myth-busting" and called it "an exhilarating read". Philosopher Kwame Anthony Appiah argued in The New York Review of Books that there is a "discordance between what the book says and what its sources say," while also stating that the book, which is "chockablock with archaeological and ethnographic minutiae, is an oddly gripping read". NYRB subsequently published an extended exchange between Wengrow and Appiah under the title "The Roots of Inequality" in which Wengrow expanded on the book's use of archaeological sources, while Appiah concluded that "Graeber and Wengrow's argument against historical determinism—against the alluring notion that what happened had to have happened—is itself immensely valuable." Another philosopher, Helen De Cruz, wrote that the book offers "a valuable exercise in philosophical genealogy by digging up the origins of our political and social dysfunction," but also criticised the book for neglecting a range of other possible methodologies.
Writing in the Chicago Review, historian Brad Bolman and archaeologist Hannah Moots suggest that what makes the book so important is "its attempt to make accessible a vast array of recent anthropological and archaeological evidence; to read it against the grain; and to synthesize those findings into a novel story about what exactly happened in our long past," drawing comparisons with the work of V. Gordon Childe. Reviewing for American Antiquity archaeologist Jennifer Birch called the book 'a resounding success', while archaeologist and anthropologist Rosemary Joyce, reviewing for American Anthropologist, wrote that the book succeeds in providing "provocative thinking about major questions of human history" and a "convincing demonstration of new frameworks of anthropological comparison".
Archaeologist Mike Pitts, reviewing for British Archaeology, described the book as "glorious" and suggested that its joint authorship by an anthropologist and an archaeologist "gives the book a depth and rigour rarely seen in the genre". Reviewing for Scientific American, John Horgan described the book as "both a dense, 692-page scholarly inquiry into the origins of civilization and an exhilarating vision of human possibility".
In Anthropology Today, Arjun Appadurai accused the book of "swerving to avoid a host of counter-examples and counter-arguments" while also describing the book's "fable" as "compelling". David Wengrow responded in the same issue. Anthropology Today later published a letter to the editor in which political ecologist Jens Friis Lund writes "Appadurai never discloses where and how exactly Graeber and Wengrow go wrong," calling the book a "monumental empirical effort" and an "exemplar of interdisciplinary engagement." In a subsequent issue, Anthropology Today published a full review of the book by social anthropologist Luiz Costa, who suggested it contains "a range of examples of societies drawing on their own past experiences, or those of neighbouring peoples, to shape future ways of life - not in a voluntaristic sense, but within specific social patterns, considering historical events." Costa compared The Dawn of Everything to classic works by Claude Lévi-Strauss in terms of its scope and importance. Another anthropologist, Thomas Hylland Eriksen, called the book an "intellectual feast".
The historian David A. Bell, responding solely to Graeber and Wengrow's arguments about the Indigenous origins of Enlightenment thought and Jean-Jacques Rousseau, accused the authors of coming "perilously close to scholarly malpractice." Historian and philosopher Justin E. H. Smith suggested "Graeber and Wengrow are to be credited for helping to re-legitimise this necessary component of historical anthropology, which for better or worse is born out of the history of the missions and early modern global commerce."
Anthropologist Durba Chattaraj claimed that the book includes "elisions, slippages, and too-exaggerated leaps" when referring to archaeology from India, but stated that its authors are "extremely rigorous and meticulous scholars", and that reading the book from India "expands our worlds and allows us to step outside of a particular postcolonial predicament." Anthropologist Matthew Porges, writing in The Los Angeles Review of Books suggested the book is "provocative, if not necessarily comprehensive", and that its "great value is that it provides a much better point of departure for future explorations of what was actually happening in the past". Anthropologist Richard Handler claimed that the book's endnotes "often reveal that a particularly startling interpretation of archaeological evidence depends on one or two sources taken from vast bodies of literature" while also claiming that the stories told "are stories we need and want to hear."
Writing for the New York Journal of Books, another anthropologist, James H. McDonald, suggested that The Dawn of Everything "may well prove to be the most important book of the decade, for it explodes deeply held myths about the inevitability of our social lives dominated by the state". Anthropologist James Suzman in the Literary Review claimed that the book doesn't "engage with the vast historical and academic literature on recent African ... small scale hunter-gatherers", but also maintained that the book is "consistently thought-provoking" in "forcing us to re-examine some of the cosy assumptions about our deep past". Writing for Black Perspectives Kevin Suemnicht noted that the book develops ideas proposed by Orlando Patterson to account for the loss of human freedoms, and argued that the book confirms the "Fanonian positions within the Black Radical Tradition that this world-system is inherently anti-Black". In Antiquity, archaeologist Rachael Kiddey suggested that the book arose from "playful conversations between two eminently qualified friends" and also that it contributes to "feminist revisions of the development of knowledge."
In Cliodynamics various authors praised the book while also making criticisms. Gary M. Feinman accused Graeber and Wengrow of using "cherry-picked and selectively presented examples". Another archaeologist Michael E. Smith criticized the book for "problems of evidence and argumentation". Ian Morris claimed some of the book's arguments "run more on rhetoric than on method," but praised it as "a work of careful research and tremendous originality." Historian Walter Scheidel criticized the book for its lack of "materialist perspectives", but also called it "timely and stimulating".
The book's reception among the political left was polarizing. Several reviewers suggested that the book was written from an anarchist perspective. Sébastien Doubinsky called the book "an important work, both as a summary of recent discoveries in the fields of archaeology and anthropology and as an eye-opener on the structures of dominant narratives". In Cosmonaut Magazine, Nicolas Villarreal described the book as "a series of brilliant interventions" while criticising the authors for not appreciating that ideology and politics are "the source of our profound unfreedom." CJ Sheu said the book is "simply put a masterpiece" while Peter Isackson in Fair Observer described the book as "nothing less than a compelling invitation to reframe and radically rethink our shared understanding of humanity's history and prehistory." Eliza Delay, writing for Resilience called the book "a revelation" and a "sweeping revision of how we see ourselves." Socialist activist and anthropologist Chris Knight stated that the "core message" of the book was rejecting Engels' primitive communism, and called The Dawn of Everything "incoherent and wrong" for beginning "far too late" and "systematically side-stepping the cultural flowering that began in Africa tens of thousands of years before Homo sapiens arrived in Europe". In a longer review, Knight did, however, emphasize that the book's "one important point" was "its advocacy of [political] oscillation".
Reviewers in the Ecologist expressed the view that the authors "fail to engage with the enormous body of new scholarship on human evolution" while, at the same time, calling the book a "howling wind of fresh air". Reviewing for The Rumpus Beau Lee Gambold calls the book "at once dense, funny, thorough, joyful, unabashedly intelligent, and infinitely readable." Historian Ryne Clos claimed that the book partly relies on "a specious, exaggerated interpretation of the historical evidence" but that it is also "incredibly informative".
Historian Dominic Alexander, writing for socialist organization Counterfire questioned the evidence used in the book and characterized its rejection of "the teleological habit of thought" as a "profoundly debilitating approach" to political change. Market anarchist Charles Johnson noted the book's "idiosyncratic readings of sources". In The Nation, historian Daniel Immerwahr characterised the book as "less a biography of the species than a scrapbook, filled with accounts of different societies doing different things," while praising its refusal "to dismiss long-ago peoples as corks floating on the waves of prehistory. Instead, it treats them as reflective political thinkers from whom we might learn something".
Writing for Artforum, Simon Wu called the book a "bracing rewrite of human history". Bryan Appleyard in his review for The Sunday Times called it "pacey and potentially revolutionary." Reviewing for Science, Erle Ellis described The Dawn of Everything as "a great book that will stimulate discussions, change minds, and drive new lines of research".
References
External links
2021 non-fiction books
Books by David Graeber
English-language books
History books
Anthropology books
Archaeology books
History books about civilization
Works about the theory of history
Allen Lane (imprint) books
Collaborative non-fiction books
Books published posthumously
Paleoproterozoic
The Paleoproterozoic Era (also spelled Palaeoproterozoic) is the first of the three sub-divisions (eras) of the Proterozoic eon, and the longest era of the Earth's geological history, spanning from 2.5 to 1.6 billion years ago (2.5–1.6 Ga), a duration of roughly 900 million years. It is further subdivided into four geologic periods, namely the Siderian, Rhyacian, Orosirian and Statherian.
Paleontological evidence suggests that the Earth's rotational rate ~1.8 billion years ago equated to 20-hour days, implying a total of ~450 days per year. It was during this era that the continents first stabilized.
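The relationship between these two figures can be checked with a rough calculation (a back-of-the-envelope sketch, assuming the absolute length of the year has stayed close to its modern value of about 8,766 hours, since the Earth's orbital period has changed little even as its rotation slowed):

\[
\text{days per year} \approx \frac{365.25 \times 24\ \text{hours}}{20\ \text{hours per day}} \approx 438
\]

A rotation period slightly under 20 hours (about 19.5 hours) yields the ~450 days per year cited above.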
Atmosphere
The Earth's atmosphere was originally a weakly reducing atmosphere consisting largely of nitrogen, methane, ammonia, carbon dioxide and inert gases, in total comparable to Titan's atmosphere. When oxygenic photosynthesis evolved in cyanobacteria during the Mesoarchean, the increasing amount of byproduct dioxygen began to deplete the reductants in the ocean, land surface and the atmosphere. Eventually all surface reductants (particularly ferrous iron, sulfur and atmospheric methane) were exhausted, and the atmospheric free oxygen levels soared permanently during the Siderian and Rhyacian periods in an aerochemical event called the Great Oxidation Event, which brought atmospheric oxygen from near none to up to 10% of the modern level.
Life
At the beginning of the preceding Archean eon, almost all existing lifeforms were single-celled prokaryotic anaerobic organisms whose metabolism was based on a form of cellular respiration that did not require oxygen, and autotrophs were either chemosynthetic or relied upon anoxygenic photosynthesis. After the Great Oxygenation Event, the then mainly archaea-dominated anaerobic microbial mats were devastated, as free oxygen is highly reactive and biologically toxic to cellular structures. This was compounded by a 300-million-year-long global icehouse event known as the Huronian glaciation, caused at least in part by the depletion of atmospheric methane, a powerful greenhouse gas; together these resulted in what is widely considered one of the first and most significant mass extinctions on Earth. The organisms that thrived after the extinction were mainly aerobes that evolved bioactive antioxidants and eventually aerobic respiration, and surviving anaerobes were forced to live symbiotically alongside aerobes in hybrid colonies, which enabled the evolution of mitochondria in eukaryotic organisms.
The Palaeoproterozoic represents the era from which the oldest cyanobacterial fossils, those of Eoentophysalis belcherensis from the Kasegalik Formation in the Belcher Islands of Nunavut, are known. By 1.75 Ga, thylakoid-bearing cyanobacteria had evolved, as evidenced by fossils from the McDermott Formation of Australia.
Many crown node eukaryotes (from which the modern-day eukaryotic lineages would have arisen) have been approximately dated to around the time of the Paleoproterozoic Era.
While there is some debate as to the exact time at which eukaryotes evolved, current understanding places it somewhere in this era. Statherian fossils from the Changcheng Group in North China provide evidence that eukaryotic life was already diverse by the late Palaeoproterozoic.
Geological events
During this era, the earliest global-scale continent-continent collision belts developed. The associated continent and mountain building events are represented by the 2.1–2.0 Ga Trans-Amazonian and Eburnean orogens in South America and West Africa; the ~2.0 Ga Limpopo Belt in southern Africa; the 1.9–1.8 Ga Trans-Hudson, Penokean, Taltson–Thelon, Wopmay, Ungava and Torngat orogens in North America, the 1.9–1.8 Ga Nagssugtoqidian Orogen in Greenland; the 1.9–1.8 Ga Kola–Karelia, Svecofennian, Volhyn-Central Russian, and Pachelma orogens in Baltica (Eastern Europe); the 1.9–1.8 Ga Akitkan Orogen in Siberia; the ~1.95 Ga Khondalite Belt; the ~1.85 Ga Trans-North China Orogen in North China; and the 1.8-1.6 Ga Yavapai and Mazatzal orogenies in southern North America.
That pattern of collision belts supports the formation of a Proterozoic supercontinent named Columbia or Nuna. The fact that continental collisions suddenly led to large-scale mountain building is interpreted as a result of increased biomass and carbon burial during and after the Great Oxidation Event: subducted carbonaceous sediments are hypothesized to have lubricated compressive deformation and led to crustal thickening.
Felsic volcanism in what is now northern Sweden led to the formation of the Kiruna and Arvidsjaur porphyries.
The lithospheric mantle of Patagonia's oldest blocks formed.
See also
, which immediately preceded the Paleoproterozoic
References
External links
EssayWeb Paleoproterozoic Era
First breath: Earth's billion-year struggle for oxygen, New Scientist, #2746, 5 February 2010, by Nick Lane. Posits an earlier, much longer snowball period, c. 2.4 – c. 2.0 Gya, triggered by the Great Oxygenation Event.
The information on eukaryotic lineage diversification was gathered from a New York Times opinion blog by Olivia Judson. See the text here: .
Paleoproterozoic (chronostratigraphy scale)
Geological eras
Alterity
Alterity is a philosophical and anthropological term meaning "otherness", that is, the "other of two" (Latin alter). It is also increasingly being used in media to express something other than "sameness", or something outside of tradition or convention.
Philosophy
Within the phenomenological tradition, alterity is usually understood as the entity in contrast to which an identity is constructed, and it implies the ability to distinguish between self and not-self, and consequently to assume the existence of an alternative viewpoint. The concept was further developed by Emmanuel Levinas in a series of essays, collected in Altérité et transcendance (Alterity and Transcendence) (1995).
Castoriadis
For Cornelius Castoriadis (L'institution imaginaire de la société, 1975; The Imaginary Institution of Society, 1997) radical alterity/otherness denotes the element of creativity in history: "For what is given in and through history is not the determined sequence of the determined but the emergence of radical otherness, immanent creation, non-trivial novelty."
Baudrillard
For Jean Baudrillard (Figures de l'alterité, 1994; Radical Alterity, 2008), alterity is a precious and transcendent element and its loss would seriously impoverish a world culture of increasing sameness and "arrogant, insular cultural narcissism."
Spivak
Gayatri Chakravorty Spivak's theory of alterity was introduced in a 2014 symposium titled Remaking History, the intention of which was to challenge the masculine orthodoxy of history writing.
According to Spivak, it is imperative for one to uncover the histories and inherent historical behaviors in order to exercise an individual right to authentic experience, identity and reality. Within the concept of socially constructed histories one "must take into account the dangerous fragility and tenacity of these concept-metaphors."
Spivak recalls her personal history: "As a postcolonial, I am concerned with the appropriation of 'alternative history' or 'histories'. I am not a historian by training. I cannot claim disciplinary expertise in remaking history in the sense of rewriting it. But I can be used as an example of how historical narratives are negotiated. The parents of my parents' grandparents' grandparents were made over, not always without their consent, by the political, fiscal and educational intervention of British imperialism, and now I am independent. Thus I am, in the strictest sense, a postcolonial."
Spivak uses four "master words" to identify the modes of being that create alterity: "Nationalism, Internationalism, Secularism and Culturalism." Furthermore, tools for developing alternative histories include: "gender, race, ethnicity, class".
Other thinkers
Jeffery Nealon, in Alterity Politics: Ethics and Performative Subjectivity, argues that "ethics is constituted as an inexorable affirmative response to different identities, not through an inability to understand or totalize the other."
The University of Chicago's Theories of Media: Keywords Glossary includes a long article on alterity by Joshua Wexler. Wexler writes: "Given the various theorists' formulations presented here, the mediation of alterity or otherness in the world provides a space for thinking about the complexities of self and other and the formation of identity."
The concept of alterity is also being used in theology and in spiritual books meant for general readers. This is not out of place because, for believers in the Judeo-Christian tradition, God is the ultimate 'Other'. Alterity has also been used to describe the goal of many Christians, to become themselves deeply "other" than the usual norms of behavior and patterns of thought of the secular culture at large. Enzo Bianchi in Echoes of the Word expresses this well, "Meditation always seeks to open us to alterity, love and communion by guiding us toward the goal of having in ourselves the same attitude and will that were in Christ Jesus."
Jadranka Skorin-Kapov in The Aesthetics of Desire and Surprise: Phenomenology and Speculation, relates alterity or otherness to newness and surprise, "The signification of the encounter with otherness is not in its novelty (or banal newness); on the contrary, newness has signification because it reveals otherness, because it allows the experience of otherness. Newness is related to surprise, it is a consequence of the encounter... Metaphysical desire is the acceptivity of irreducible otherness. Surprise is the consequence of the encounter. Between desire and surprise there is a pause, a void, a rupture, an immediacy that cannot be captured and presented."
Anthropology
In anthropology, alterity has been used by scholars such as Nicholas Dirks, Johannes Fabian, Michael Taussig and Pauline Turner Strong to refer to the construction of "cultural others".
Musicology
The term has gained further use in seemingly remote disciplines such as historical musicology, where it is employed by John Michael Cooper in a study of Johann Wolfgang von Goethe and Felix Mendelssohn.
See also
Abjection
Decolonization
Heterogeneity
Heterophenomenology
Imperialism
Indeterminacy in philosophy
Internationalism
Nationalism
Other
Pedagogy
Postcolonialism
Secularism
Self-consciousness
Subjectivity
Uncanny
References
Further reading
Martin Buber (1937), I and Thou.
Chan-Fai Cheung, Tze-Wan Kwan and Kwok-ying Lau (eds.), Identity and Alterity. Phenomenology and Cultural Traditions. Verlag Königshausen & Neumann, Würzburg 2009 (Orbis Phaenomenologicum, Perspektiven, Neue Folge Band 14)
Cooper, John Michael (2007) Mendelssohn, Goethe, and the Walpurgis Night. University of Rochester Press.
Fabian, Johannes (1983) Time and the Other: How Anthropology Makes Its Object. Columbia University Press.
Levinas, Emmanuel (1999[1970]) Alterity and Transcendence. (Trans. Michael B. Smith) Columbia University Press.
Maranhao, Tullio (ed.), Anthropology and the Question of the Other. Paideuma 44 (1998).
Nealon, Jeffrey (1998) Alterity Politics: Ethics and Performative Subjectivity. Duke University Press.
Půtová, B.: Freak Shows. Otherness of the Human Body as a Form of Public Presentation. Anthropologie: International Journal of Human Diversity and Evolution 56(2), 2018, s. 91–102
Strong, Pauline Turner (1999) Captive Selves, Captivating Others: The Politics and Poetics of Colonial American Captivity *Narratives. Westview Press/Perseus Books.
Taussig, Michael (1993) Mimesis and Alterity. Routledge.
External links
Otherness - Dictionary of war
Phenomenology
Existentialist concepts
Postcolonialism
Newly industrialized country
The category of newly industrialized country (NIC), newly industrialized economy (NIE) or middle income country is a socioeconomic classification applied to several countries around the world by political scientists and economists. They represent a subset of developing countries whose economic growth is much higher than that of other developing countries; and where the social consequences of industrialization, such as urbanization, are reorganizing society.
Definition
NICs are countries whose economies have not yet reached developed-country status but have, in a macroeconomic sense, outpaced their developing counterparts. Such countries are still considered developing nations; they differ from other developing nations mainly in that their growth is much higher over a shorter period of time. Another characterization of NICs is that of countries undergoing rapid economic growth (usually export-oriented). Incipient or ongoing industrialization is an important indicator of an NIC.
Characteristics of newly industrialized countries
Newly industrialized countries can bring about greater stability in a country's social and economic status, allowing the people living in these nations to experience better living conditions and lifestyles. Another characteristic of newly industrialized countries is further development of government structures, such as democracy, the rule of law, and less corruption. Other examples of improved quality of life that people in such countries experience include better transportation, electricity, and access to water compared with other developing countries, as well as lower infant mortality rates.
Historical context
The term came into use around 1970, when the Four Asian Tigers of Taiwan, Singapore, Hong Kong and South Korea rose to become globally competitive in science, technological innovation and economic prosperity. They were the original NICs of the 1970s and 1980s, with exceptionally fast industrial growth since the 1960s; all four have since graduated into high-tech, industrialized developed countries with wealthy high-income economies. There is a clear distinction between these countries and the countries now considered NICs. In particular, the combination of an open political process, high GNI per capita, and a thriving, export-oriented economic policy has shown that these East Asian economic tigers have roughly come to match developed countries such as those of Western Europe, as well as Canada, Japan, Australia, New Zealand and the United States.
All four countries are classified as high-income economies by the World Bank and developed countries by the International Monetary Fund (IMF) and U.S. Central Intelligence Agency (CIA). All of the Four Asian Tigers, like Western European countries, have a Human Development Index considered "very high" by the United Nations.
Current
The table below presents the list of countries consistently considered NICs by different authors and experts. Turkey and South Africa were classified among the world's 34 developed countries (DCs) by the CIA World Factbook in 2008. Turkey became a founding member of the OECD in 1961 and Mexico joined in 1994. The G8+5 group is composed of the original G8 members in addition to China, India, Mexico, South Africa and Brazil. The members of the G20 include Brazil, China, India, Indonesia, Mexico, South Africa and Turkey.
For China and India, the immense population of these two countries (each with over 1.4 billion people as of May 2024) means that per capita income will remain low even if either economy surpasses that of the United States in overall GDP. When GDP per capita is calculated according to purchasing power parity (PPP), this takes into account the lower costs of living in each newly industrialized country. Nominal GDP per capita typically is an indicator for living standards in a given country as well.
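As a minimal worked illustration of both points, using entirely hypothetical figures rather than statistics for any actual country: a very large economy divided by a very large population still yields a modest per-capita value, and converting at a purchasing-power rate rather than the market exchange rate raises that value where domestic prices are low.

\[
\text{nominal GDP per capita} = \frac{120\times 10^{12}\ \text{local units} \div 20\ \text{units per US\$}}{1.4\times 10^{9}\ \text{people}} \approx \text{US\$}4{,}300
\]
\[
\text{PPP GDP per capita} = \frac{120\times 10^{12}\ \text{local units} \div 8\ \text{units per international \$}}{1.4\times 10^{9}\ \text{people}} \approx \$10{,}700
\]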
Brazil, China, India, Mexico and South Africa meet annually with the G8 countries to discuss financial topics and climate change, due to their economic importance in today's global market and environmental impact, in a group known as G8+5.
Other
Authors compile lists of countries according to different methods of economic analysis. Sometimes a work ascribes NIC status to a country that other authors do not consider an NIC. This is the case for countries such as Argentina, Egypt, Sri Lanka and Russia.
Criticism
NICs usually benefit from comparatively low wage costs, which translates into lower input prices for suppliers. As a result, it is often easier for producers in NICs to outperform and outproduce factories in developed countries, where the cost of living is higher, and trade unions and other organizations have more political sway. This comparative advantage is often criticized by advocates of the fair trade movement.
Problems
While South Africa is considered wealthy on a wealth-per-capita basis, economic inequality is persistent and extreme poverty remains high in the country. South Africa is an NIC in which about 34% of the population is unemployed and living in poverty.
Mexico's economic growth is hampered in some areas by an ongoing drug war.
Other NICs face common problems such as widespread corruption and political instability, as well as other circumstances that cause them to face the middle income trap.
See also
Emerging market
Flying geese paradigm
Global North and Global South
Industrialisation
Mechanization
Mass production
Science in newly industrialized countries
Second World
Groupings
BRIC / MINT / Next Eleven
BRICS
CIVETS
G8+5
G20
G20 developing nations
References
Economic country classifications
Industrialisation
International development
Economic development
Industrial history
Slavery in medieval Europe
Slavery in medieval Europe was widespread. Europe and North Africa were part of a highly interconnected trade network across the Mediterranean Sea, and this included slave trading. During the medieval period (500–1500), wartime captives were commonly forced into slavery. As European kingdoms transitioned to feudal societies, a different legal category of unfree persons—serfdom—began to replace slavery as the main economic and agricultural engine. Throughout medieval Europe, the perspectives and societal roles of enslaved peoples differed greatly, from some being restricted to agricultural labor to others being positioned as trusted political advisors.
Early Middle Ages
Slavery in the Early Middle Ages (500–1000) was initially a continuation of earlier Roman practices from late antiquity, and it was sustained by an influx of captives in the wake of the social chaos caused by the barbarian invasions of the Western Roman Empire. With the continuation of Roman legal practices of slavery, new laws and practices concerning slavery spread throughout Europe. For example, the Welsh laws of Hywel the Good included provisions dealing with slaves. In the Germanic realms, laws instituted the enslavement of criminals: the Visigothic Code, for instance, prescribed enslavement for criminals who could not pay financial penalties for their crimes, and imposed it as a punishment for various other crimes. Such criminals would become slaves to their victims, often with their property.
As these peoples Christianized, the church worked more actively to reduce the practice of holding coreligionists in bondage. St. Patrick, who himself was captured and enslaved at one time, protested an attack that enslaved newly baptized Christians in his letter to the soldiers of Coroticus. The restoration of order and the growing power of the church slowly transmuted the late Roman slave system of Diocletian into serfdom.
Another major factor was the rise of Bathilde (626–680), queen of the Franks, who had been enslaved before marrying Clovis II. When she became regent, her government outlawed the trading of Christian slaves throughout the Merovingian empire. About ten percent of England's population recorded in the Domesday Book (1086) were slaves, despite chattel slavery of English Christians being nominally discontinued after the 1066 conquest. It is difficult to be certain about slave numbers, however, since the old Roman word for slave (servus) continued to be applied to unfree people whose status was later reflected by the term serf.
Slave trade
Demand from the Islamic world, which arose in the seventh century, dominated the slave trade in Europe during the medieval period (500–1500). For most of that time, the sale of Christian slaves to non-Christians was banned. In the pactum Lotharii of 840 between Venice and the Carolingian Empire, Venice promised not to buy Christian slaves in the Empire, and not to sell Christian slaves to Muslims. The Church prohibited the export of Christian slaves to non-Christian lands, for example in the Council of Koblenz in 922, the Council of London in 1102, and the Council of Armagh in 1171.
As a result, most Christian slave merchants focused on moving slaves from non-Christian areas to Muslim Spain, North Africa, and the Middle East; and most non-Christian merchants, although not bound by the Church’s rules, focused on Muslim markets as well. Arabic silver dirhams, presumably exchanged for slaves, are plentiful in eastern Europe and Southern Sweden, indicating trade routes from Slavic to Muslim territory.
Italian merchants
By the reign of Pope Zachary (741–752), Venice had established a thriving slave trade, enslaving people in Italy, among other places, and selling them to the Moors in Northern Africa (Zachary himself reportedly forbade such traffic out of Rome). When the sale of Christians to Muslims was banned (pactum Lotharii), the Venetian slave traders began to sell Slavs and other Eastern European non-Christian slaves in greater numbers via the Balkan slave trade. Caravans of slaves traveled from Eastern Europe, via the Prague slave trade through Alpine passes in Austria, to reach Venice. A record of tolls paid in Raffelstetten (903–906), near St. Florian on the Danube, describes such merchants. Some were Slavic themselves, from Bohemia and the Kievan Rus'. They had come from Kiev through Przemyśl, Kraków, Prague, and Bohemia. The same record values female slaves at a tremissa (about 1.5 grams of gold, or roughly one-third of a Byzantine solidus (nomisma) or Islamic gold dinar) and male slaves, who were more numerous, at a saiga (which is much less). Eunuchs were especially valuable, and "castration houses" arose in Venice, as well as other prominent slave markets, to meet this demand.
Venice was far from the only slave trading hub in Italy. Southern Italy boasted slaves from distant regions, including Greece, Bulgaria, Armenia, and Slavic regions. During the 9th and 10th centuries, Amalfi was a major exporter of slaves to North Africa. Genoa, along with Venice, dominated the trade in the Eastern Mediterranean beginning in the 12th century, and the Venetian slave traders and the Genoese slave traders dominated the Black Sea slave trade beginning in the 13th century. They sold both Baltic and Slavic slaves, as well as Armenians, Circassians, Georgians, Turks and other ethnic groups of the Black Sea and Caucasus, to the Muslim nations of the Middle East. Genoa primarily managed the slave trade from Crimea to Mamluk Egypt, until the 13th century, when increasing Venetian control over the Eastern Mediterranean allowed Venice to dominate that market. Between 1414 and 1423 alone, at least 10,000 slaves were sold in Venice.
Iberia
Al-Andalus, the Muslim-ruled area of the Iberian Peninsula, (711–1492) imported a large number of slaves to its own domestic market, as well as served as a staging point for Muslim and Jewish merchants to market slaves to the rest of the Islamic world. A ready market, especially for men of fighting age, could be found in Umayyad Spain, with its need for supplies of new Mamluks.
Al-Hakam was the first monarch of this family who surrounded his throne with a certain splendour and magnificence. He increased the number of mamelukes (slave soldiers) until they amounted to 5,000 horse and 1,000 foot. ... he increased the number of his slaves, eunuchs and servants; had a bodyguard of cavalry always stationed at the gate of his palace and surrounded his person with a guard of mamelukes .... these mamelukes were called Al-haras (the Guard) owing to their all being Christians or foreigners. They occupied two large barracks, with stables for their horses.
During the reign of Abd-ar-Rahman III (912–961), there were at first 3,750, then 6,087, and finally 13,750 Saqaliba, or Slavic slaves, at Córdoba, capital of the Umayyad Caliphate. Ibn Hawqal, Ibrahim al-Qarawi, and Bishop Liutprand of Cremona note that the Jewish merchants of Verdun specialized in castrating slaves, to be sold as eunuch saqaliba, which were enormously popular in Muslim Spain.
According to Roger Collins, although the role of the Vikings in the slave trade in Iberia remains largely hypothetical, their depredations are clearly recorded. Raids on al-Andalus by Vikings are reported in the years 844, 859, 966 and 971, conforming to the general pattern of such activity concentrating in the mid-ninth and late tenth centuries.
Vikings
The Nordic countries during the Viking Age (700–1100) practiced slavery. The Vikings called their slaves thralls (Old Norse: Þræll). Other terms were also used for thralls based on gender, such as ambatt/ambott and deja, both of which referred to female slaves. Another name indicative of thrall status is bryti, which has associations with food: the word can be understood to mean "cook" or "one who breaks bread", which would place a person with this label in charge of food in some manner. A runic inscription describes a man of bryti status named Tolir who was able to marry and acted as the king's estate manager. Another name, muslegoman, was used for a runaway slave. From this, it can be gathered that the different names for thralls indicate position and duties performed.
A fundamental part of Viking activity was the taking and sale of captives. The thralls were mostly from Western Europe, among them many Franks, Anglo-Saxons, and Celts. Many Irish slaves were brought on expeditions for the colonization of Iceland (874–930). Raids on monasteries provided a source of young, educated slaves who could be sold in Venice or via the Black Sea slave trade to Byzantium for high prices. Scandinavian trade centers stretched eastwards from Hedeby in Denmark and Birka in Sweden to Staraya Ladoga in northern Russia before the end of the 8th century. The collection of slaves was a by-product of conflict: the Annals of Fulda record that Franks defeated by a group of Vikings in 880 CE were taken as captives, and political conflicts between Viking groups also resulted in the taking of captives.
This traffic continued into the 9th century as Scandinavians founded more trade centers at Kaupang in southwestern Norway, at Novgorod, farther south than Staraya Ladoga, and at Kiev, farther south still and closer to Byzantium. Dublin and other northwestern European Viking settlements were established as gateways through which captives were traded northwards. Thralls could be bought and sold at slave markets. An account in the Laxdoela Saga tells how, during the 10th century, a meeting of kings was held every third year on the Brännö Islands where negotiations and trades for slaves took place. Though slaves could be bought and sold, it was more common to sell captives from other nations.
The 10th-century Persian traveler Ibn Rustah described how Vikings, the Varangians or Rus, terrorized and enslaved the Slavs taken in their raids along the Volga River.
Slaves were often sold south, to Byzantium via the Black Sea slave trade, or to Muslim buyers via routes such as the Volga trade route, through the Khazar slave trade and later the Volga Bulgarian slave trade to the Bukhara slave trade in Central Asia, and from there into slavery in the Abbasid Caliphate.
People taken captive during the Viking raids in Western Europe, such as Ireland, could be sold to Moorish Spain via the Dublin slave trade or transported to Hedeby or Brännö and from there via the Volga trade route to present day Russia, where slaves and furs were sold to Muslim merchants in exchange for Arab silver dirham and silk, which have been found in Birka, Wollin and Dublin; initially this trade route between Europe and the Abbasid Caliphate passed via the Khazar Kaghanate, but from the early 10th-century onward it went via Volga Bulgaria.
Ahmad ibn Fadlan of Baghdad provides an account of the other end of this trade route, namely of Volga Vikings selling Slavic slaves to Middle Eastern merchants. Finland proved another source for Viking slave raids. Slaves from Finland or the Baltic states were traded as far as Central Asia, into the Bukhara slave trade, connecting this traffic to slavery in the Abbasid Caliphate in the Middle East. Captives might be traded far within the Viking trade network and, within that network, could be sold again: in the Life of St. Findan, the Irishman was bought and sold three times after being taken captive by a Viking group.
Mongols
The Mongol invasions and conquests in the 13th century added a new force in the slave trade, and the slave trade in the Mongol Empire established an international slave market. The Mongols enslaved skilled individuals, women and children and marched them to Karakorum or Sarai, whence they were sold throughout Eurasia. Many of these slaves were shipped to the slave market in Novgorod.
Genoese and Venetian merchants in Crimea were involved in the slave trade with the Golden Horde. In 1441, Haci I Giray declared independence from the Golden Horde and established the Crimean Khanate. In the time of the Crimean Khanate, Crimeans engaged in frequent raids into the Danubian principalities, Poland-Lithuania, and Muscovy. For each captive, the khan received a fixed share (savğa) of 10% or 20%. The campaigns by Crimean forces fell into two categories: sefers, officially declared military operations led by the khans themselves, and çapuls, raids undertaken by groups of noblemen, sometimes illegally because they contravened treaties concluded by the khans with neighbouring rulers. For a long time, until the early 18th century, the khanate maintained a massive slave trade with the Ottoman Empire and the Middle East known as the Crimean slave trade. The Genoese colony of Caffa on the Black Sea coast of Crimea was one of the best known and most significant trading ports and slave markets. Crimean Tatar raiders enslaved more than 1 million Eastern Europeans.
England and Ireland
In medieval Ireland, slaves were a commonly traded commodity and, like cattle, could serve as a form of internal or trans-border currency. In 1102, the Council of London convened by Anselm of Canterbury obtained a resolution against the slave trade in England, aimed mainly at the sale of English slaves to the Irish.
Christians holding Muslim slaves
Although the primary flow of slaves was toward Muslim countries, as evident in the history of slavery in the Muslim world, Christians did acquire Muslim slaves; in Southern France, in the 13th century, "the enslavement of Muslim captives was still fairly common". There are records, for example, of Saracen slave girls sold in Marseilles in 1248, a date which coincided with the fall of Seville and its surrounding area, to raiding Christian crusaders, an event during which a large number of Muslim women from this area were enslaved as war booty, as it has been recorded in some Arabic poetry, notably by the poet al-Rundi, who was contemporary to the events.
Additionally, the possession of slaves was legal in 13th century Italy; many Christians held Muslim slaves throughout the country. These Saracen slaves were often captured by pirates and brought to Italy from Muslim Spain or North Africa. During the 13th century, most of the slaves in the Italian trade city of Genoa were of Muslim origin. These Muslim slaves were owned by royalty, military orders or groups, independent entities, and the church itself.
Christians also sold Muslim slaves captured in war. The Order of the Knights of Malta attacked pirates and Muslim ships, and their base became a center for slave trading, selling captured North Africans and Turks. Malta remained a slave market until well into the late 18th century. One thousand slaves were required to man the galleys (ships) of the Order.
While they would at times seize Muslims as slaves, it was more likely that Christian armies would kill their enemies, rather than take them into servitude.
Jewish slave trade
The role of Jewish merchants in the early medieval slave trade has been subject to much misinterpretation and distortion. Although medieval records demonstrate that there were Jews who owned slaves in medieval Europe, Toch (2013) notes that the claim repeated in older sources, such as those by Charles Verlinden, that Jewish merchants were the primary dealers in European slaves is based on misreadings of primary documents from that era. Contemporary Jewish sources do not attest any large-scale slave trade or ownership of slaves which may be distinguished from the wider phenomenon of early medieval European slavery. The trope of the Jewish dealer of Christian slaves was additionally a prominent canard in medieval European anti-Semitic propaganda.
Slave trade at the close of the Middle Ages
As more and more of Europe Christianized, and open hostilities between Christian and Muslim nations intensified, large-scale slave trade moved to more distant sources. Sending slaves to Egypt, for example, was forbidden by the papacy in 1317, 1323, 1329, 1338, and, finally, 1425, as slaves sent to Egypt would often become soldiers and end up fighting their former Christian owners. Although the repeated bans indicate that such trade still occurred, they also indicate that it became less desirable. In the 16th century, African slaves replaced almost all other ethnic and religious groups among the enslaved in Europe.
Slavery in law
Secular law
Slavery was heavily regulated in Roman law, which was reorganized in the Byzantine Empire by Justinian I as the Corpus Iuris Civilis. Although the Corpus was lost to the West for centuries, it was rediscovered in the 11th and 12th centuries, and led to the foundation of law schools in Italy and France. According to the Corpus, the natural state of humanity is freedom, but the "law of nations" may supersede natural law and reduce certain people to slavery. The basic definition of slave in Romano-Byzantine law was:
anyone whose mother was a slave
anyone who has been captured in battle
anyone who has sold himself to pay a debt
It was, however, possible to become a freedman or a full citizen; the Corpus, like Roman law, had extensive and complicated rules for manumission of slaves.
The slave trade in England was officially abolished in 1102. In Poland slavery was forbidden in the 15th century; it was replaced by the second enserfment. In Lithuania, slavery was formally abolished in 1588.
Canon law
An explicit legal justification for the enslavement of Muslims was found in the Decretum Gratiani and later expanded upon by the 14th-century jurist Oldradus de Ponte: the Bible states that Hagar, the slave girl of Abraham, was beaten and cast out by Abraham's wife Sarah. The Decretum, like the Corpus, defined a slave as anyone whose mother was a slave. Otherwise, the canons were concerned with slavery only in ecclesiastical contexts: slaves, for instance, were not permitted to be ordained as clergy.
Slavery in the Byzantine Empire
Slavery in the Islamic Near East
The ancient and medieval Near East includes modern day Turkey, the Levant and Egypt, with strong connections to the rest of the North African coastline. All of these areas were ruled by either the Byzantines or the Persians at the end of late antiquity. Pre-existing Byzantine (i.e. Roman) and Persian institutions of slavery may have influenced the development of institutions of slavery in Islamic law and jurisprudence. Likewise, some scholars have argued for the influence of Rabbinic tradition in regards to slavery on the development of Islamic legal thought.
Whatever the relationship between these different legal traditions, many similarities exist between the practice of Islamic slavery in the early Middle Ages and the practices of early medieval Byzantines and western Europeans. The status of freed slaves under Islamic rule, who continued to owe services to their former masters, bears a strong similarity to slavery in ancient Rome and slavery in ancient Greece. However, the practice of slavery in the early medieval Near East also grew out of slavery practices in currency among pre-Islamic Arabs.
Islamic states
Like the Old and New Testaments and Greek and Roman law codes, the Quran takes the institution of slavery for granted, though it urges kindness toward slaves and eventual manumission, especially for slaves who convert to Islam. In the early Middle Ages, many slaves in Islamic society served as such for only a short period of time, perhaps an average of seven years. Like their European counterparts, early medieval Islamic slave traders preferred slaves who were not co-religionists and hence focused on "pagans" from inner Asia, Europe, and especially from sub-Saharan Africa. The practice of manumission may have contributed to the integration of former slaves into the wider society. However, under sharia law, conversion to Islam did not necessitate manumission.
Slaves were employed in heavy labor as well as in domestic contexts. Because of Quranic allowance of concubinage, early Islamic traders, in contrast to Byzantine and early modern slave traders, imported large numbers of female slaves. The very earliest Islamic states did not create corps of slave soldiers (a practice familiar from later contexts) but did integrate freedmen into armies, which may have contributed to the rapid expansion of early Islamic conquest. By the 9th century, use of slaves in Islamic armies, particularly Turks in cavalry units and Africans in infantry units, was a relatively common practice.
In Egypt, Ahmad ibn Tulun imported thousands of black slaves to wrest independence from the Abbasid Caliphate in Iraq in 868. The Ikhshidid dynasty used black slave units to liberate itself from Abbasid rule after the Abbasids destroyed ibn Tulun's autonomous empire in 935. Black professional soldiers were most associated with the Fatimid dynasty, which incorporated more professional black soldiers than the previous two dynasties. It was the Fatimids who first incorporated black professional slave soldiers into the cavalry, despite massive opposition from Central Asian Turkish Mamluks, who saw the African contingent as a threat to their role as the leading military unit in the Egyptian army.

In the latter half of the Middle Ages, the expansion of Islamic rule further into the Mediterranean, the Persian Gulf, and the Arabian Peninsula established the Saharan-Indian Ocean slave trade. This network was a large market for African slaves, transporting approximately four million African slaves from its 7th-century inception to its 20th-century demise. The consolidation of borders in the Islamic Near East changed the face of the slave trade: a rigid Islamic code, coupled with crystallizing frontiers, favored slave purchase and tribute over capture as lucrative slave avenues. Even the sources of slaves shifted from the Fertile Crescent and Central Asia to Indochina and the Byzantine Empire.
Patterns of preference for slaves in the Near East, as well as patterns of use, continued into the later Middle Ages with only slight changes. Slaves were employed in many activities, including agriculture, industry, the military, and domestic labor. Women were prioritized over men, and usually served in the domestic sphere as menials, concubines (cariye), or wives. Domestic and commercial slaves were mostly better off than their agricultural counterparts, often becoming family members or business partners rather than being condemned to a grueling life in a chain gang. There are references to gangs of slaves, mostly African, put to work in drainage projects in Iraq, salt and gold mines in the Sahara, and sugar and cotton plantations in North Africa and Spain. References to this latter type of slavery are rare, however. Eunuchs were the most prized and sought-after type of slave.
The most fortunate slaves found employment in politics or the military. In the Ottoman Empire, the Devşirme system groomed young slave boys for civil or military service. Young Christian boys were uprooted from their conquered villages periodically as a levy, and were employed in government, entertainment, or the army, depending on their talents. Slaves attained great success from this program, some winning the post of Grand Vizier to the Sultan and others positions in the Janissaries.
It is something of a misnomer to classify these men as "slaves", because in the Ottoman Empire they were referred to as kul, or slaves "of the Gate", that is, of the Sultanate. While not slaves per se under Islamic law, these devşirme alumni remained at the Sultan's discretion.
The Islamic Near East extensively relied upon professional slave soldiers, and was known for having them compose the core of armies. The institution was conceived out of political predicaments and reflected the attitudes of the time, and was not indicative of political decline or financial bankruptcy. Slave units were desired because of their unadulterated loyalty to the ruler, since they were imported and therefore could not threaten the throne with local loyalties or alliances.
Ottoman Empire
Slavery was an important part of Ottoman society. The Byzantine-Ottoman wars and the Ottoman wars in Europe brought large numbers of Christian slaves into the Ottoman Empire. In the middle of the 14th century, Murad I built his own personal slave army called the Kapıkulu. The new force was based on the sultan’s right to a fifth of the war booty, which he interpreted to include captives taken in battle. The captive slaves were converted to Islam and trained in the sultan’s personal service.
In the devşirme (translated "blood tax" or "child collection"), young Christian boys from Anatolia and the Balkans were taken away from their homes and families, converted to Islam and enlisted into special soldier classes of the Ottoman army. These soldier classes were named Janissaries, the most famous branch of the Kapıkulu. The Janissaries eventually became a decisive factor in the Ottoman military conquests in Europe.
Most of the military commanders of the Ottoman forces, imperial administrators and de facto rulers of the Ottoman Empire, such as Pargalı İbrahim Pasha and Sokollu Mehmet Paşa, were recruited in this way. By 1609 the Sultan’s Kapıkulu forces increased to about 100,000.
The concubines of the Ottoman Sultan consisted chiefly of purchased slaves. Because Islamic law forbade Muslims to enslave fellow Muslims, the Sultan’s concubines were generally of Christian origin (cariye). The mother of a Sultan, though technically a slave, received the extremely powerful title of Valide Sultan, and at times became effective ruler of the Empire (see Sultanate of women). One notable example was Kösem Sultan, daughter of a Greek Christian priest, who dominated the Ottoman Empire during the early decades of the 17th century. Another notable example was Roxelana, the favourite wife of Suleiman the Magnificent.
Slavery in the Crusader states
As a result of the crusades, thousands of Muslims and Christians were sold into slavery. Once sold into slavery most were never heard from again, so it is challenging to find evidence of specific slave experiences.
In the crusader Kingdom of Jerusalem, founded in 1099, at most 120,000 Franks ruled over 350,000 Muslims, Jews, and native Eastern Christians. Following the initial invasion and conquest, sometimes accompanied by massacres or expulsions of Jews and Muslims, a peaceable co-existence between followers of the three religions prevailed. The Crusader states inherited many slaves. To this may have been added some Muslims taken as captives of war. The Kingdom’s largest city, Acre, had a large slave market; however, the vast majority of Muslims and Jews remained free. The laws of Jerusalem declared that former Muslim slaves, if genuine converts to Christianity, must be freed.
In 1120, the Council of Nablus forbade sexual relations between crusaders and their female Muslim slaves: if a man raped his own slave, he would be castrated, but if he raped someone else’s slave, he would be castrated and exiled from the kingdom. But Benjamin Z. Kedar argued that the canons of the Council of Nablus were in force in the 12th century but had fallen out of use by the thirteenth. Marwan Nader questions this and suggests that the canons may not have applied to the whole kingdom at all times.
Christian law mandated that Christians could not enslave other Christians; however, enslaving non-Christians was acceptable. In fact, military orders frequently enslaved Muslims and used slave labor on agricultural estates. No Christian, whether Western or Eastern, was permitted by law to be sold into slavery, but this fate was as common for Muslim prisoners of war as it was for Christian prisoners taken by the Muslims. In the later medieval period, some slaves were used to row Hospitaller ships. Generally, there were relatively few non-Christian slaves in medieval Europe, and their number decreased significantly by the end of the medieval period.
The 13th-century Assizes of Jerusalem dealt more with fugitive slaves and the punishments ascribed to them, the prohibition of slaves testifying in court, and manumission of slaves, which could be accomplished, for example, through a will, or by conversion to Christianity. Conversion was apparently used as an excuse to escape slavery by Muslims who would then continue to practise Islam; crusader lords often refused to allow them to convert, and Pope Gregory IX, contrary to both the laws of Jerusalem and the canon laws that he himself was partially responsible for compiling, allowed for Muslim slaves to remain enslaved even if they had converted.
Slavery in Iberia
Communities of Muslims, Christians, and Jews existed on both sides of the political divide between Muslim and Christian kingdoms in Medieval Iberia: Al-Andalus hosted Jewish and Christian communities while Christian Iberia hosted Muslim and Jewish communities. Christianity had introduced the ethos that banned the enslavement of fellow Christians, an ethos that was reinforced by the banning of the enslavement of co-religionists during the rise of Islam. Additionally, the Dar al-Islam protected ‘people of the book’ (Christians and Jews living in Islamic lands) from enslavement, an immunity which also applied to Muslims living in Christian Iberia. Despite these restrictions, criminal or indebted Muslims and Christians in both regions were still subject to judicially-sanctioned slavery.
Islamic Iberia
An early economic pillar of the Islamic empire in Iberia (Al-Andalus) during the eighth century was the slave trade. Because manumission was a form of piety under Islamic law, slavery in Muslim Spain could not maintain the same level of self-reproduction as societies with older slave populations. Al-Andalus therefore relied on trade systems as an external means of replenishing the supply of enslaved people. Linking the Umayyads, Khārijites and 'Abbāsids, the flow of trafficked people from the main routes of the Sahara towards Al-Andalus formed a highly lucrative trade configuration. The archaeological evidence for this human trafficking and the proliferation of early trade comes chiefly from numismatics and from surviving texts. This monetary structure of consistent gold influx proved to be a tenet in the development of Islamic commerce. In this regard, the slave trade outperformed other ventures and was the most commercially successful means of maximizing capital. This major change in numismatics marked a paradigm shift from the previous Visigothic economic arrangement. Additionally, it demonstrates profound change from one regional entity to another: the direct transfer of people and pure coinage from one religiously similar semi-autonomous province to another.
The medieval Iberian Peninsula was the scene of episodic warfare among Muslims and Christians (although sometimes Muslims and Christians were allies). Periodic raiding expeditions were sent from Al-Andalus to ravage the Christian Iberian kingdoms, bringing back booty and people. For example, in a raid on Lisbon in 1189 the Almohad caliph Yaqub al-Mansur took 3,000 female and child captives, and his governor of Córdoba took 3,000 Christian slaves in a subsequent attack upon Silves in 1191; an offensive by Alfonso VIII of Castile in 1182 brought him over two thousand Muslim slaves. These raiding expeditions also included the sa'ifa (summer) incursions, a tradition begun under the emirs of Córdoba. In addition to acquiring wealth, some of these sa'ifa raids sought to bring mostly male captives, often eunuchs, back to Al-Andalus. They were generically referred to as Saqaliba, the Arabic word for Slavs. The Slavs' status as the most common group in the slave trade by the tenth century led to the development of the word "slave". The Saqaliba were mostly assigned to palaces as guards, concubines, and eunuchs, although they were sometimes privately owned. Along with Christians and Slavs, Sub-Saharan Africans were also held as slaves, brought back via the caravan trade across the Sahara. Slaves in Islamic lands were generally used for domestic, military, and administrative purposes, and rarely for agriculture or large-scale manufacturing. Christians living in Al-Andalus were not allowed to hold authority over Muslims, but they were permitted to hold non-Muslim slaves.
Christian Iberia
Contrary to suppositions of historians such as Marc Bloch, slavery thrived as an institution in medieval Christian Iberia. Slavery existed in the region under the Romans, and continued to do so under the Visigoths. From the fifth to the early 8th century, large portions of the Iberian Peninsula were ruled by Christian Visigothic Kingdoms, whose rulers worked to codify human bondage. In the 7th century, King Chindasuinth issued the Visigothic Code (Liber Iudiciorum), to which subsequent Visigothic kings added new legislation. Although the Visigothic Kingdom collapsed in the early 8th century, portions of the Visigothic Code were still observed in parts of Spain in the following centuries. The Code, with its pronounced and frequent attention to the legal status of slaves, reveals the continuation of slavery as an institution in post-Roman Spain.
The Code regulated the social conditions, behavior, and punishments of slaves in early medieval Spain. The marriage of slaves and free or freed people was prohibited. Book III, title II, iii ("Where a Freeborn Woman Marries the Slave of Another or a Freeborn Man the Female Slave of Another") stipulates that if a free woman marries another person’s slave, the couple is to be separated and given 100 lashes. Furthermore, if the woman refuses to leave the slave, then she becomes the property of the slave’s master. Likewise, any children born to the couple would follow the father’s condition and be slaves.
Unlike Roman law, in which only slaves were liable to corporal punishment, under Visigothic law, people of any social status were subject to corporal punishment. However, the physical punishment, typically beatings, administered to slaves was consistently harsher than that administered to freed or free people. Slaves could also be compelled to give testimony under torture. For example, slaves could be tortured to reveal the adultery of their masters, and it was illegal to free a slave for fear of what he or she might reveal under torture. Slaves' greater liability to physical punishment and judicial torture suggests their inferior social status in the eyes of Visigothic lawmakers.
Slavery remained persistent in Christian Iberia after the Umayyad invasions in the 8th century, and the Visigothic law codes continued to control slave ownership. However, as William Phillips notes, medieval Iberia should not be thought of as a slave society, but rather as a society that owned slaves. Slaves accounted for a relatively small percentage of the population, and did not make up a significant portion of the labor pool. Furthermore, while the existence of slavery continued from the earlier period, the use of slaves in post-Visigothic Christian Iberia differed from earlier periods. Ian Wood has suggested that, under the Visigoths, the majority of the slave population lived and worked on rural estates.
After the Muslim invasions, slave owners (especially in the kingdoms of Aragon and Valencia) moved away from using slaves as field laborers or in work gangs, and did not press slaves into military service. Slaves tended to be owned singly rather than in large groups. There appear to have been many more female than male slaves, and they were most often used as domestic servants, or to supplement free labor. In this respect, slave institutions in Aragon, especially, closely resembled those of other Mediterranean Christian kingdoms in France and Italy.
In the kingdoms of León and Castile, slavery followed the Visigothic model more closely than in the littoral kingdoms. Slaves in León and Castile were more likely to be employed as field laborers, supplanting free labor to support an aristocratic estate society. These trends in slave populations and use changed in the wake of the Black Death in 1348, which significantly increased the demand for slaves across the whole of the peninsula.
Christians were not the only slaveholders in Christian Iberia. Both Jews and Muslims living under Christian rule owned slaves, though more commonly in Aragon and Valencia than in Castile. After the conquest of Valencia in 1245, the Kingdom of Aragon prohibited the possession of Christian slaves by Jews, though they were still permitted to hold Muslim or pagan slaves. The main role of Iberian Jews in the slave trade came as facilitators: Jews acted as slave brokers and agents of transfer between the Christian and Muslim kingdoms.
This role caused some degree of fear among Christian populations. A letter from Pope Gregory IX to the Bishop of Cordoba in 1239 addressed rumors that the Jews were involved in kidnapping and selling Christian women and children into slavery while their husbands were away fighting the Muslims. Despite these worries, the primary role of Jewish slave traders lay in facilitating the exchange of captives between Muslim and Christian rulers, one of the primary threads of economic and political connectivity between Christian and Muslim Iberia.
In the early period after the fall of the Visigothic kingdom in the 8th century, slaves primarily came into Christian Iberia through trade with the Muslim kingdoms of the south. Most were Eastern European, captured in battles and raids, with the heavy majority being Slavs. However, the ethnic composition of slaves in Christian Iberia shifted over the course of the Middle Ages. Slaveholders in the Christian kingdoms gradually moved away from owning Christians, in accordance with Church proscriptions. In the middle of the medieval period most slaves in Christian Iberia were Muslim, either captured in battle with the Islamic states from the southern part of the peninsula, or taken from the eastern Mediterranean and imported into Iberia by merchants from cities such as Genoa.
The Christian kingdoms of Iberia frequently traded their Muslim captives back across the border for payments of money or kind. Indeed, historian James Broadman writes that this type of redemption offered the best chance for captives and slaves to regain their freedom. The sale of Muslim captives, either back to the Islamic southern states or to third-party slave brokers, supplied one of the means by which Aragon and Castile financed the Reconquista. Battles and sieges provided large numbers of captives; after the siege of Almeria in 1147, sources report that Alfonso VII of León sent almost 10,000 of the city’s Muslim women and children to Genoa to be sold into slavery as partial repayment of Genoese assistance in the campaign.
Towards the end of the Reconquista, however, this source of slaves became increasingly exhausted. Muslim rulers were increasingly unable to pay ransoms, and the Christian capture of large centers of population in the south made wholesale enslavement of Muslim populations impractical. The loss of an Iberian Muslim source of slaves further encouraged Christians to look to other sources of manpower. Beginning with the first Portuguese slave raid in sub-Saharan Africa in 1411, the focus of slave importation began to shift from the Mediterranean to the Atlantic World, and the racial composition of slaves in Christian Iberia began to include an increasing number of Sub-Saharan Africans.
Between 1489 and 1497 almost 2,100 black slaves were shipped from Portugal to Valencia. By the end of the 15th century, Spain held the largest population of black Africans in Europe, with a small, but growing community of black ex-slaves. In the mid 16th century Spain imported up to 2,000 black African slaves annually through Portugal, and by 1565 most of Seville’s 6,327 slaves (out of a total population of 85,538) were black Africans.
Slavery in the Mediterranean
In the Mediterranean region, individuals became enslaved through war and conquest, piracy, and frontier raiding. Additionally, some courts would sentence people to slavery, and even some people sold themselves or their children into slavery due to extreme poverty. The incentive for slavery in the Mediterranean was the greed of the slavers. The motivation behind many raids was to make money from the resulting slaves, with no political or religious agenda. Also, state and religious institutions frequently participated in the ransoming of individuals, so piracy became a lucrative market. This meant some individuals were returned home while others were sold away.
For those who traded in the Mediterranean, it was the humanity and intellect of these enslaved peoples that made them valuable merchandise worth commodifying. To purchase an individual was to purchase their labor, autonomy, and faith; religious conversion was often a motivation for these transactions. Additionally, religious division was the fundamental basis of law for the ownership of slaves during this period; it was not legal for Christians, Muslims, or Jewish people to enslave fellow believers. However, the enslavement, and compulsory conversion, of nonbelievers or people from other religions was permissible.
There were markets throughout the Mediterranean where enslaved people were bought and sold. In Italy the major slave trade centers were Venice and Genoa; in Iberia they were Barcelona and Valencia; and islands off the Mediterranean including Majorca, Sardinia, Sicily, Crete, Rhodes, Cyprus, and Chios also participated in slave markets. From these markets merchants would sell enslaved people domestically, or transport them to somewhere enslaved people were more in demand. For example, the Italian slave market often found itself selling to Egypt in order to meet the Mamluk demand for slaves. This demand caused Venice and Genoa to compete with one another for control of Black Sea trading ports.
The duties and expectations of slaves varied geographically; however, in the Mediterranean, it was most common for enslaved people to work in the households of elites. Enslaved people also worked in agricultural fields, but this was infrequent across the Mediterranean. It was most common in Venetian Crete, Genoese Chios, and Cyprus where enslaved people worked in vineyards, fields, and sugar mills. These were colonial societies, and enslaved people worked with free laborers in these areas. Enslaved women were sought after the most and therefore sold at the highest prices. This reflects the desire for domestic workers in elite households; however, enslaved women also could face sexual exploitation. Furthermore, even if freed from their stations, the former masters of these women often maintained power over them by becoming their employers or patrons.
Slavery in Moldavia and Wallachia
Slavery existed on the territory of present-day Romania, under Ottoman and later Russian suzerainty, from before the founding of the principalities of Wallachia and Moldavia in the 13th–14th centuries until it was abolished in stages during the 1840s and 1850s, before the United Principalities of Moldavia and Wallachia achieved independence; in Transylvania and Bukovina (parts of the Habsburg monarchy and later of Austria-Hungary) it lasted until 1783. Most slaves were of Roma (Gypsy) ethnicity, and a significant number of Romanian peasants were also held in a serfdom that bordered on slavery.
Historian Nicolae Iorga associated the Roma people's arrival with the 1241 Mongol invasion of Europe and considered their slavery a vestige of that era. The practice of enslaving prisoners may also have been taken from the Mongols. The ethnic identity of the "Tatar slaves" is unknown; they could have been captured Tatars of the Golden Horde, Cumans, or the slaves of Tatars and Cumans.
While it is possible that some Romani people were slaves or auxiliary troops of the Mongols or Tatars and the Nogai Horde, the bulk of them came from south of the Danube at the end of the 14th century, some time after the foundation of Wallachia. The Roma slaves were owned by the boyars (see Wallachian Revolution of 1848), the Christian Orthodox monasteries, or the state. They were used only as smiths, gold panners, and agricultural workers.
The Rumâni were owned only by boyars and monasteries, until the independence of Romania from the Ottoman Empire on 9 May 1877. They were considered less valuable because they were taxable, were skilled only at agricultural work, and could not be used as tribute. It was common for both boyars and monasteries to register their Romanian serfs as "Gypsies" so that they would not have to pay the taxes imposed on serfs. Any Romanian, regardless of gender, who married a Roma would immediately become a slave who could be used as tribute.
Slavery in Russia
In Kievan Rus' and Russia, slaves were usually classified as kholops. A kholop's master had unlimited power over his life: he could kill him, sell him, or use him as payment for a debt. The master was, however, responsible before the law for his kholop's actions. A person could become a kholop as a result of capture, selling himself or herself, being sold for debts or crimes, or marriage to a kholop. Until the late 10th century, kholops represented a majority among the servants who worked lordly lands.
By the 16th century, slavery in Russia consisted mostly of those who sold themselves into slavery owing to poverty. They worked predominantly as household servants among the richest families, and indeed generally produced less than they consumed. Laws forbade masters from freeing slaves during famines simply to avoid feeding them, and slaves generally remained with a family for a long time; the Domostroy, an advice book, speaks of the need to choose slaves of good character and to provide for them properly. Slavery remained a major institution in Russia until 1723, when Peter the Great converted the household slaves into house serfs. Russian agricultural slaves had already been formally converted into serfs in 1679.
In 1382, the Golden Horde under Tokhtamysh sacked Moscow, burning the city and carrying off thousands of inhabitants as slaves. For years, the khanates of Kazan and Astrakhan routinely made raids on Russian principalities for slaves and to plunder towns. Russian chronicles record about 40 raids of Kazan khans on Russian territories in the first half of the 16th century. In 1521, the combined forces of Crimean khan Mehmed I Giray and his Kazan allies attacked Moscow and captured thousands of slaves. About 30 major Tatar raids were recorded into Muscovite territories between 1558 and 1596. In 1571, the Crimean Tatars attacked and sacked Moscow, burning everything but the Kremlin and taking thousands of captives as slaves for the Crimean slave trade. In Crimea, about 75% of the population consisted of slaves.
Slavery in Poland and Lithuania
Slavery in Poland existed on the territory of the Kingdom of Poland during the times of the Piast dynasty; however, it was restricted to those captured in war. In some special cases, and for limited periods, serfdom was also applied to debtors.
Slavery was officially banned in 1529, and the prohibition of slavery was one of the most important provisions of the Statutes of Lithuania, which had to be implemented before the Grand Duchy of Lithuania could join the Polish–Lithuanian Commonwealth in 1569.
The First Statute was drafted in 1522 and came into force in 1529 on the initiative of the Lithuanian Council of Lords. It has been proposed that the codification was initiated by Grand Chancellor of Lithuania Mikołaj Radziwiłł as a reworking and expansion of the 15th-century Casimir's Code.
Slavery in Scandinavia
The evidence indicates that slavery in Scandinavia was more common in the southern regions, as fewer of the northern provincial laws mention slavery. Slaves were likely numerous, but were concentrated under the ownership of elites as chattel labor on large farm estates.
The laws of the 12th and 13th centuries describe the legal status of two categories of slaves. According to the Norwegian Gulating code (from about 1160), domestic slaves, unlike foreign slaves, could not be sold out of the country. This and other laws defined slaves as their master's property on the same level as cattle: if either was harmed, the perpetrator owed damages, and if either caused damage to property, the owner was held accountable. The laws also described a procedure for giving a slave their freedom. According to the Law of Scania, slaves could be granted freedom or redeem it themselves, upon which they had to be accepted into a new kin group or face social ostracism.
The Law of Scania indicates that free men could become slaves as a way to atone for a crime, with the implication that they would eventually be freed. The Gotlandic Guta Lag likewise indicates that slavery could be for a fixed period and could serve as a way to pay off a debt. Under the Older Västgöta Law, widows were allowed to remarry only if an enslaved fostre or fostra could manage the farm in their absence. The Younger Västgöta Law indicates further trust placed in fostre and fostra, who could occasionally be entrusted with the master's keys, and some fostre were in such a trusted position that they could undertake military actions while still enslaved. Yet for all this independence, any children of a fostre or fostra remained the property of their masters.
A freed slave did not have full legal status; for example, the penalty for killing a former slave was low. A former slave's son also had a low status, though higher than that of his parents. Women were commonly taken as slaves and forced into concubinage by lords. The children of these women had few formal rights: inheritance and legitimacy were possible if they were needed for the succession or favored by their parents, but nothing was guaranteed.
Slavery began to be replaced by a feudal-style tenant-farmer economy in which free men tied to the land worked farms for a lord, reducing the need for slaves. The Norwegian law code from 1274, the Landslov (Land's Law), does not mention slaves, only former slaves; thus it seems that slavery had been abolished in Norway by this time. In Denmark, slavery was gradually replaced by serfdom (hoveriet) in the 13th century, and in Sweden, slavery was abolished in 1334 and was not replaced by serfdom, which never existed there.
Slavery in the British Isles
British Wales and Gaelic Ireland and Scotland were among the last areas of Christian Europe to give up their institution of slavery. Under Gaelic custom, prisoners of war were routinely taken as slaves. During the period when slavery was disappearing across most of western Europe, it was reaching its height in the British Isles: with the Viking invasions and the subsequent warring between Scandinavians and the natives, the number of captives taken as slaves drastically increased. The Irish church was vehemently opposed to slavery and saw the 1169 Norman invasion as divine punishment for the practice, along with local acceptance of polygyny and divorce.
Serfdom versus slavery
In considering how serfdom evolved from slavery, historians who study the divide between the two encounter several issues of historiography and methodology. Some historians believe that slavery transitioned into serfdom (a view that has only been around for the last 200 years), though there is disagreement among them about how rapid this transition was. Pierre Bonnassie, a medieval historian, thought that the chattel slavery of the ancient world ceased to exist in the Europe of the 10th century and was followed by feudal serfdom. Jean-Pierre Devroey thinks that the shift from slavery to serfdom was also gradual in some parts of the continent. Other areas, though, did not have what he calls "western-style serfdom" after the end of slavery, such as the rural areas of the Byzantine Empire, Iceland, and Scandinavia. Complicating the issue, regions in Europe often had both serfs and slaves simultaneously. In northwestern Europe, the transition from slavery to serfdom had occurred by the 12th century. The Catholic Church promoted the transformation by its own example: the enslavement of fellow Catholics was prohibited in 992, and manumission was declared a pious act. However, it remained legal to enslave people of other religions and dogmas.
The underpinnings of slavery and serfdom, and how the two conditions differed, are debated as well. Dominique Barthélemy, among others, has questioned the very premises for neatly distinguishing serfdom from slavery, arguing that a binary classification masks the many shades of servitude. Of particular interest to historians is the role of serfdom and slavery within the state, and the implications that held for both serf and slave. Some think that slavery meant the exclusion of people from the public sphere and its institutions, whereas serfdom was a complex form of dependency that usually lacked a codified basis in the legal system. Wendy Davies argues that serfs, like slaves, also became excluded from the public judicial system and that judicial matters concerning them were attended to in the private courts of their respective lords.
Despite the scholarly disagreement, it is possible to piece together a general picture of slavery and serfdom. Slaves typically owned no property and were in fact the property of their masters. Slaves worked full-time for their masters and operated under a negative incentive structure: failure to work resulted in physical punishment. Serfs held plots of land, which were essentially a form of "payment" the lord offered in exchange for the serf's service. Serfs worked part-time for their masters and part-time for themselves, and had opportunities to accumulate personal wealth that often did not exist for slaves.
Slaves were generally imported from foreign countries or continents, via the slave trade. Serfs were typically indigenous Europeans and were not subject to the same involuntary movements as slaves. Serfs worked in family units, whereas the concept of family was generally murkier for slaves. At any given moment, a slave’s family could be torn apart via trade, and masters often used this threat to coerce compliant behavior from the slave.
The end of serfdom is also debated, with Georges Duby pointing to the early 12th century as a rough end point for "serfdom in the strict sense of the term". Other historians dispute this, citing discussions and mentions of serfdom as an institution at later dates (such as in 13th-century England, or in Central Europe, where the rise of serfdom coincided with its decline in Western Europe). There are several approaches to establishing a time span for the transition, and lexicography is one of them. There is supposedly a clear shift in diction when referring to those who were either slaves or serfs at approximately the year 1000, though there is no consensus on how significant this shift is, or whether it even exists.
In addition, numismatists shed light on the decline of serfdom. A widespread theory holds that the introduction of currency hastened the decline of serfdom because it became preferable to pay for labor rather than depend on feudal obligations. Some historians argue that landlords began selling serfs their land – and hence, their freedom – during periods of economic inflation across Europe. Other historians argue that the end of serfdom came from the royalty, who gave serfs freedom through edicts and legislation in an attempt to broaden their tax base.
The absence of serfdom in some parts of medieval Europe raises several questions. Devroey thinks it is because slavery was not born out of economic structures in these areas, but was rather a societal practice. Heinrich Fichtenau points out that in Central Europe, there was not a labor market strong enough for slavery to become a necessity.
Justifications for slavery
In late Rome, the official attitude toward slavery was ambivalent. According to Justinian’s legal code, slavery was defined as "an institution according to the law of nations whereby one person falls under the property rights of another, contrary to nature".
Justifications for slavery throughout the medieval period were dominated by the perception of religious difference. Slaves were often outsiders taken in war. As such, Hebrew and Islamic thinking both conceived of the slave as an "enemy within". In the Christian tradition, pagans and heretics were similarly considered enemies of the faith who could be justly enslaved. In theory, slaves who converted could embark on the path to freedom, but practices were inconsistent: masters were not obliged to manumit them and the practice of baptising slaves was often discouraged. The enslavement of co-religionists was discouraged, if not forbidden, for Christians, Jews, and Muslims alike. Consequently, northern European pagans and black Africans were a target for all three religious groups. Ethnic and religious difference were conflated in the justification of slavery.
A major Christian justification for the use of slavery, especially against those with dark skin, was the Curse of Ham. The Curse of Ham refers to a biblical account (Gen. 9:20–27) in which Ham, the son of Noah, sins by seeing his father inebriated and naked, although scholars differ on the exact nature of Ham's transgression. Noah then curses Ham's offspring, Canaan, with being a "servant of servants unto his brethren". Although race or skin color is not mentioned, many Jewish, Christian and Muslim scholars began to interpret the passage as a curse of both slavery and black skin, in an attempt to justify the enslavement of people of color, specifically those of African descent. In the medieval period, however, it was also used by some Christians as a justification for serfdom. Muslim sources in the 7th century allude to the Curse of Ham gaining relevance as a justifying myth for the Islamic world's longstanding enslavement of Africans.
The apparent discrepancy between the notion of human liberty founded in natural law and the recognition of slavery by canon law was resolved by a legal "compromise": enslavement was allowable given a just cause, which could then be defined by papal authority. The state of slavery was thought to be closely tied to original sin. Towards the middle of the 15th century, the Catholic Church, in particular the Papacy, took an active role in offering justifications for the enslavement of Saracens, pagans, infidels, and "other enemies of Christ". In 1452, the papal bull Dum Diversas authorized King Afonso V of Portugal to enslave any "Saracens" or "pagans" he encountered. Pope Nicholas V recognized King Afonso's military action as legitimate in the form of the bull, and declared the
full and free power, through the Apostolic authority by this edict, to invade, conquer, fight, and subjugate the Saracens and pagans, and other infidels and other enemies of Christ, and ... to reduce their persons into perpetual servitude ...Pope Nicholas V (1452), "Dum Diversas (English Translation)", Unam Sanctam Catholicam, 5 February 2011. http://unamsanctamcatholicam.blogspot.com/2011/02/dum-diversas-english-translation.html.
In a follow-up bull, released in 1455 and entitled Romanus Pontifex, Pope Nicholas V reiterated his support for the enslavement of infidels in the context of Portugal’s monopoly on North African trade routes.
Historians such as Timothy Rayborn have contended that religious justifications served to mask the economic necessities underlying the institution of slavery.
See also
Catholic Church and slavery
Christianity and slavery
History of slavery
Islamic views on slavery
Slavery in ancient Greece
Slavery in ancient Rome
Slavery in antiquity
The Bible and slavery
References
Further reading
Barker, Hannah "Slavery in Medieval Europe." Oxford Bibliographies (2019)
Barker, Hannah That Most Precious Merchandise: The Mediterranean Trade in Black Sea Slaves, 1260-1500 (University of Pennsylvania Press, 2019)
Campbell, Gwyn et al. eds. Women and Slavery, Vol. 1: Africa, the Indian Ocean World, and the Medieval North Atlantic (2007)
Dockès, Pierre. Medieval Slavery and Liberation (1989)
Frantzen, Allen J., and Douglas Moffat, eds. The Work of Work: Servitude, Slavery and Labor in Medieval England (1994)
Karras, Ruth Mazo. Slavery and Society in Medieval Scandinavia (Yale University Press, 1988)
Perry, Craig et al. eds. The Cambridge World History of Slavery: Volume 2 AD500-AD1420 (Cambridge University Press, 2021)
Phillips, William D. Slavery from Roman Times to the Early Transatlantic Trade (Manchester University Press, 1985)
Rio, Alice. Slavery After Rome, 500-1100 (Oxford University Press, 2017) online review
Stuard, Susan Mosher. "Ancillary evidence for the decline of medieval slavery." Past & Present 149 (1995): 3-28 online.
Verhulst, Adriaan. "The decline of slavery and the economic expansion of the Early Middle Ages." Past & Present No. 133 (Nov., 1991), pp. 195–203 online
Wyatt David R. Slaves and warriors in medieval Britain and Ireland, 800–1200 (2009)
Slavery in Europe
History of slavery
Medieval society
Slavery in the Middle Ages
The Archaeology of Knowledge
The Archaeology of Knowledge (L’archéologie du savoir, 1969) by Michel Foucault is a treatise about the methodology and historiography of the systems of thought (epistemes) and of knowledge (discursive formations) which follow rules that operate beneath the consciousness of the subject individuals, and which define a conceptual system of possibility that determines the boundaries of language and thought used in a given time and domain. The archaeology of knowledge is the analytical method that Foucault used in Madness and Civilization: A History of Insanity in the Age of Reason (1961), The Birth of the Clinic: An Archaeology of Medical Perception (1963), and The Order of Things: An Archaeology of the Human Sciences (1966).
Summary
The contemporary study of the History of Ideas concerns the transitions between historical world-views, but ultimately depends upon narrative continuities that break down under close inspection. The history of ideas marks points of discontinuity between broadly defined modes of knowledge, but those existing modes of knowledge are not discrete structures among the complex relations of historical discourse. Discourses emerge and transform according to a complex set of relationships (discursive and institutional) defined by discontinuities and unified themes.
An énoncé (statement) is a discourse, a way of speaking; the methodology studies only the “things said” as emergences and transformations, without speculation about the collective meaning of the statements of the things said. A statement is the set of rules that makes an expression — a phrase, a proposition, an act of speech — into meaningful discourse, and is conceptually different from signification; thus, the expression “The gold mountain is in California” is discursively meaningless if it is unrelated to the geographic reality of California. Therefore, the function of existence is necessary for an énoncé (statement) to have a discursive meaning.
As a set of rules, the statement has special meaning in the archaeology of knowledge, because it is the rules that render an expression discursively meaningful, while syntax and semantics are additional rules that make an expression significative. The structures of syntax and semantics alone are insufficient to determine the discursive meaning of an expression: a grammatically correct sentence might lack discursive meaning, and, inversely, a grammatically incorrect sentence might be discursively meaningful; even a group of letters combined in such a way that no recognizable lexical item is formed can possess discursive meaning, e.g. QWERTY identifies a type of keyboard layout for typewriters and computers.
The meaning of an expression depends upon the conditions in which the expression emerges and exists within the discourse of a field or discipline; the discursive meaning of an expression is determined by the statements that precede and follow it. The énoncés (statements), then, constitute a network of rules establishing which expressions are discursively meaningful; these rules are the preconditions for signifying propositions, utterances, and acts of speech to have discursive meaning. The analysis then deals with the organized dispersion of statements, called discursive formations, and Foucault reiterates that the archaeology of knowledge he outlines is only one possible method of historical analysis.
Reception
The philosopher Gilles Deleuze describes The Archaeology of Knowledge as, "the most decisive step yet taken in the theory-practice of multiplicities."
See also
Foucauldian discourse analysis
References
Further reading
Deleuze, Gilles. 1986. Foucault. Trans. Sean Hand. London: Athlone, 1988.
Foucault, Michel. 1969. The Archaeology of Knowledge. Trans. A. M. Sheridan Smith. London and New York: Routledge, 2002.
1969 non-fiction books
Books about discourse analysis
Éditions Gallimard books
French non-fiction books
Philosophy books
Works by Michel Foucault
Regional power
In international relations, the term regional power has been used since the late 20th century for a sovereign state that exercises significant power within its geographical region. States that wield unrivaled power and influence within a region of the world possess regional hegemony.
Characteristics
Regional powers shape the polarity of a regional area. Typically, regional powers have capabilities which are important in the region, but do not have capabilities at a global scale. Slightly contrasting definitions differ as to what makes a regional power. The European Consortium for Political Research defines a regional power as 'a state belonging to a geographically defined region, dominating this region in economic and military terms, able to exercise hegemonic influence in the region and considerable influence on the world scale, willing to make use of power resources and recognized or even accepted as the regional leader by its neighbors.'
The German Institute of Global and Area Studies states that a regional power must:
Form part of a definable region with its own identity
Claim to be a regional power (self-image as a regional power)
Exert decisive influence on the geographic extension of the region as well as on its ideological construction
Dispose over comparatively high military, economic, demographic, political, and ideological capabilities
Be well integrated into the region
Define the regional security agenda to a high degree
Be appreciated as a regional power by other powers in the region and beyond, especially by other regional powers
Be well connected with regional and global forums
Regional powers
In this list are states that have been described as regional powers by international relations and political science academics, analysts, or other experts. These states, to some extent, meet the criteria for regional power status, as described above. Different experts have differing views on exactly which states are regional powers. States are arranged by their region, and in alphabetical order.
Africa
Even though the economic weight of Africa is relatively low compared to other continents, and more than two-thirds of African countries are among the least developed states in the world, Africa's rich natural resources and diverse cultures have the potential to enable future development.
Although South Africa was diplomatically isolated during the latter years of the apartheid era, it is considered to have successfully reintegrated into international affairs over the last 20 years. It is recognized as the only newly industrialized country in Africa and plays a crucial role in BRICS and the G20.
Nigeria is often referred to as the "Giant of Africa" due to both its population and economy being the largest in Africa and the cultural influence that it holds over other countries in Sub-Saharan Africa through its movie industry and mass media. Nigeria is also the largest oil producer in Africa.
Nigeria and South Africa are among the largest African economies; both have GDPs over $250 billion (nominal) and $700 billion (PPP) as of 2020.
Sub-Saharan Africa
Asia
Historically, Imperial China was the dominant power in East Asia. From the late 19th century, the Empire of Japan initiated far-reaching Westernizing reforms, and rapidly industrialized, to become a major power in Asia by the time of World War I, as one of the Allied powers. With economic turmoil, Japan's expulsion from the League of Nations, and its interest in expansion on the mainland, Japan became one of the three main Axis powers in World War II.
Since the late 20th century, regional alliances, economic progress, and contrasting military power changed the strategic and regional power balance in Asia. In recent years, a re-balancing of military and economic power among emerging powers, such as China and India, has resulted in significant changes in the geopolitics of Asia. China and Japan have also gained greater influence over regions beyond Asia. In recent decades, South Korea has emerged as a significant economic and cultural power in East Asia. Japan and South Korea are important allies for the United States in the Indo-Pacific region.
East Asia
Southeast Asia
South Asia
West Asia/Middle East
Europe
Russia – the dominant part of a former superpower, the Soviet Union – is now considered a potential superpower and has historically been the primary geopolitical force in Eastern Europe. France, Germany, Italy, and the United Kingdom are collectively known as the Big Four of Western Europe, playing pivotal roles as part of the NATO Quint in the security of the Western Bloc. Most of the continent is now integrated as a consequence of the enlargement of the European Union, which is sometimes considered a great power as a whole, despite not being a sovereign state. Historically, the dominant powers of Europe created colonial empires (such as the Belgian, British, Danish, Dutch, French, German, Italian, Portuguese, Russian, and Spanish Empires).
Eastern Europe
Central Europe
Western Europe
Southern Europe
North America
The United States is the primary geopolitical force in North America and is widely considered the sole contemporary superpower globally. It dominates the region so heavily that its neighbors, Canada and Mexico, both middle powers in the region, are generally not considered regional powers. Despite having a large enough economy to be a member of the G7, Canada is not a regional power for two reasons: it is militarily secure as a result of U.S. hegemony, and it has become financially comfortable through its dependence on, and deep integration with, a robust U.S. economy. Mexico is an emerging power that could probably be viewed as a regional power if grouped with Latin America, or as a definite regional power if considered within Middle America or Hispanic America, owing to its economic size and diverse cultural heritage. However, like Canada, the Mexican economy is highly reliant on the U.S., with about 80% of its exports going to the U.S. alone.
Oceania
Australia is considered to be a regional power due to its significant commercial and diplomatic relations in the Asia–Pacific region since the late 1990s.
South America
From the Age of Discovery onward, Portugal and Spain divided most of South America between them as the foremost colonial powers on the continent. Following decolonization in the first half of the 19th century, the European powers withdrew and new nations were established, although European cultural influence and languages remain predominant in Latin America.
Brazil is considered one of the most compelling geopolitical powers in South America, as the country has the largest population, landmass, and economy on the continent. It possesses large stocks of natural resources, including valuable minerals and a tenth of the world's fresh water, and it is one of the countries containing the Earth's largest remaining rainforest. Brazil has an important role in international relations, especially in economic and global environmental issues.
See also
List of historical great powers
List of modern great powers
Middle power
Notes
Considered a great power
Member of AUKUS
Member of OEI
Member of BRICS
Member of CIVETS
Member of OECD
Member of Pacific Alliance
Member of D-8
Member of G7
Member of G-14
Member of G-15
Member of G20
Member of MIKTA
Member of OPEC
Member of QUAD
Member of the Shanghai Cooperation Organisation (SCO)
One of the G4 nations
Permanent member of the UN Security Council
References
Bibliography
Types of countries
20th-century neologisms
Hegemony
Political science terminology
Political terminology
International relations theory
Unit of analysis
The unit of analysis is the entity that frames what is being looked at in a study, or the entity being studied as a whole. In social science research, at the macro level, the most commonly referenced unit of analysis, considered to be a society, is the state (polity, i.e. the country). At the meso level, common units of observation include groups, organizations, and institutions, and at the micro level, individual people.
Unit of analysis vs the level of analysis
Unit of analysis is closely related to the term level of analysis, and some scholars have used them interchangeably, while others argue for a need to distinguish them. Ahmet Nuri Yurdusev wrote that "the level of analysis is more of an issue related to the framework/context of analysis and the level at which one conducts one's analysis, whereas the question of the unit of analysis is a matter of the 'actor' or the 'entity' to be studied". Manasseh Wepundi noted the difference between "the unit of analysis, that is the phenomenon about which generalizations are to be made, that which each 'case' in the data file represents and the level of analysis, that is, the manner in which the units of analysis can be arrayed on a continuum from the very small (micro) to very large (macro) levels."
Unit of analysis vs unit of observation
The unit of analysis should also not be confused with the unit of observation. The unit of observation is a subset of the unit of analysis. A study may have a differing unit of observation and unit of analysis: for example, in community research, the research design may collect data at the individual level of observation but the level of analysis might be at the neighborhood level, drawing conclusions on neighborhood characteristics from data collected from individuals. Together, the unit of observation and the level of analysis define the population of a research enterprise.
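To make the distinction concrete for readers who work with data, the sketch below is an editorial illustration only: the dataset and variable names are hypothetical and not drawn from the literature cited here. It collects records at the individual level of observation and aggregates them to a neighborhood-level unit of analysis, assuming the pandas library is available.

```python
# Minimal sketch: individuals are the unit of observation,
# neighborhoods are the unit of analysis. All data are hypothetical.
import pandas as pd

# Unit of observation: one row per surveyed individual.
individuals = pd.DataFrame({
    "neighborhood": ["A", "A", "B", "B", "B"],
    "income": [32000, 41000, 55000, 61000, 47000],
    "trusts_neighbors": [1, 0, 1, 1, 1],
})

# Unit of analysis: one row per neighborhood, obtained by aggregating
# the individual-level observations collected within it.
neighborhoods = individuals.groupby("neighborhood").agg(
    mean_income=("income", "mean"),
    share_trusting=("trusts_neighbors", "mean"),
    respondents=("income", "size"),
)
print(neighborhoods)
```

Conclusions drawn from the aggregated table are statements about neighborhoods, not about the individuals observed within them.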
Countries as units of analysis
Dependency theory and world-systems analysis, drawing on historical evidence, challenged the treatment of countries as societies or units of analysis and the assumption that each country develops separately through stages from agrarian to industrial, from authoritarian to democratic, from backward to advanced. The development of an uneven division of labor (the world-economy) shows factors of causality that account for changes within countries, indicating that countries are part of a larger society or historical social system with systemic patterns that account for global inequality.
The literature of international relations provides a good example of the use of units of analysis.
See also
Statistical unit
References
External links
Unit of analysis
Choice of unit of analysis
Social research
Metahistory: The Historical Imagination in Nineteenth-century Europe
Metahistory: The Historical Imagination in Nineteenth-century Europe is a work of historiography by Hayden White first published in 1973.
On the second page of his introduction, White stated:
The theoretical framework is outlined in the first 50 pages of the book, which consider in detail eight major figures of 19th-century history and the philosophy of history. The larger context of historiography and writing in general is also considered. White's approach uses systematically a fourfold structural schema with two terms mediating between a pair of opposites.
Synopsis
According to White, historians begin their work by constituting a chronicle of events, which is then organized into a coherent story. These are two preliminary steps taken before the material is processed into a plot, argued in a particular mode, so as to express an ideology. Thus the historical work is "a verbal structure in the form of a narrative prose discourse that purports to be a model, or icon, of past structures and processes in the interest of explaining what they were by representing them".
For the typologies of emplotment, argumentation, and ideology, White draws on works by Northrop Frye, Stephen Pepper, and Karl Mannheim. His four basic emplotments are provided by the archetypal genres of romance, comedy, tragedy, and satire. The modes of argumentation, following Pepper's 'adequate root metaphors', are formist, organicist, mechanistic, and contextualist. Among the main types of ideology White adopts anarchism, conservatism, radicalism, and liberalism. White affirms that elective affinities link the three different aspects of a work, and that only four combinations (out of 64) are without internal inconsistencies or 'tensions'. The limitation arises through a general mode of functioning (representation, reduction, integration, or negation), which White assimilates to one of the four main tropes: metaphor, metonymy, synecdoche, and irony. Structuralists such as Roman Jakobson and Émile Benveniste have mostly used an opposition between the first two, but White refers to an earlier classification, adopted by Giambattista Vico, and contrasts metaphor with irony.
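As a purely arithmetical aside (an editorial illustration, not part of White's text), the figure of 64 follows from crossing the three four-fold typologies, after which the elective affinities retain one consistent combination per trope:

\[
4\ (\text{emplotments}) \times 4\ (\text{modes of argument}) \times 4\ (\text{ideologies}) = 64\ \text{combinations},
\]

of which only four are held to be free of internal 'tension'.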
The exemplary figures chosen by White present the ideal types of historians and philosophers.
Reception
Frank Ankersmit has forcefully asserted the importance of Metahistory for the English-speaking world. In the view of Ankersmit and like-minded scholars, White's work has made obsolete the view of language as a neutral medium in historiography and has provided a way to treat methodological issues at a level higher than elementary propositions and atomic facts. So, with it, "philosophy of history finally, belatedly, underwent its linguistic turn and became part of the contemporary intellectual scene."
Norman Levitt has identified White as "the most magisterial spokesman" for relativist and postmodernist historiography, where "[w]hen one particular narrative prevails, the dirty work is invariably done by 'rhetoric', never evidence and logic, which are, in any case, simply sleight-of-language designations for one kind of rhetorical strategy".
In his essay, entitled "Revisiting History in Hayden White's Philosophy," Mehdi Ghasemi refers to White's “Historical Fiction, Fictional History, and Historical Reality,” in which he writes “What we postmodernists are against is a professional historiography”. This statement has inspired Ghasemi to reexamine a number of White's works from the perspective of postmodernism so as to find out in what ways and to what extent White is a postmodernist. In addition, in his essay, Ghasemi highlights a number of preoccupations of professional historiography and argues how White deploys the discourse of postmodernism to dismantle them.
Be that as it may, it is unclear whether White himself would care to be closely identified with relativist and postmodernist schools of thought, given his sharp critiques of several key figures associated with those schools (not only postmodernism's outspoken official proponent, Jean-Francois Lyotard, but also—and more directly—certain unofficial poststructuralist exponents such as Michel Foucault, Roland Barthes and Jacques Derrida, whom White dubbed "absurdist critics"). What is clear is that White was, at the very least, stimulated by the ideas of several of these figures, particularly Barthes (whom White honored in The Content of the Form with an epigraph ["Le fait n'a jamais qu'une existence linguistique"] and the rueful remark that Barthes has been "profoundly missed" since his death) and Foucault (with whose work White demonstrates intense engagement in the essay "Foucault's discourse: The Historiography of Anti-Humanism"). Furthermore, White has denied the charge of relativism, averring that the reality of events in the past is not contradicted by literary portrayals of those events.
Along similar lines, White may also be regarded as a traditional moralist, inasmuch as he has asked of historical and fictional narrative, "[O]n what other grounds [than moralism] could a narrative of real events possibly conclude? […] What else could narrative closure consist of than the passage from one moral order to another?"
For several of the reasons given above, White's ideas are somewhat controversial among academic historians, who have expressed both enthusiasm for and frustration with Metahistory. For instance, Arthur Marwick praised it as "a brilliant analysis of the rhetorical techniques of some famous early 19th-century historians ... [who wrote] well before the emergence of professional history." Yet in the very next breath Marwick complained that "White seems to have made very little acquaintanceship with what historians write today."
References
Sources
Hayden White, Metahistory: The Historical Imagination in 19th-century Europe, 1973
1973 non-fiction books
Books about historiography
Books about tropes
English-language books
Kshatriya
Kshatriya (from Sanskrit kṣatra, "rule, authority"; also called Rajanya) is one of the four varnas (social orders) of Hindu society and is associated with the warrior aristocracy. The Sanskrit term kṣatriyaḥ is used in the context of later Vedic society wherein members were organised into four classes: brahmin, kshatriya, vaishya, and shudra.
History
Early Rigvedic tribal monarchy
The administrative machinery of Vedic India was headed by a tribal king called a Rajan, whose position may or may not have been hereditary. The king may have been elected in a tribal assembly (called a Samiti), which included women. The Rajan protected the tribe and its cattle, was assisted by a priest, and did not maintain a standing army, though in the later period rulership appears to have emerged as a distinct social class. The concept of the fourfold varna system is not yet recorded in this period.
Later Vedic period
The Purusha Sukta hymn of the Rigveda describes the symbolic creation of the four varnas through a cosmic sacrifice (yajña). Some scholars consider the Purusha Sukta to be a late interpolation into the Rigveda based on the neological character of the composition, as compared to the more archaic style of the rest of the Vedic literature. Since not all Indians were fully regulated under the varna system in Vedic society, the Purusha Sukta was supposedly composed in order to secure Vedic sanction for the hereditary caste scheme. An alternative explanation is that the word 'Shudra' does not occur anywhere else in the Rig-veda except the Purusha Sukta, leading some scholars to believe that the Purusha Sukta was a composition of the later Rig-vedic period itself, intended to denote, legitimize, and sanctify an oppressive and exploitative class structure that had already come into existence.
Although the Purusha Sukta uses the term rajanya, not Kshatriya, it is considered the first instance in the extant Vedic texts where the four social classes are mentioned together. The use of the term rajanya possibly indicates that the 'kinsmen of the Rajan' (i.e., kinsmen of the ruler) had emerged as a distinct social group by then, such that by the end of the Vedic period the term rajanya was replaced by Kshatriya, where rajanya stresses kinship with the Rajan and Kshatriya denotes power over a specific domain. The term rajanya, unlike the word Kshatriya, essentially denoted status within a lineage, whereas kshatra means "ruling; one of the ruling order".
Jaiswal points out that the term Brahman rarely occurs in the Rig-veda outside the Purusha Sukta and may not have been used for the priestly class. Based on the authority of Pāṇini, Patanjali, Kātyāyana and the Mahabharata, Jayaswal believes that Rajanya was the name of political peoples and that the Rajanyas were, therefore, a democracy (with an elected ruler). Some examples were the Andhaka and Vrsni Rajanyas, who followed the system of elected rulers. Ram Sharan Sharma details how the central chief was elected by the various clan or lineage chiefs, with increasing polarisation between the rajanya (the aristocracy helping the ruler) and the vis (the peasants) leading to a distinction between the chiefs as a separate class (raja, rajanya, kshatra, kshatriya) on the one hand and the vis (clan peasantry) on the other.
The term kshatriya comes from kshatra and implies temporal authority and power which was based less on being a successful leader in battle and more on the tangible power of laying claim to sovereignty over a territory, and symbolising ownership over clan lands. This later gave rise to the idea of kingship.
In the period of the Brahmanas (800 BCE to 700 BCE) there was ambiguity in the position of the varna. In the Panchavimsha Brahmana (13,4,7), the Rajanya are placed first, followed by Brahmana then Vaishya. In Shatapatha Brahmana 13.8.3.11, the Rajanya are placed second. In Shatapatha Brahmana 1.1.4.12 the order is—Brahmana, Vaishya, Rajanya, Shudra. The order of the Brahmanical tradition—Brahmana, Kshatriya, Vaishya, Shudra—became fixed from the time of dharmasutras (450 BCE to 100 BCE). The kshatriya were often considered pre-eminent in Buddhist circles. Even among Hindu societies they were sometimes at rivalry with the Brahmins, but they generally acknowledged the superiority of the priestly class. The Kshatriyas also began to question the yajnas of the historical Vedic religion, which led to religious ideas developed in the Upanishads.
Mahajanapadas
During the period of the Mahajanapadas (c. 600–300 BCE), the gaṇa sangha form of government was an oligarchic republic ruled by Kshatriya clans. However, these kshatriyas did not follow the Vedic religion and were sometimes called degenerate Kshatriyas or Shudras by Brahmanical sources. The kshatriyas served as representatives in the assembly at the capital and debated various issues put before it. Owing to the lack of patronage of Vedic Brahmanism, the kshatriyas of the gana sanghas were often patrons of Buddhism and Jainism. In the Pali canon, Kshatriya is referred to as khattiya.
In the kingdoms of the Mahajanapadas, the king claimed kshatriya status through the Vedic religion. While kings claimed to be kshatriya, some kings came from non-kshatriya origins.
After the Mahajanapada period, most of the prominent royal dynasties in northern India were not kshatriyas. The Nanda Empire, whose rulers were stated to be shudras, destroyed many kshatriya lineages.
Post-Mauryan Kshatriyas
After the collapse of the Maurya Empire, numerous clan-based polities in Punjab, Haryana, and Rajasthan claimed kshatriya status.
The Shakas and Yavanas were considered to be low-status kshatriyas by Brahmin authors.
In the third to fourth centuries CE, kingdoms along the Krishna and Godavari rivers claimed kshatriya status and performed Vedic rituals to legitimate themselves as rulers. During his visit to India in the 7th century, Hieun Tsang noted that kshatriya rulers were ruling kingdoms such as Kabul, Kosala, Bhillamala, Maharashtra and Vallabhi.
Emergence of "Puranic" Kshatriyas
In the era from 300 to 700 CE, new royal dynasties were bestowed kshatriya status by Brahmins by linking them to the kshatriyas of the epics and Puranas. Dynasties began affiliating themselves with the Solar and Lunar dynasties and this gave them legitimation as rulers. In return the newly christened kshatriyas would patronize and reward the Brahmins. The Sanskritic culture of the kshatriyas of this period was heavily influential for later periods and set the style that kshatriyas of later periods appealed to. This process took place both in North India and the Deccan.
Modern era
Writing in the context of how the jajmani system operated in the 1960s, Pauline Kolenda noted that the "caste function of the Kshatriya is to lead and protect the village, and with conquest to manage their conquered lands. The Kshatriyas do perform these functions today to the extent possible, by distributing food as payments to kamins and providing leadership."
A number of castes in the modern era began claiming Kshatriya origin.
Symbols
In rituals, the nyagrodha (Ficus indica or India fig or banyan tree) danda, or staff, is assigned to the kshatriya class, along with a mantra, intended to impart physical vitality or 'ojas'.
Lineage
The Vedas do not mention kshatriya (or varna) of any vamsha (lineage). The lineages of the Itihasa-Purana tradition are: the Solar dynasty (Suryavamsha); and the Lunar dynasty (Chandravamsha/Somavamsha).
There are other lineages, such as Agnivanshi ("fire lineage"), in which an eponymous ancestor is claimed from Agni (fire), and Nagavanshi (snake-born), claiming descent from the Nāgas, whose description can be found in scriptures such as Mahabharata.
See also
Indian caste system
Forward castes
Sanskritisation
References
Citations
Bibliography
Further reading
Ramesh Chandra Majumdar, Bharatiya Vidya Bhavan. History and Culture of Indian People, The Vedic Age. Bharatiya Vidya Bhavan, 1996. pp. 313–314
Varnas in Hinduism
Indian castes
Warriors
Long Peace
"Long Peace", is a term for the unprecedented historical period following the end of World War II in 1945 to the present day. The period of the Cold War (1947–1991) was marked by the absence of major wars between the great powers of the period, the United States and the Soviet Union. First recognized in 1986, the period of "relative peace" has been compared to the relatively-long stability of the Roman Empire, the Pax Romana, or the Pax Britannica, a century of relative peace that existed between the end of the Napoleonic Wars in 1815 and the outbreak of World War I in 1914, during which the British Empire held global hegemony.
In the 1990s, it was thought that the Long Peace was a unique result of the Cold War. However, when the Cold War ended the same trends continued in what has also been called the "New Peace". The period has exhibited more than a quarter of a century of even greater stability and peacefulness and has also shown continued improvements in related measurements such as the number of coups, the amount of repression, and the durability of peace settlements. Though civil wars and lesser military conflicts have occurred, there has been a continued absence of direct conflict between any of the largest economies by gross domestic product; instead, wealthier countries have fought limited small-scale regional conflicts with poorer countries. Conflicts involving smaller economies have also gradually tapered off. Overall, the number of international wars decreased from a rate of six per year in the 1950s to one per year in the 2000s, and the number of fatalities decreased from 240 reported deaths per million to less than 10 reported deaths per million.
In 2012, the European Union was awarded the Nobel Peace Prize "for over six decades [having] contributed to the advancement of peace and reconciliation, democracy and human rights in Europe" by a unanimous decision of the Norwegian Nobel Committee.
Major factors cited as reasons for the Long Peace have included the deterrence effect of nuclear weapons, the economic incentives towards cooperation caused by globalization and international trade, the worldwide increase in the number of democracies, the World Bank's efforts in reduction of poverty, and the effects of the empowerment of women and peacekeeping by the United Nations. However, no factor is a sufficient explanation on its own and so additional or combined factors are likely. Other proposed explanations have included the proliferation of the recognition of human rights, increasing education and quality of life, changes in the way that people view conflicts (such as the presumption that wars of aggression are unjustified), the success of non-violent action, and demographic factors such as the reduction in birthrates.
In the book The Better Angels of Our Nature, Steven Pinker considers that to be part of a trend that has continued since the beginning of recorded history, and other experts have made similar arguments. While there is general agreement among experts that we are in a Long Peace and that wars have declined since the 1950s, Pinker's broader thesis has been contested. Critics have also said that a longer period of relative peace is needed to be certain, or they have emphasized minor reversals in specific trends, such as the increase in battle deaths between 2011 and 2014 due to the Syrian Civil War.
Pinker's work has received some publicity, but most information about the Long Peace and related trends remains outside public awareness, and some data demonstrate a widespread perception that the world has become more dangerous.
See also
Balance of terror
Deterrence theory
Human Security Report Project
Nuclear peace
References
Cold War terminology
Historical eras
1986 neologisms
Cold War historiography
Human taxonomy
Human taxonomy is the classification of the human species (systematic name Homo sapiens, Latin: "wise man") within zoological taxonomy. The systematic genus, Homo, is designed to include both anatomically modern humans and extinct varieties of archaic humans. Current humans have been designated as subspecies Homo sapiens sapiens, differentiated, according to some, from the direct ancestor, Homo sapiens idaltu (with some other research instead classifying idaltu and current humans as belonging to the same subspecies).
Since the introduction of systematic names in the 18th century, knowledge of human evolution has increased drastically, and a number of intermediate taxa have been proposed in the 20th and early 21st centuries. The most widely accepted taxonomy grouping takes the genus Homo as originating between two and three million years ago, divided into at least two species, archaic Homo erectus and modern Homo sapiens, with about a dozen further suggestions for species without universal recognition.
The genus Homo is placed in the tribe Hominini alongside Pan (chimpanzees). The two genera are estimated to have diverged over an extended time of hybridization, spanning roughly 10 to 6 million years ago, with possible admixture as late as 4 million years ago. A subtribe of uncertain validity, grouping archaic "pre-human" or "para-human" species younger than the Homo-Pan split, is Australopithecina (proposed in 1939).
A proposal by Wood and Richmond (2000) would introduce Hominina as a subtribe alongside Australopithecina, with Homo the only known genus within Hominina. Alternatively, following Cela-Conde and Ayala (2003), the "pre-human" or "proto-human" genera of Australopithecus, Ardipithecus, Praeanthropus, and possibly Sahelanthropus, may be placed on equal footing alongside the genus Homo. An even more extreme view rejects the division of Pan and Homo as separate genera, which based on the Principle of Priority would imply the reclassification of chimpanzees as Homo paniscus (or similar).
Categorizing humans based on phenotypes is a socially controversial subject. Biologists originally classified races as subspecies, but contemporary anthropologists reject the concept of race as a useful tool to understanding humanity, and instead view humanity as a complex, interrelated genetic continuum. Taxonomy of the hominins continues to evolve.
History
Human taxonomy on one hand involves the placement of humans within the taxonomy of the hominids (great apes), and on the other the division of archaic and modern humans into species and, if applicable, subspecies. Modern zoological taxonomy was developed by Carl Linnaeus during the 1730s to 1750s. He was the first to develop the idea that, like other biological entities, groups of people could also be given taxonomic classifications. He named the human species Homo sapiens in 1758, as the only member species of the genus Homo, divided into several subspecies corresponding to the great races. The Latin noun homō (genitive hominis) means "human being". The systematic name Hominidae for the family of the great apes was introduced by John Edward Gray (1825). Gray also supplied Hominini as the name of the tribe including both chimpanzees (genus Pan) and humans (genus Homo).
The discovery of the first extinct archaic human species from the fossil record dates to the mid 19th century: Homo neanderthalensis, classified in 1864. Since then, a number of other archaic species have been named, but there is no universal consensus as to their exact number. After the discovery of H. neanderthalensis, which even if "archaic" is recognizable as clearly human, late 19th to early 20th century anthropology for a time was occupied with finding the supposedly "missing link" between Homo and Pan. The "Piltdown Man" hoax of 1912 was the fraudulent presentation of such a transitional species. Since the mid-20th century, knowledge of the development of Hominini has become much more detailed, and taxonomical terminology has been altered a number of times to reflect this.
The introduction of Australopithecus as a third genus, alongside Homo and Pan, in the tribe Hominini is due to Raymond Dart (1925). Australopithecina as a subtribe containing Australopithecus as well as Paranthropus (Broom 1938) is a proposal by Gregory & Hellman (1939). More recently proposed additions to the Australopithecina subtribe include Ardipithecus (1995) and Kenyanthropus (2001). The position of Sahelanthropus (2002) relative to Australopithecina within Hominini is unclear. Cela-Conde and Ayala (2003) propose the recognition of Australopithecus, Ardipithecus, Praeanthropus, and Sahelanthropus (the latter incertae sedis) as separate genera.
Other proposed genera, now mostly considered part of Homo, include:
Pithecanthropus (Dubois, 1894),
Protanthropus (Haeckel, 1895),
Sinanthropus (Black, 1927),
Cyphanthropus (Pycraft, 1928)
Africanthropus (Dreyer, 1935),
Telanthropus (Broom & Anderson 1949),
Atlanthropus (Arambourg, 1954),
Tchadanthropus (Coppens, 1965).
The genus Homo has been taken to originate some two million years ago, since the discovery of stone tools in Olduvai Gorge, Tanzania, in the 1960s. Homo habilis (Leakey et al., 1964) would be the first "human" species (member of genus Homo) by definition, its type specimen being the OH 7 fossils. However, the discovery of more fossils of this type has opened up the debate on the delineation of H. habilis from Australopithecus. In particular, the LD 350-1 jawbone fossil discovered in 2013, dated to 2.8 Mya, has been argued to be transitional between the two. It is also disputed whether H. habilis was the first hominin to use stone tools, as Australopithecus garhi, dated to c. 2.5 Mya, has been found along with stone tool implements. Fossil KNM-ER 1470 (discovered in 1972, designated Pithecanthropus rudolfensis by Alekseyev 1978) is now seen as either a third early species of Homo (alongside H. habilis and H. erectus) at about 2 million years ago, or alternatively as transitional between Australopithecus and Homo.
Wood and Richmond (2000) proposed that Gray's tribe Hominini ("hominins") be designated as comprising all species after the chimpanzee–human last common ancestor by definition, to the inclusion of Australopithecines and other possible pre-human or para-human species (such as Ardipithecus and Sahelanthropus) not known in Gray's time. In this suggestion, the new subtribe of Hominina was to be designated as including the genus Homo exclusively, so that Hominini would have two subtribes, Australopithecina and Hominina, with the only known genus in Hominina being Homo. Orrorin (2001) has been proposed as a possible ancestor of Hominina but not Australopithecina.
Designations alternative to Hominina have been proposed: Australopithecinae (Gregory & Hellman 1939) and Preanthropinae (Cela-Conde & Altaba 2002).
Species
At least a dozen species of Homo other than Homo sapiens have been proposed, with varying degrees of consensus. Homo erectus is widely recognized as the species directly ancestral to Homo sapiens. Most of the other proposed species are alternatively treated as subspecies of either Homo erectus or Homo sapiens; this concerns Homo ergaster in particular. One proposal divides Homo erectus into an African and an Asian variety; the African is Homo ergaster, and the Asian is Homo erectus sensu stricto. (Inclusion of Homo ergaster with Asian Homo erectus is Homo erectus sensu lato.) There appears to be a recent trend, with the availability of ever more difficult-to-classify fossils such as the Dmanisi skulls (2013) or the Homo naledi fossils (2015), to subsume all archaic varieties under Homo erectus.
Subspecies
Homo sapiens subspecies
The recognition or nonrecognition of subspecies of Homo sapiens has a complicated history. The rank of subspecies in zoology is introduced for convenience, and not by objective criteria, based on pragmatic consideration of factors such as geographic isolation and sexual selection. The informal taxonomic rank of race is variously considered equivalent or subordinate to the rank of subspecies, and the division of anatomically modern humans (H. sapiens) into subspecies is closely tied to the recognition of major racial groupings based on human genetic variation.
A subspecies cannot be recognized independently: a species will either be recognized as having no subspecies at all or at least two (including any that are extinct). Therefore, the designation of an extant subspecies Homo sapiens sapiens only makes sense if at least one other subspecies is recognized. H. s. sapiens is attributed to "Linnaeus (1758)" by the taxonomic Principle of Coordination. During the 19th to mid-20th century, it was common practice to classify the major divisions of extant H. sapiens as subspecies, following Linnaeus (1758), who had recognized H. s. americanus, H. s. europaeus, H. s. asiaticus and H. s. afer as grouping the native populations of the Americas, West Eurasia, East Asia and Sub-Saharan Africa, respectively. Linnaeus also included H. s. ferus, for the "wild" form which he identified with feral children, and two other "wild" forms for reported specimens now considered very dubious (see cryptozoology), H. s. monstrosus and H. s. troglodytes.
There were variations and additions to the categories of Linnaeus, such as H. s. tasmanianus for the native population of Australia. Bory de St. Vincent in his Essai sur l'Homme (1825) extended Linnaeus's "racial" categories to as many as fifteen: Leiotrichi ("smooth-haired"): japeticus (with subraces), arabicus, iranicus, indicus, sinicus, hyperboreus, neptunianus, australasicus, columbicus, americanus, patagonicus; Oulotrichi ("crisp-haired"): aethiopicus, cafer, hottentotus, melaninus. Similarly, Georges Vacher de Lapouge (1899) also had categories based on race, such as priscus, spelaeus (etc.).
Homo sapiens neanderthalensis was proposed by King (1864) as an alternative to Homo neanderthalensis. There have been "taxonomic wars" over whether Neanderthals were a separate species since their discovery in the 1860s. Pääbo (2014) frames this as a debate that is unresolvable in principle, "since there is no definition of species perfectly describing the case." Louis Lartet (1869) proposed Homo sapiens fossilis based on the Cro-Magnon fossils.
There are a number of proposals of extinct varieties of Homo sapiens made in the 20th century. Many of the original proposals did not use explicit trinomial nomenclature, even though they are still cited as valid synonyms of H. sapiens by Wilson & Reeder (2005). These include: Homo grimaldii (Lapouge, 1906),
Homo aurignacensis hauseri (Klaatsch & Hauser, 1910),
Notanthropus eurafricanus (Sergi, 1911),
Homo fossilis infrasp. proto-aethiopicus (Giuffrida-Ruggeri, 1915),
Telanthropus capensis (Broom, 1917),
Homo wadjakensis (Dubois, 1921),
Homo sapiens cro-magnonensis, Homo sapiens grimaldiensis (Gregory, 1921),
Homo drennani (Kleinschmidt, 1931),
Homo galilensis (Joleaud, 1931) = Paleanthropus palestinus (McCown & Keith, 1932).
Rightmire (1983) proposed Homo sapiens rhodesiensis.
After World War II, the practice of dividing extant populations of Homo sapiens into subspecies declined. An early authority explicitly avoiding the division of H. sapiens into subspecies was Grzimeks Tierleben, published 1967–1972.
A late example of an academic authority proposing that the human racial groups should be considered taxonomical subspecies is John Baker (1974). The trinomial nomenclature Homo sapiens sapiens became popular for "modern humans" in the context of Neanderthals being considered a subspecies of H. sapiens in the second half of the 20th century. Derived from the convention, widespread in the 1980s, of considering two subspecies, H. s. neanderthalensis and H. s. sapiens, the explicit claim that "H. s. sapiens is the only extant human subspecies" appears in the early 1990s.
Since the 2000s, the extinct Homo sapiens idaltu (White et al., 2003) has gained wide recognition as a subspecies of Homo sapiens, but even in this case there is a dissenting view arguing that "the skulls may not be distinctive enough to warrant a new subspecies name". H. s. neanderthalensis and H. s. rhodesiensis continue to be considered separate species by some authorities, but the 2010s discovery of genetic evidence of archaic human admixture with modern humans has reopened the details of taxonomy of archaic humans.
Homo erectus subspecies
Homo erectus, since its introduction in 1892, has been divided into numerous subspecies, many of them formerly considered individual species of Homo. None of these subspecies enjoys universal consensus among paleontologists.
Homo erectus erectus (Java Man) (1970s)
Homo erectus yuanmouensis (Yuanmou Man) (Li et al., 1977)
Homo erectus lantianensis (Lantian Man) (Woo Ju-Kang, 1964)
Homo erectus nankinensis (Nanjing Man) (1993)
Homo erectus pekinensis (Peking Man) (1970s)
Homo erectus palaeojavanicus (Meganthropus) (Tyler, 2001)
Homo erectus soloensis (Solo Man) (Oppenoorth, 1932)
Homo erectus tautavelensis (Tautavel Man) (de Lumley and de Lumley, 1971)
Homo erectus georgicus (1991)
Homo erectus bilzingslebenensis (Vlček, 2002)
See also
Footnotes
References
Taxonomy
Social order
The term social order can be used in two senses: In the first sense, it refers to a particular system of social structures and institutions. Examples are the ancient, the feudal, and the capitalist social order. In the second sense, social order is contrasted to social chaos or disorder and refers to a stable state of society in which the existing social structure is accepted and maintained by its members. The problem of order or Hobbesian problem, which is central to much of sociology, political science and political philosophy, is the question of how and why it is that social orders exist at all.
Sociology
Thomas Hobbes is recognized as the first to clearly formulate the problem, to answer which he conceived the notion of a social contract.
Social theorists (such as Karl Marx, Émile Durkheim, Talcott Parsons, and Jürgen Habermas) have proposed different explanations for what a social order consists of, and what its real basis is. For Marx, it is the relations of production or economic structure which is the basis of social order. For Durkheim, it is a set of shared social norms. For Parsons, it is a set of social institutions regulating the pattern of action-orientation, which again are based on a frame of cultural values. For Habermas, it is all of these, as well as communicative action.
Principle of extensiveness
Another key factor concerning social order is the principle of extensiveness. This states that the more norms a society has, and the more important those norms are to it, the better these norms tie and hold the group together as a whole.
A good example of this is smaller religions based in the U.S., such as the Amish. Many Amish live together in communities and because they share the same religion and values, it is easier for them to succeed in upholding their religion and views because their way of life is the norm for their community.
Groups and networks
In every society, people belong to groups, such as businesses, families, churches, athletic groups, or neighborhoods. The structure inside of these groups mirrors that of the whole society. There are networks and ties between groups, as well as inside of each of the groups, which create social order.
Status groups
"Status groups" can be based on a person's characteristics such as race, ethnicity, sexual orientation, religion, caste, region, occupation, physical attractiveness, gender, education, age, etc. They are defined as "a subculture having a rather specific rank (or status) within the stratification system. That is, societies tend to include a hierarchy of status groups, some enjoying high ranking and some low." One example of this hierarchy is the prestige of a university professor compared to that of a garbage man.
A certain lifestyle usually distinguishes the members of different status groups. For example, around the holidays a Jewish family may celebrate Hanukkah while a Christian family may celebrate Christmas. Other cultural differences such as language and cultural rituals identify members of different status groups.
Smaller groups exist inside of one status group. For instance, one can belong to a status group based on one's race and a social class based on financial ranking. This may cause strife for the individual when they feel they must choose to side with either their status group or their social class. For example, a wealthy African American man may feel he has to take a side on an issue on which the opinions of poor African Americans and wealthy white Americans are divided, and find his class and his status group opposed.
Values and norms
Values can be defined as "internal criteria for evaluation". Values are also split into two categories: individual values, which pertain to things we think have worth, and social values. Social values are our desires modified according to ethical principles or according to the groups we associate with: friends, family, or co-workers.
Norms tell us what people ought to do in a given situation. Unlike values, norms are enforced externally – or outside of oneself. A society as a whole determines norms, and they can be passed down from generation to generation.
Power and authority
An exception to the idea of values and norms as social order-keepers is deviant behavior. Not everyone in a society abides by a set of personal values or the group's norms all the time. For this reason, it is generally deemed necessary for a society to have authority. The opposing view holds that the need for authority stems from social inequality.
In a class society, those who hold positions of power and authority are among the upper class. Norms differ for each class because the members of each class were raised differently and hold different sets of values. Tension can form, therefore, between the upper class and lower class when laws and rules are put in place that do not conform to the values of both classes.
Spontaneous order
Order does not necessarily need to be controlled by the government. Individuals pursuing self-interest can produce predictable systems. These systems, being planned by more than one person, may actually be preferable to those planned by a single person, which means that predictability may be achievable without a central government's control. These stable expectations do not necessarily lead to individuals behaving in ways that are considered beneficial to group welfare. Considering this, Thomas Schelling studied neighborhood racial segregation. His findings suggest that interaction can produce predictability, but it does not always increase social order. In his research he found that "when all individuals pursue their own preferences, the outcome is segregation rather than integration," as stated in "Theories of Social Order", edited by Michael Hechter and Christine Horne.
Social honor
Social honor can also be referred to as social status. It is considered the distribution of prestige, or "the approval, respect, admiration, or deference a person or group is able to command by virtue of his or its imputed qualities or performances". Most often, people associate social honor with the place a person occupies within material systems of wealth and power. Since most of society finds wealth and power desirable, people respect or envy those who have more than they do. When social honor is referred to as social status, it deals with the rank of a person within the stratification system. Status can be achieved, when a person's position is gained on the basis of merit, in other words by achievement and hard work, or it can be ascribed, when a position is assigned to individuals or groups without regard for merit but because of certain traits beyond their control, such as race, sex, or parental social standing. An example of ascribed status is Kate Middleton, who married a prince. An example of achieved status is Oprah Winfrey, an African American woman from poverty who worked her way to being a billionaire.
Attainment
Two different theories exist that attempt to explain and account for social order. The first holds that "order results from a large number of independent decisions to transfer individual rights and liberties to a coercive state in return for its guarantee of security for persons and their property, as well as its establishment of mechanisms to resolve disputes," as stated in Theories of Social Order by Hechter and Horne. The second locates "the ultimate source of social order as residing not in external controls but in a concordance of specific values and norms that individuals somehow have managed to internalize," also as stated in Theories of Social Order by Hechter and Horne. The two arguments for how social order is attained are very different: one argues that it is achieved through outside influence and control, and the other argues that it can only be attained when the individual willingly follows the norms and values that they have grown accustomed to and internalized. Weber's insistence on the importance of domination and symbolic systems in social life was retained by Pierre Bourdieu, who developed the idea of social orders, ultimately transforming it into a theory of fields.
See also
Anti-social behaviour
Antinomianism
Conformity
Norm (sociology)
Organic crisis
Social hierarchy
References
Further reading
Hobbes, T. Leviathan or The Matter, Forme and Power of a Common Wealth Ecclesiasticall and Civil.
Sociological terminology
Structural functionalism
Cultural studies
Cultural studies is a politically engaged postdisciplinary academic field that explores the dynamics of especially contemporary culture (including the politics of popular culture) and its social and historical foundations. Cultural studies researchers generally investigate how cultural practices relate to wider systems of power associated with, or operating through, social phenomena. These include ideology, class structures, national formations, ethnicity, sexual orientation, gender, and generation. Employing cultural analysis, cultural studies views cultures not as fixed, bounded, stable, and discrete entities, but rather as constantly interacting and changing sets of practices and processes. The field of cultural studies encompasses a range of theoretical and methodological perspectives and practices. Although distinct from the discipline of cultural anthropology and the interdisciplinary field of ethnic studies, cultural studies draws upon and has contributed to each of these fields.
Cultural studies was initially developed by British Marxist academics in the late 1950s, 1960s, and 1970s, and has been subsequently taken up and transformed by scholars from many different disciplines around the world. Cultural studies is avowedly and even radically interdisciplinary and can sometimes be seen as anti-disciplinary. A key concern for cultural studies practitioners is the examination of the forces within and through which socially organized people conduct and participate in the construction of their everyday lives.
Cultural studies combines a variety of politically engaged critical approaches, drawing on semiotics, Marxism, feminist theory, ethnography, post-structuralism, postcolonialism, social theory, political theory, history, philosophy, literary theory, media theory, film/video studies, communication studies, political economy, translation studies, museum studies and art history/criticism to study cultural phenomena in various societies and historical periods. Cultural studies seeks to understand how meaning is generated, disseminated, contested, bound up with systems of power and control, and produced from the social, political and economic spheres within a particular social formation or conjuncture. The movement has generated important theories of cultural hegemony and agency. Its practitioners attempt to explain and analyze the cultural forces related to, and the processes of, globalization.
During the rise of neoliberalism in Britain and the US, cultural studies both became a global movement and attracted the attention of many conservative opponents, both within and beyond universities, for a variety of reasons. A worldwide movement of students and practitioners, with a raft of scholarly associations and programs, annual international conferences, and publications, carries on work in this field today. Distinct approaches to cultural studies have emerged in different national and regional contexts.
Overview
Sardar's characteristics
In his 1994 book, Introducing Cultural Studies, orientalist scholar Ziauddin Sardar lists the following five main characteristics of cultural studies:
The objective of cultural studies is to understand culture in all its complex forms and to analyze the social and political context in which culture manifests itself.
Cultural studies is a site of both study/analysis and political criticism. For example, not only would a cultural studies scholar study an object, but they may also connect this study to a larger political project.
Cultural studies attempts to expose and reconcile constructed divisions of knowledge that purport to be grounded in nature.
Cultural studies has a commitment to an ethical evaluation of modern society.
One aim of cultural studies could be to examine cultural practices and their relation to power, following critical theory. For example, a study of a subculture (such as white working-class youth in London) would consider their social practices against those of the dominant culture (in this example, the middle and upper classes in London who control the political and financial sectors that create policies affecting the well-being of white working-class youth in London).
British cultural studies
Dennis Dworkin writes that "a critical moment" in the beginning of cultural studies as a field was when Richard Hoggart used the term in 1964 in founding the Centre for Contemporary Cultural Studies (CCCS) at the University of Birmingham. The centre would become home to the development of the intellectual orientation that has become known internationally as the "Birmingham School" of cultural studies, thus becoming the world's first institutional home of cultural studies.
Hoggart appointed as his assistant Stuart Hall, who would effectively be directing CCCS by 1968. Hall formally assumed the directorship of CCCS in 1971, when Hoggart left Birmingham to become Assistant Director-General of UNESCO. Thereafter, the field of cultural studies became closely associated with Hall's work. In 1979, Hall left Birmingham to accept a prestigious chair in sociology at the Open University, and Richard Johnson took over the directorship of the centre.
In the late 1990s, "restructuring" at the University of Birmingham led to the elimination of CCCS and the creation of a new Department of Cultural Studies and Sociology (CSS) in 1999. Then, in 2002, the university's senior administration abruptly announced the disestablishment of CSS, provoking a substantial international outcry. The immediate reason for disestablishment of the new department was an unexpectedly low result in the UK's Research Assessment Exercise of 2001, though a dean from the university attributed the decision to "inexperienced 'macho management'." The RAE, a holdover initiative of the Margaret Thatcher-led British government of 1986, determines research funding for university programs.
To trace the development of British Cultural Studies, see, for example, the work of Richard Hoggart, E. P. Thompson, Raymond Williams, Stuart Hall, Paul Willis, Angela McRobbie, Paul Gilroy, David Morley, Charlotte Brunsdon, Richard Dyer, and others. There are also many published overviews of the historical development of cultural studies, including Graeme Turner's British Cultural Studies: An Introduction, 3rd Ed., and John Hartley's A Short History of Cultural Studies.
Stuart Hall's directorship of CCCS at Birmingham centre
Beginning in 1964, after the initial appearance of the founding works of British Cultural Studies in the late 1950s, Stuart Hall's pioneering work at CCCS, along with that of his colleagues and postgraduate students, gave shape and substance to the field of cultural studies. This would include such people as Paul Willis, Dick Hebdige, David Morley, Charlotte Brunsdon, John Clarke, Richard Dyer, Judith Williamson, Richard Johnson, Iain Chambers, Dorothy Hobson, Chris Weedon, Tony Jefferson, Michael Green and Angela McRobbie.
Many cultural studies scholars employed Marxist methods of analysis, exploring the relationships between cultural forms (i.e., the superstructure) and the political economy (i.e., the base). By the 1970s, the work of Louis Althusser had radically rethought the Marxist account of base and superstructure in ways that had a significant influence on the "Birmingham School." Much of the work done at CCCS studied youth-subcultural expressions of antagonism toward "respectable" middle-class British culture in the post-WWII period. Also during the 1970s, the politically formidable British working classes were in decline. Britain's manufacturing industries, while continuing to grow in output and value, were decreasing in share of GDP and numbers employed, and union rolls were shrinking. Amid these labour losses, millions of working-class Britons backed the rise of Margaret Thatcher. For Stuart Hall and his colleagues, this shift in loyalty from the Labour Party to the Conservative Party had to be explained in terms of cultural politics, which they had been tracking even before Thatcher's first victory. Some of this work was presented in the cultural studies classic, Policing the Crisis, and in other later texts such as Hall's The Hard Road to Renewal: Thatcherism and the Crisis of the Left, and New Times: The Changing Face of Politics in the 1990s.
In 2016, Duke University Press launched a new series of Stuart Hall's collected writings, many of which detail his major and decisive contributions toward the establishment of the field of cultural studies. In 2023, a new Stuart Hall Archive Project was launched at the University of Birmingham to commemorate Hall's contributions in pioneering the field of cultural studies at CCCS.
Late-1970s and beyond
By the late 1970s, scholars associated with The Birmingham School had firmly placed questions of gender and race on the cultural studies agenda, where they have remained ever since. Also by the late 1970s, cultural studies had begun to attract a great deal of international attention. It spread globally throughout the 1980s and 1990s. As it did so, it both encountered new conditions of knowledge production, and engaged with other major international intellectual currents such as poststructuralism, postmodernism, and postcolonialism. The wide range of cultural studies journals now located throughout the world, as shown below, is one indication of the globalization of the field. For overviews of and commentaries on developments in cultural studies during the twenty-first century, see Lawrence Grossberg's Cultural Studies in the Future Tense, Gilbert Rodman's Why Cultural Studies? and Graeme Turner's What's Become of Cultural Studies?
Developments outside the UK
In the US, prior to the emergence of British Cultural Studies, several versions of cultural analysis had emerged largely from pragmatic and liberal-pluralist philosophical traditions. However, in the late 1970s and 1980s, when British Cultural Studies began to spread internationally, and to engage with feminism, poststructuralism, postmodernism, and race, critical cultural studies (i.e., Marxist, feminist, poststructuralist, etc.) expanded tremendously in American universities in fields such as communication studies, education, sociology, and literature. Cultural Studies, the flagship journal of the field, has been based in the US since its founding editor, John Fiske, brought it there from Australia in 1987.
A thriving cultural studies scene has existed in Australia since the late 1970s, when several key CS practitioners emigrated there from the UK, bringing British Cultural Studies with them, after Margaret Thatcher became Prime Minister of the UK in 1979. A school of cultural studies known as cultural policy studies is one of the distinctive Australian contributions to the field, though it is not the only one. Australia also gave birth to the world's first professional cultural studies association (now known as the Cultural Studies Association of Australasia) in 1990. Cultural studies journals based in Australia include International Journal of Cultural Studies, Continuum: Journal of Media & Cultural Studies, and Cultural Studies Review.
In Canada, cultural studies has sometimes focused on issues of technology and society, continuing the emphasis in the work of Marshall McLuhan, Harold Innis, and others. Cultural studies journals based in Canada include Topia: Canadian Journal of Cultural Studies.
In Africa, human rights and Third-World issues are among the central topics treated. There is a thriving cultural and media studies scholarship in Southern Africa, with its locus in South Africa and Zimbabwe. Cultural Studies journals based in Africa include the Journal of African Cultural Studies.
In Latin America, cultural studies have drawn on thinkers such as José Martí, Ángel Rama, and other Latin-American figures, in addition to the Western theoretical sources associated with cultural studies in other parts of the world. Leading Latin American cultural studies scholars include Néstor García Canclini, Jésus Martín-Barbero, and Beatriz Sarlo. Among the key issues addressed by Latin American cultural studies scholars are decoloniality, urban cultures, and postdevelopment theory. Latin American cultural studies journals include the Journal of Latin American Cultural Studies.
Even though cultural studies developed much more rapidly in the UK than in continental Europe, there is significant cultural studies presence in countries such as France, Spain, and Portugal. The field is relatively undeveloped in Germany, probably due to the continued influence of the Frankfurt School, which is now often said to be in its third generation, which includes notable figures such as Axel Honneth. Cultural studies journals based in continental Europe include the European Journal of Cultural Studies, the Journal of Spanish Cultural Studies, French Cultural Studies, and Portuguese Cultural Studies.
In Germany, the term cultural studies specifically refers to the field in the Anglosphere, especially British Cultural Studies, to differentiate it from the German Kulturwissenschaft, which developed along different lines and is characterized by its distance from political science. However, Kulturwissenschaft and cultural studies are often used interchangeably, particularly by lay people.
Throughout Asia, cultural studies have boomed and thrived since at least the beginning of the 1990s. Cultural studies journals based in Asia include Inter-Asia Cultural Studies. In India, the Centre for Study of Culture and Society, Bangalore and the Department of Cultural Studies at The English and Foreign Languages and the University of Hyderabad are two major institutional spaces for Cultural Studies.
Issues, concepts, and approaches
Marxism has been an important influence upon cultural studies. Those associated with CCCS initially engaged deeply with the structuralism of Louis Althusser, and later in the 1970s turned decisively toward Antonio Gramsci. Cultural studies has also embraced the examination of race, gender, and other aspects of identity, as is illustrated, for example, by a number of key books published collectively under the name of CCCS in the late 1970s and early 1980s, including Women Take Issue: Aspects of Women's Subordination (1978), and The Empire Strikes Back: Race and Racism in 70s Britain (1982).
Gramsci and hegemony
To understand the changing political circumstances of class, politics, and culture in the United Kingdom, scholars at The Birmingham School turned to the work of Antonio Gramsci, an Italian thinker, writer, and Communist Party leader. Gramsci had been concerned with similar issues: why would Italian laborers and peasants vote for fascists? What strategic approach is necessary to mobilize popular support in more progressive directions? Gramsci modified classical Marxism, and argued that culture must be understood as a key site of political and social struggle. In his view, capitalists used not only brute force (police, prisons, repression, military) to maintain control, but also penetrated the everyday culture of working people in a variety of ways in their efforts to win popular "consent."
It is important to recognize that for Gramsci, historical leadership, or hegemony, involves the formation of alliances between class factions, and struggles within the cultural realm of everyday common sense. Hegemony was always, for Gramsci, an interminable, unstable and contested process.
Scott Lash writes:
Edgar and Sedgwick write:
Structure and agency
The development of hegemony theory in cultural studies was in some ways consonant with work in other fields exploring agency, a theoretical concept that insists on the active, critical capacities of subordinated people (e.g. the working classes, colonized peoples, women). As Stuart Hall famously argued in his 1981 essay "Notes on Deconstructing 'the Popular'": "ordinary people are not cultural dopes." This insistence on accounting for the agency of subordinated people runs counter to the work of traditional structuralists. Some analysts have, however, been critical of some work in cultural studies that they feel overstates the significance of, or even romanticizes, some forms of popular cultural agency.
Cultural studies often concerns itself with the agency at the level of the practices of everyday life, and approaches such research from a standpoint of radical contextualism. In other words, cultural studies rejects universal accounts of cultural practices, meanings, and identities.
Judith Butler, an American feminist theorist whose work is often associated with cultural studies, wrote that:
Globalization
In recent decades, as capitalism has spread throughout the world via contemporary forms of globalization, cultural studies has generated important analyses of local sites and practices of negotiation with and resistance to Western hegemony.
Cultural consumption
Cultural studies criticizes the traditional view of the passive consumer, particularly by underlining the different ways people read, receive and interpret cultural texts, or appropriate other kinds of cultural products, or otherwise participate in the production and circulation of meanings. On this view, a consumer can appropriate, actively rework, or challenge the meanings circulated through cultural texts. In some of its variants, cultural studies has shifted the analytical focus from traditional understandings of production to consumption – viewed as a form of production (of meanings, of identities, etc.) in its own right. Stuart Hall, John Fiske, and others have been influential in these developments.
A special 2008 issue of the field's flagship journal, Cultural Studies, examined "anti-consumerism" from a variety of cultural studies angles. As Jeremy Gilbert noted in the issue, cultural studies must grapple with the fact that "we now live in an era when, throughout the capitalist world, the overriding aim of government economic policy is to maintain consumer spending levels. This is an era when 'consumer confidence' is treated as the key indicator and cause of economic effectiveness."
The concept of "text"
Cultural studies, drawing upon and developing semiotics, uses the concept of text to designate not only written language, but also television programs, films, photographs, fashion, hairstyles, and so forth; the texts of cultural studies comprise all the meaningful artifacts of culture. This conception of textuality derives especially from the work of the pioneering and influential semiotician, Roland Barthes, but also owes debts to other sources, such as Juri Lotman and his colleagues from Tartu–Moscow School. Similarly, the field widens the concept of culture. Cultural studies approach the sites and spaces of everyday life, such as pubs, living rooms, gardens, and beaches, as "texts."
Culture, in this context, includes not only high culture, but also everyday meanings and practices, a central focus of cultural studies.
Jeff Lewis summarized much of the work on textuality and textual analysis in his cultural studies textbook and a post-9/11 monograph on media and terrorism. According to Lewis, textual studies use complex and difficult heuristic methods and require both powerful interpretive skills and a subtle conception of politics and contexts. The task of the cultural analyst, for Lewis, is to engage with both knowledge systems and texts and observe and analyze the ways the two interact with one another. This engagement represents the critical dimensions of the analysis, its capacity to illuminate the hierarchies within and surrounding the given text and its discourse.
Academic reception
Cultural studies has evolved through its uptake across a variety of different disciplines—anthropology, media studies, communication studies, literary studies, education, geography, philosophy, sociology, politics, and others.
While some have accused certain areas of cultural studies of meandering into political relativism and a kind of empty version of "postmodern" analysis, others hold that at its core, cultural studies provides a significant conceptual and methodological framework for cultural, social, and economic critique. This critique is designed to "deconstruct" the meanings and assumptions that are inscribed in the institutions, texts, and practices that work with and through, and produce and re-present, culture. Thus, while some scholars and disciplines have dismissed cultural studies for its methodological rejection of disciplinarity, its core strategies of critique and analysis have influenced areas of the social sciences and humanities; for example, cultural studies work on forms of social differentiation, control and inequality, identity, community-building, media, and knowledge production has had a substantial impact. Moreover, the influence of cultural studies has become increasingly evident in areas as diverse as translation studies, health studies, international relations, development studies, computer studies, economics, archaeology, and neurobiology.
Cultural studies has also diversified its own interests and methodologies, incorporating a range of studies on media policy, democracy, design, leisure, tourism, warfare, and development. While certain key concepts such as ideology or discourse, class, hegemony, identity, and gender remain significant, cultural studies has long engaged with and integrated new concepts and approaches. The field thus continues to pursue political critique through its engagements with the forces of culture and politics.
Literary scholars
Many cultural studies practitioners work in departments of English or comparative literature. Nevertheless, some traditional literary scholars such as Yale professor Harold Bloom have been outspoken critics of cultural studies. On the level of methodology, these scholars dispute the theoretical underpinning of the movement's critical framework.
Bloom stated his position during the 3 September 2000 episode of C-SPAN's Booknotes, while discussing his book How to Read and Why:
Marxist literary critic Terry Eagleton is not wholly opposed to cultural studies, but has criticised aspects of it and highlighted what he sees as its strengths and weaknesses in books such as After Theory (2003). For Eagleton, literary and cultural theory have the potential to say important things about the "fundamental questions" in life, but theorists have rarely realized this potential.
English departments also host cultural rhetorics scholars. This academic field defines cultural rhetorics as "the study and practice of making meaning and knowledge with the belief that all cultures are rhetorical and all rhetorics are cultural." Cultural rhetorics scholars are interested in investigating topics like climate change, autism, Asian American rhetoric, and more.
Sociology
Cultural studies has also had a substantial impact on sociology. For example, when Stuart Hall left CCCS at Birmingham, it was to accept a prestigious professorship in Sociology at the Open University in Britain. The subfield of cultural sociology, in particular, is the disciplinary home of many cultural studies practitioners. Nevertheless, there are some differences between sociology as a discipline and the field of cultural studies as a whole. While sociology was founded upon various historic works purposefully distinguishing the subject from philosophy or psychology, cultural studies has explicitly interrogated and criticized traditional understandings and practices of disciplinarity. Most CS practitioners think it best that cultural studies neither emulate established disciplines nor aspire to disciplinarity itself. Rather, they promote a kind of radical interdisciplinarity as the basis for cultural studies.
One sociologist whose work has had a major influence on cultural studies is Pierre Bourdieu, whose work makes innovative use of statistics and in-depth interviews. However, although Bourdieu's work has been highly influential within cultural studies, and although Bourdieu regarded his work as a form of science, cultural studies has never embraced the idea that it should aspire toward "scientificity," and has marshalled a wide range of theoretical and methodological arguments against the fetishization of "scientificity" as a basis for cultural studies.
Two sociologists who have been critical of cultural studies, Chris Rojek and Bryan S. Turner, argue in their article, "Decorative sociology: towards a critique of the cultural turn," that cultural studies, particularly the flavor championed by Stuart Hall, lacks a stable research agenda, and privileges the contemporary reading of texts, thus producing an ahistorical theoretical focus. Many, however, would argue, following Hall, that cultural studies have always sought to avoid the establishment of a fixed research agenda; this follows from its critique of disciplinarity. Moreover, Hall and many others have long argued against the misunderstanding that textual analysis is the sole methodology of cultural studies, and have practiced numerous other approaches, as noted above. Rojek and Turner also level the accusation that there is "a sense of moral superiority about the correctness of the political views articulated" in cultural studies.
Science wars
In 1996, physicist Alan Sokal expressed his opposition to cultural studies by submitting a hoax article to a cultural studies journal, Social Text. The article, which was crafted as a parody of what Sokal referred to as the "fashionable nonsense" of postmodernism, was accepted by the editors of the journal, which did not at the time practice peer review. When the paper appeared in print, Sokal published a second article in a self-described "academic gossip" magazine, Lingua Franca, revealing his hoax on Social Text. Sokal stated that his motivation stemmed from his rejection of contemporary critiques of scientific rationalism:
In response to this critique, Jacques Derrida wrote:
Founding works
Hall and others have identified some core originating texts, or the original "curricula," of the field of cultural studies:
Richard Hoggart's The Uses of Literacy
Raymond Williams' Culture and Society and The Long Revolution
E. P. Thompson's The Making of the English Working Class.
See also
Culturology
Cultural Studies Association (US)
European Communication Research and Education Association (Norway)
International Association for Translation and Intercultural Studies (South Korea)
Popular culture studies
References
Sources
Du Gay, Paul, et al. 1997. Doing Cultural Studies: The Story of the Sony Walkman. Culture, Media and Identities. London: SAGE, in association with Open University.
Edgar, Andrew, and Peter Sedgwick. 2005. Cultural Theory: The Key Concepts (2nd ed.). New York: Routledge.
Engel, Manfred. 2008. "Cultural and Literary Studies." Canadian Review of Comparative Literature 31:460–67.
Grossberg, Lawrence (2010). Cultural Studies in the Future Tense. Durham, NC: Duke University Press.
Hall, Gary & Birchall, Claire, eds. (2006). New Cultural Studies: Adventures in Theory. Edinburgh: Edinburgh University Press.
—— 1980. "Cultural Studies: Two Paradigms." Media, Culture, and Society 2.
—— 1992. "Race, Culture, and Communications: Looking Backward and Forward at Cultural Studies." Rethinking Marxism 5(1):10–18.
Hoggart, Richard. 1957. The Uses of Literacy: Aspects of Working Class Life. Chatto and Windus.
Hartley, John (2003). A Short History of Cultural Studies. London: Sage.
Johnson, Richard. 1986–87. "What Is Cultural Studies Anyway?" Social Text 16:38–80.
—— 2004. "Multiplying Methods: From Pluralism to Combination." pp. 26–43 in Practice of Cultural Studies. London: SAGE.
—— "Post-Hegemony? I Don't Think So" Theory, Culture & Society 24(3):95–110.
Lindlof, T. R., and B. C. Taylor. 2002. Qualitative Communication Research Methods (2nd ed.). Thousand Oaks, CA: SAGE.
Longhurst, Brian, Greg Smith, Gaynor Bagnall, Garry Crawford, and Michael Ogborn. 2008. Introducing Cultural Studies (2nd ed.). London: Pearson.
Pollock, Griselda, ed. 1996. Generations and Geographies: Critical Theories and Critical Practices in Feminism and the Visual Arts. Routledge.
—— 2006. Psychoanalysis and the Image. Boston: Blackwell.
Sardar, Ziauddin, Van Loon, Borin (1997). Introducing Cultural Studies. New York: Totem Books.
Smith, Paul. 1991. "A Course In 'Cultural Studies'." The Journal of the Midwest Modern Language Association 24(1):39–49.
—— 2006. "Looking Backwards and Forwards at Cultural Studies." pp. 331–40 in A Companion to Cultural Studies, edited by T. Miller. Malden, MA: Blackwell Publishers. .
Rodman, Gil (2015). Why Cultural Studies? Maldon, MA: Wiley Blackwell.
Turner, Graeme (2003). British Cultural Studies: An Introduction (Third ed.). London: Routledge.
—— 2012. What's Become of Cultural Studies? Los Angeles: SAGE.
Williams, Jeffrey, interviewer. 1994. "Questioning Cultural Studies: An Interview with Paul Smith." Hartford, CT: MLG Institute for Culture and Society, Trinity College. Retrieved 1 July 2020.
Williams, Raymond. 1985. Keywords: A Vocabulary of Culture and Society (revised ed.). New York: Oxford University Press.
—— 1966. Culture and Society, 1780-1950. New York: Harper & Row.
External links
CCCS publications (Annual Reports and Stencilled Papers) of the University of Birmingham
CSAA: Cultural Studies Association of Australasia
Cultural Studies
International Journal of Cultural Studies
Stuart Hall Archive Project, University of Birmingham, UK
Stuart Hall: Selected Writings, Duke University Press
Social sciences
Nationalist historiography
Historiography is the study of how history is written. One pervasive influence upon the writing of history has been nationalism, a set of beliefs about political legitimacy and cultural identity. Nationalism has provided a significant framework for historical writing in Europe and in those former colonies influenced by Europe since the nineteenth century. Typically official school textbooks are based on the nationalist model and focus on the emergence, trials and successes of the forces of nationalism.
Origins
The eighteenth and nineteenth century saw the emergence of nationalist ideologies. John Breuilly notes how the "historical grounding of nationalism was reinforced by its close ties with the emergence of professional academic historical writing". During the French Revolution a national identity was crafted, identifying the common people with the Gauls. In Germany historians and humanists, such as Johann Gottfried Herder and Johann Gottlieb Fichte, identified a linguistic and cultural identity of the German nation, which became the basis of a political movement to unite the fragmented states of this German nation.
A significant historiographical outcome of this movement of German nationalism was the formation of a "Society for Older German Historical Knowledge", which sponsored the editing of a massive collection of documents of German history, the Monumenta Germaniae Historica. The sponsors of the MGH, as it is commonly known, defined German history very broadly; they edited documents concerning all territories where German-speaking people had once lived or ruled. Thus, documents from Italy to France to the Baltic were grist for the mill of the MGH editors.
This model of scholarship focusing on detailed historical and linguistic investigations of the origins of a nation, set by the founders of the MGH, was imitated throughout Europe. In this framework, historical phenomena were interpreted as they related to the development of the nation-state; the state was projected into the past. National histories are thus expanded to cover everything that has ever happened within the largest extent of the expansion of a nation, turning Mousterian hunter-gatherers into incipient Frenchmen. Conversely, historical developments spanning many current countries may be ignored, or analysed from narrow parochial viewpoints.
The efforts of these nineteenth-century historians provided the intellectual foundations both for justifying the creation of new nation states and for the expansion of already existing ones. As Georg Iggers notes, these historians were often highly partisan and "went into the archives to find evidence that would support their nationalistic and class preconceptions and thus give them the aura of scientific authority". Paul Lawrence concurs, noting how, even with nationalisms still without states, historians "often sought to provide a historical basis for the claims to nationhood and political independence of states that did not yet exist".
Time depth and ethnicity
The difficulty faced by any national history is the changeable nature of ethnicity. That one nation may turn into another nation over time, both by splitting (colonization) and by merging (syncretism, acculturation) is implicitly acknowledged by ancient writers; Herodotus describes the Armenians as "colonists of the Phrygians", implying that at the time of writing clearly separate groups originated as a single group. Similarly, Herodotus refers to a time when the "Athenians were just beginning to be counted as Hellenes", implying that a formerly Pelasgian group over time acquired "Greekness". The Alamanni are described by Asinius Quadratus as originally a conglomerate of various tribes which acquired a common identity over time. All these processes are summarized under the term ethnogenesis.
In ancient times, ethnicities often derived their or their rulers' origin from divine or semi-divine founders of a mythical past (for example, the Anglo-Saxons deriving their dynasties from Woden; see also Euhemerism). In modern times, such mythical aetiologies in nationalist constructions of history were replaced by the frequent attempt to link one's own ethnic group to a source as ancient as possible, often known not from tradition but only from archaeology or philology, such as Armenians claiming as their origin the Urartians, the Albanians claiming as their origin the Pelasgians (supposedly including Illyrians, Epirotes, and Ancient Macedonians), the Georgians claiming as their origin the Mushki—all of the mentioned groups being known only from either ancient historiographers or archaeology.
Nationalism and ancient history
Nationalist ideologies frequently employ the results of archaeology and ancient history as propaganda, often significantly distorting them to fit their aims and cultivating national mythologies and national mysticism. Frequently this involves the uncritical identification of one's own ethnic group with some ancient or even prehistoric (known only archaeologically) group, whether mainstream scholarship accepts the historical derivation of the contemporary group from the ancient one as plausible or rejects it as pseudoarchaeology. The decisive point, often assumed implicitly, is that it is possible to derive nationalist or ethnic pride from a population that lived millennia ago and, being known only archaeologically or epigraphically, is not remembered in living tradition.
Examples include Kurds claiming identity with the Medes, Albanians claiming as their origin the Pelasgians, Bulgarians claiming identity with the Thracians, Iraqi propaganda invoking Sumer or Babylonia, and Georgians claiming as their origin the Mushki, all of these groups being known only from either ancient historiographers or archaeology. In extreme cases, nationalists will ignore the process of ethnogenesis altogether and claim ethnic identity of their own group with some scarcely attested ancient ethnicity known to scholarship only by the chances of textual transmission or archaeological excavation.
Historically, various hypotheses regarding the Urheimat of the Proto-Indo-Europeans have been popular objects of patriotic pride, quite regardless of their respective scholarly value:
Albanian nationalism: The descent from the Pelasgians (supposedly including Illyrians, Epirotes, and Ancient Macedonians)
Romanian nationalism: Dacianism or Dacomania
Greek nationalism: The supposedly Greek origins of the ancient Thracians, Illyrians and of the Minoan civilization.
Northern European origins of an Aryan race (Germanic mysticism, Nazi mysticism, Ahnenerbe)
Lithuanian Sarmatism: The Lithuanian origins of the Goths, Sarmatians and other Eastern European peoples.
Pan-Turkism and Neo-Eurasianism postulate mythical origins of humanity or culture in Central Asia (Sun Language Theory, Arkaim)
Slavic nationalisms: Polish Sarmatism, Macedonism, Illyrian movement, Thracomania, etc.
Slovene nationalists and venetic theory
Armenian nationalism: Armenia, Subartu and Sumer
Antiquization: claims continuity between ancient Macedonia and modern North Macedonia
Indian Indigenous Aryanism: believes that the Indo-European peoples originated in South Asia instead of Eastern Europe
Study
Nationalism was so much taken for granted as the "proper" way to organize states and view history that nationalization of history was essentially invisible to historians until fairly recently. Then scholars such as Ernest Gellner, Benedict Anderson, and Anthony D. Smith made attempts to step back from nationalism and view it critically. Historians began to ask themselves how this ideology had affected the writing of history.
Smith, for instance, develops the concept of 'historicism' to describe an emerging belief in the birth, growth, and decay of specific peoples and cultures, which, in the eighteenth and nineteenth centuries, became "increasingly attractive as a framework for inquiry into the past and present and [...] an explanatory principle in elucidating the meaning of events, past and present".
Speaking to an audience of anthropologists, the historian E. J. Hobsbawm pointed out the central role of the historical profession in the development of nationalism:
Martin Bernal's much debated book Black Athena (1987) argues that the historiography on ancient Greece has been in part influenced by nationalism and ethnocentrism. He also claimed that influences by non-Greek or non-Indo-European cultures on Ancient Greek were marginalized.
According to the medieval historian Patrick J. Geary: "[The] modern [study of] history was born in the nineteenth century, conceived and developed as an instrument of European nationalism. As a tool of nationalist ideology, the history of Europe's nations was a great success, but it has turned our understanding of the past into a toxic waste dump, filled with the poison of ethnic nationalism, and the poison has seeped deep into popular consciousness."
By country
Nationalist historiographies have emerged in a number of countries and some have been subject to in-depth scholarly analysis.
Cuba
In 2007, Kate Quinn presented an analysis of the Cuban nationalist historiography.
Indonesia
In 2003, Rommel Curaming analyzed the Indonesian nationalistic historiography.
South Korea
Nationalist historiography in South Korea has been the subject of 2001 study by Kenneth M. Wells.
Thailand
In 2003, Patrick Jory analyzed the Thai nationalistic historiography.
Zimbabwe
In 2004, Terence Ranger noted that "Over the past two or three years there has emerged in Zimbabwe a sustained attempt by the Mugabe regime to propagate what is called ‘patriotic history’."
See also
Afrocentrism
Gothicism
Historical revisionism
Historical negationism
Irredentism
Methodological nationalism
Nationalisms Across the Globe
National myth
Nationalism and archaeology
Nationalization of history
Nazi archaeology
Primordialism
Romantic nationalism
Politics of archaeology in Israel and Palestine
References
Further reading
Nationalism in general
Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism, 2nd. ed. London: Verso, 1991.
Bond, George C. and Angela Gilliam (eds.) Social Construction of the Past: Representation as Power. London: Routledge, 1994.
Díaz-Andreu, Margarita. A World History of Nineteenth-Century Archaeology. Nationalism, Colonialism and the Past. Oxford, Oxford University Press, 2007.
Díaz-Andreu, Margarita and Champion, Tim (eds.) Nationalism and Archaeology in Europe. London: UCL Press; Boulder, Co.: Westview Press, 1996. 978-0813330518 (Westview, pb)
Ferro, Marc. The Use and Abuse of History: Or How the Past Is Taught to Children. London:Routledge, 2003,
Gellner, Ernest. Nations and Nationalism. Ithaca: Cornell University Press, 1983.
Hobsbawm, Eric. Nations and Nationalism since 1780. Cambridge: Cambridge University Press, 1992.
Hobsbawm, Eric J. and Terence Ranger (eds.). The Invention of Tradition. Cambridge: Cambridge University Press, 1992.
Kohl, Philip L. "Nationalism and Archaeology: On the Constructions of Nations and the Reconstructions of the Remote past", Annual Review of Anthropology, 27, (1998): 223–246.
Smith, Anthony D. The Ethnic Origins of Nations. Oxford: Blackwell Publishers, 1988.
Suny, Ronald Grigor. "Constructing Primordialism: Old Histories for New Nations", The Journal of Modern History, 73, 4 (Dec, 2001): 862–896.
Bergunder, Michael. "Contested Past: Anti-Brahmanical and Hindu Nationalist Reconstructions of Indian Prehistory", Historiographia Linguistica, 31, 1 (2004): 59–104.
Fagan, G. (ed.). Archaeological Fantasies: How Pseudoarchaeology Misrepresents the Past and Misleads the Public. Routledge, 2006.
Kohl, Philip L. and Clare Fawcett (eds.). Nationalism, Politics and the Practice of Archaeology. Cambridge University Press, 1996.
Lincoln, Bruce. Theorizing Myth: Narrative, Ideology, and Scholarship. Chicago: University of Chicago Press, 2000.
Specific nationalisms
Baltic
Krapauskas, Virgil. Nationalism and Historiography: The Case of Nineteenth-Century Lithuanian Historicism. Boulder, Colo.: East European Monographs, 2000.
Celtic
Chapman, Malcolm. The Celts: The Construction of a Myth. New York: St. Martin's Press, 1992.
Dietler, Michael. "'Our Ancestors the Gauls': Archaeology, Ethnic Nationalism, and the Manipulation of Celtic Identity in Modern Europe". American Anthropologist, N.S. 96 (1994): 584–605.
James, Simon. The Atlantic Celts: Ancient People or Modern Invention? London: British Museum Press, 1999.
Chinese
Duara, Prasenjit. Rescuing History from the Nation: Questioning Narratives of Modern China. Chicago: University of Chicago Press, 1997.
Israeli
Abu El-Haj, Nadia. Facts on the Ground: Archaeological Practice and Territorial Self-Fashioning in Israeli Society. Chicago: University of Chicago Press, 2001.
Uri Ram, The Future of the Past in Israel – A Sociology of Knowledge Approach, in Benny Morris, Making Israel, the University of Michigan Press, 2007.
Pakistan
Raja, Masood Ashraf. Constructing Pakistan: Foundational Texts and the Rise of Muslim National Identity, 1857–1947. Oxford, 2010.
Spanish
Díaz-Andreu, Margarita 2010. "Nationalism and Archaeology. Spanish Archaeology in the Europe of Nationalities". In Preucel, R. and Mrozowski, S. (eds.), Contemporary Archaeology in Theory and Practice. London, Blackwell: 432–444.
Recent conferences
Nationalism, Historiography and the (Re)construction of the Past, University of Birmingham, 10–12 September 2004
External links
"Encyclopedia of 1848 Revolutions", comprehensive collection of new articles by modern scholars
Nationalism
National mysticism
National histories
Nationalism and archaeology
Epic (genre)
Epic is a narrative genre characterised by its length, scope, and subject matter. The defining characteristics of the genre are mostly derived from its roots in ancient poetry (epic poems such as Homer's Iliad and Odyssey). An epic is not limited to the traditional medium of oral poetry but has expanded into modern media, including film, theater, television shows, novels, and video games.
The use of epic as a genre, specifically for epic poetry, dates back millennia, all the way to the Epic of Gilgamesh, widely agreed to be the first epic. Critique and discourse have arisen continuously over this long period, but attempts to clarify what the core characteristics of the "epic" genre really are began only in the past two centuries, as new mediums of storytelling emerged with developing technologies. Most significantly, the advent of the novel, with classics such as Tolstoy's War and Peace coming to be referred to as "epic novels", caused critics to reconsider what can be called an "epic". With this discussion, epic became a larger overarching genre under which many subgenres, such as epic poetry, epic novels, and epic films, could fall. However, the nebulous definitions assigned even to the long-standing ancient epics, owing to their ubiquitous presence across vastly differing cultures and traditions, remain a topic of discourse for today's literary academics and have made it difficult to arrive at a definitive definition for the umbrella term "epic" as a genre.
Etymology and origin
Epic originally comes from the Latin word epicus, which itself comes from the Ancient Greek adjective ἐπικός (epikos) deriving from ἔπος (epos), meaning "word, story, poem."
Over the years, the word epic has taken on meanings that stray far from its origins. In Ancient Greece, epic was used as a noun: an epic is a long poem, book, movie, etc. that tells the story of a hero's adventures. The earliest epics were long poems performed aloud that told these grandiose stories about heroes. In modern usage, the word has been extended to all kinds of long works that still, at bottom, focus on the values of a given society, and it is often used as an adjective: epic (adjective) describes something very great or large and usually difficult or impressive. In addition, the word can describe any media that has a large scope, that speaks to the human condition, and that is ambitious in its artistic goals. Star Wars, for example, is considered a modern cinematic epic.
History
Ancient sources
Providing a wealth of narrative tropes, the Mesopotamian Epic of Gilgamesh, as the first recorded epic poem, laid the foundation for the entire Western branch of the genre. Both the Old Testament and the New Testament borrow many themes from Gilgamesh, which in turn has been found to draw on older Sumerian tradition. As such, some anthropologists identify Jesus as an embodiment of the same mythical archetype. Similarities include stories of:
the universal flood:
Utnapishtim in the Mesopotamian story
Noah in the Judeo-Christian story
the 'tree of life' and the Garden of Eden:
Enkidu and Shamhat in Gilgamesh.
Adam and Eve in the Bible.
the hero versus a divine assailant:
Gilgamesh vs Enkidu
Jacob vs the angel
Just as it provided a blueprint for biblical traditions, many other pre-Christian mythologies and religious epics have also been shown to be influenced by Gilgamesh, including those of Buddha in Buddhist tradition; Krishna in Hindu tradition; Odysseus, Perseus, and Dionysus in Greek tradition; Ra, Horus, Osiris, and Amenhotep III in Ancient Egyptian tradition; Romulus in Roman tradition; and Zoroaster/Zarathustra and Mithra in Zoroastrian tradition.
The Bible similarly extended its influence into existing epic literature such as the legend of King Arthur, which, as it exists in the modern day, has been interpreted as loosely modeled on the life of Jesus; however, this was not always the case. Arthurian literature was originally based on pre-Christian, Celtic folklore and may have drawn on a British warrior (5th–6th century) who staved off invading Saxons. During the early Christianization of Britain, the Church tolerated new converts observing their older, pagan traditions. As the British Church grew in power, however, events taking place in Europe (such as the Crusades) inspired authors to reshape the traditional legends with Christian undertones. The author Robert de Boron, for instance, retold the legend in French, conceiving the now-iconic sword-in-the-stone episode and expanding the Round Table lore whereby Arthur had twelve knights just as Jesus had twelve disciples.
Modernity
Specific echelons of popular culture draw on a variety of epic narrative tropes. This extends to genres such as heroic fantasy, sword and sorcery, space opera, fantasy adventure, and high fantasy. Some even draw influence from one another, just as the ancient sources did: Frank Herbert's Dune saga, for example, inspired the Star Wars trilogy and Alejandro Jodorowsky's Jodoverse.
Types
Folk Epic
Folk epic can be defined as the earliest form of the epic genre, performed and passed down orally. Folk epics were often sung or narrated in royal courts. These stories recounted particular mythologies and consisted largely of narratives improvised on the spot. Because they were handed down orally, the authors and performers of early folk epics remain unknown; they are presumed to have been mostly common men.
Literary Epic
As the years went by, there arose a need to preserve these folk epics in written form and to attribute their authorship. With this demand, the literary epic emerged. The literary epic shares similarities with the folk epic, but instead of being transmitted orally it is presented in a written format to ensure its survival across the years. Literary epics tend to be more polished, coherent, and compact in structure and style. They are most often based on the author's own ideas, drawn from learned knowledge, and, unlike with folk epics, the author tends to be recognised.
Transition from Folk to Literary Epic
Famous early poems such as the Iliad and the Odyssey show the transition from folk to literary epic. With a need to preserve these famous stories, they were adapted to a written format. Their author, known as Homer, probably never existed as a single individual; the name came to stand for the many generations of performers who told, retold, and shaped the stories of the Iliad and the Odyssey over time.
Elements
Length
It is well established that narrative works of extreme length can be considered "epics". The exact length matters less than the relative length within a given medium: in poetry, for example, the distinction is drawn between the epic and the lyric, the relatively long and short forms respectively. In film, television, or novels, just as in epic poetry, this can manifest as a series or collection of connected individual works, evoking the epic cycle.
Style
Originating once again from the style of the ancient epic, a certain level of seriousness is expected in the prose of something considered an “epic”. Put another way, to achieve the grandiosity typical of an “epic”, distance must be created from the story for the reader via the style of the prose. To further this, the work must be high quality within its medium, again to evoke “epicness”.
Epic hero
Epics are thought of as representative of a culture and a community, and as something which defines a social identity; the epic hero is the individual representative of that identity. The hero is often righteous or moralistically good, especially in the ancient epic, or else stands above all others in some field such as combat or leadership. The hero is the vehicle by which the epic's long, difficult narrative must be carried, and must therefore be a strong, distinct, and memorable character.
Mythos
An epic tends to draw upon existing narratives, specifically within the community or culture it represents. This can be thought of as the "mythos" of the epic. In ancient epics, these were often existing, published works. In the modern context, many narratives that could be considered "epic" have developed their own mythos, as with comic book franchises like DC, science fiction like Star Wars and Star Trek, or fantasy like The Lord of the Rings, which go so far as to develop multiple invented languages for their mythos. Even this created mythos could be argued to draw upon existing narratives, traditions, and motifs present in the cultures and communities represented in these epics.
Themes
The themes within an epic are reflected in the relationship between the epic hero and the epic setting. The concerns of an epic are greater than the individual hero's concerns; the grandiosity extends to the conflict, and the concern of the epic is the concern of the entire world within the narrative.
Genres
There are many genres of epic and various mediums that have adopted such genres, including:
Epic film: encompasses historical epics, religious epics, and western epics, though these have commonly been broken down further into subgenres.
Female epic: examines ways in which female authors have adapted the masculine epic tradition to express their own heroic visions.
Chivalric epics from the Middle Ages.
National epics.
Real-life stories of heroic figures have also been referred to as epic; for example, Ernest Shackleton's exploration adventures in Antarctica.
Epic fantasy
Epic fantasy (or high fantasy) has been described as containing three elements:
it must be a trilogy or longer;
its time-span must encompass years or more; and
it must contain a large back-story or universe setting in which the story takes place.
J. R. R. Tolkien's The Lord of the Rings is an example of epic fantasy, though the genre is not limited to the Western tradition, for example: Arabic epic literature includes One Thousand and One Nights; and Indian epic poetry includes Ramayana and Mahabharata.
References
European values
European values are the norms and values that Europeans are said to have in common, and which transcend national or state identities. In addition to helping promote European integration, this doctrine also provides the basis for analyses that characterise European politics, economics, and society as reflecting a shared identity; it is often associated with human rights, electoral democracy, and rule of law.
Overview
Especially in France, "the European idea" (l'idée d'Europe) is associated with political values derived from the Age of Enlightenment and the republicanism growing out of the French Revolution and the Revolutions of 1848 rather than with personal or individual identity formed by culture or ethnicity (let alone a "pan-European" construct including those areas of the continent never affected by 18th-century rationalism or Republicanism).
The phrase "European values" arises as a political neologism in the 1980s in the context of the project of European integration and the future formation of the European Union. The phrase was popularised by the European Values Study, a long-term research program started in 1981, aiming to document the outlook on "basic human values" in European populations. The project had grown out of a study group on "values and social change in Europe" initiated by Jan Kerkhofs, and Ruud de Moor (Catholic University in Tilburg). The claim that the people of Europe have a distinctive set of political, economic and social norms and values that are gradually replacing national values has also been named "Europeanism" by McCormick (2010).
"European values" were contrasted to non-European values in international relations, especially in the East–West dichotomy, "European values" encompassing individualism and the idea of human rights in contrast to Eastern tendencies of collectivism. However, "European values" were also viewed critically, their "darker" side not necessarily leading to more peaceful outcomes in international relations.
The association of "European values" with European integration as pursued by the European Union came to the fore with the eastern enlargement of the EU in the aftermath of the Cold War.
The Treaty of Lisbon (2007) in article 1A lists a number of "values of the Union",
including "respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights including the rights of persons belonging to minorities", invoking "a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail".
The 2012 Eurobarometer survey reported that 49% of those surveyed described the EU member states as "close" in terms of "shared values" (down from 54% in 2008), 42% described them as "different" (up from 34% in 2008).
Habermas and Derrida (2005)
The philosophers Jürgen Habermas and Jacques Derrida wrote an article for the newspaper Frankfurter Allgemeine Zeitung in which they claimed the birth of a 'European public sphere'.
They argued that new values and habits had given contemporary Europe 'its own face', and saw an opportunity for the construction of a 'core Europe' (excluding Britain and Eastern Europe) that might be a counterweight to the United States.
Attempting to explain what Europe represented, the two philosophers listed six facets of what they described as a common European 'political mentality':
Secularisation.
Trust in the state and scepticism about the achievements of markets.
Realistic expectations about technological progress.
Welfarism.
A low threshold of tolerance for the use of force.
Multilateralism within the framework of a reformed United Nations.
McCormick (2010)
Political scientist John McCormick expands on these ideas, and identifies the following as core attributes of Europeanism:
Secularism is probably the one quality most clearly associated with Europe: while religion continues to grow in most of the rest of the world, in virtually every European country, its role is declining, and it plays an increasingly marginal role in politics and public life, while heavily influencing Europeanist attitudes towards science and towards public policies in which religious belief plays a role.
A rethinking of the meaning of citizenship and patriotism. In regard to the latter, pride in country is being replaced with pride in ideas, otherwise known as constitutional patriotism. Identification with nations or states is being increasingly joined with identification with Europe.
Cosmopolitanism, or an association with universal ideas, and a belief that all Europeans, and possibly even all humans, belong to a single moral community that transcends state boundaries or national identities. The local and the global cannot be separated or divorced.
Communitarianism, which - in contrast to the liberal emphasis on individual rights - supports a balance between individual and community interests, emphasizing the responsibilities of government to all those who live under its jurisdiction. Europeanism argues that society may sometimes be a better judge of what is good for individuals than individuals themselves.
The collective society. Europeanism emphasizes the view that societal divisions will occur in spite of attempts to ensure equal opportunity, and accepts the role of the state as an economic manager and as a guarantor of societal welfare.
Welfarism, or a reference to Europeanist ideas that while individual endeavor is to be welcomed, applauded and rewarded, the community has a responsibility for working to ensure that the playing field is as level as possible, and that opportunity and wealth are equitably distributed. Europeanism emphasizes equality of results over equality of opportunity.
Sustainable development, or the belief that development should be sustainable, meeting the needs of the present without compromising the needs of future generations.
Redefining the family. The place of the European family is changing, with fewer Europeans opting to marry, ages at marriage rising, divorce rates growing, fertility rates declining, more children being born outside marriage, and single-parent households becoming more usual.
Working to live. Post-material Europeans are working fewer hours, are doing more with those hours, and have developed family-friendly laws and policies.
Criminal rights. In matters of criminal justice, Europeanism means a greater emphasis on individual rights, and a preference for resolving disputes through negotiation rather than confrontation through the law.
Multiculturalism, in which Europe has a long and often overlooked tradition arising from the diversity of European societies, and a Europeanist habit of integrating core values and features from new groups with which its dominant cultures have come into contact.
Opposition to capital punishment. This is prohibited in all European Union and Council of Europe member states, and European governments have worked to achieve a global moratorium as a first step towards its worldwide abolition.
Perpetual peace. Where once Europe was a region of near constant war, conflict and political violence, it is today a region of generalised peace, and one which has made much progress along the path to achieving the Kantian condition of perpetual peace. Inter-state war in the region is alleged to be unthinkable and impossible, even during the worst economic or financial troubles.
Multilateralism. Europeanism has eschewed national self-interest in favour of cooperation and consensus, of the promotion of values rather than interests, of reliance on international rules and agreements, and of building coalitions and working through international organisations to resolve problems.
European Union
The European Union declares the fundamental EU values to be the ones "common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail". They are: human dignity, freedom, democracy, equality, rule of law, and human rights. These fundamental values are defined in the Treaty of Lisbon.
See also
Western values (West)
Asian values
Europhile
Pan-European identity
Pro-Europeanism
References
External links
dialogueanduniversalism.eu
Pro-Europeanism
Western culture
Value (ethics)
Utopian socialism
Utopian socialism is the term often used to describe the first current of modern socialism and socialist thought as exemplified by the work of Henri de Saint-Simon, Charles Fourier, Étienne Cabet, and Robert Owen. Utopian socialism is often described as the presentation of visions and outlines for imaginary or futuristic ideal societies, with positive ideals being the main reason for moving society in such a direction. Later socialists and critics of utopian socialism viewed utopian socialism as not being grounded in actual material conditions of existing society. These visions of ideal societies competed with revolutionary and social democratic movements.
The term utopian socialism is most often applied to those socialists who lived in the first quarter of the 19th century by later socialists as a pejorative in order to dismiss their ideas as fanciful and unrealistic. A similar school of thought that emerged in the early 20th century which makes the case for socialism on moral grounds is ethical socialism.
Those anarchists and Marxists who dismissed utopian socialism did so because utopian socialists generally did not believe any form of class struggle or social revolution was necessary for socialism to emerge. Utopian socialists believed that people of all classes could voluntarily adopt their plan for society if it was presented convincingly. Cooperative socialism could be established among like-minded people in small communities that would demonstrate the feasibility of their plan for the broader society. Because of this tendency, utopian socialism was also related to classical radicalism, a left-wing liberal ideology.
Development
The term "utopian socialism" was used by socialist thinkers after the publication of The Communist Manifesto to describe early socialist or quasi-socialist intellectuals who created hypothetical visions of egalitarian, communal, meritocratic, or other notions of perfect societies without considering how these societies could be created or sustained.
In The Poverty of Philosophy, Marx criticized the economic and philosophical arguments of Proudhon set forth in The System of Economic Contradictions, or The Philosophy of Poverty. Marx accused Proudhon of wanting to rise above the bourgeoisie. In the history of Marx's thought and Marxism, this work is pivotal in the distinction between the concepts of utopian socialism and what Marx and the Marxists claimed as scientific socialism. Although utopian socialists shared few political, social, or economic perspectives, Marx and Engels argued that they shared certain intellectual characteristics. In The Communist Manifesto, Marx and Friedrich Engels wrote:

The undeveloped state of the class struggle, as well as their own surroundings, causes Socialists of this kind to consider themselves far superior to all class antagonisms. They want to improve the condition of every member of society, even that of the most favored. Hence, they habitually appeal to society at large, without distinction of class; nay, by preference, to the ruling class. For how can people, when once they understand their system, fail to see it in the best possible plan of the best possible state of society? Hence, they reject all political, and especially all revolutionary, action; they wish to attain their ends by peaceful means, and endeavor, by small experiments, necessarily doomed to failure, and by the force of example, to pave the way for the new social Gospel.

Marx and Engels associated utopian socialism with communitarian socialism, which similarly sees the establishment of small intentional communities as both a strategy for achieving and the final form of a socialist society. Marx and Engels used the term scientific socialism to describe the type of socialism they saw themselves developing. According to Engels, socialism was not "an accidental discovery of this or that ingenious brain, but the necessary outcome of the struggle between two historically developed classes, namely the proletariat and the bourgeoisie. Its task was no longer to manufacture a system of society as perfect as possible, but to examine the historical-economic succession of events from which these classes and their antagonism had of necessity sprung, and to discover in the economic conditions thus created the means of ending the conflict". Critics have argued that utopian socialists who established experimental communities were in fact trying to apply the scientific method to human social organization and were therefore not utopian. On the basis of Karl Popper's definition of science as "the practice of experimentation, of hypothesis and test", Joshua Muravchik argued that "Owen and Fourier and their followers were the real 'scientific socialists.' They hit upon the idea of socialism, and they tested it by attempting to form socialist communities". By contrast, Muravchik further argued that Marx made untestable predictions about the future and that Marx's view that socialism would be created by impersonal historical forces may lead one to conclude that it is unnecessary to strive for socialism because it will happen anyway.
For Marx and Engels, the growth of productive forces such as technology and natural resources is the main driver of social and economic development, and social unrest between employees and employers arises from that growth. These productive forces require a mode of production, or economic system, based on private property rights and on institutions that determine the wage for labor. The capitalist rulers, moreover, control the modes of production. This ideological economic structure allows the bourgeoisie to undermine the workers' sense of their place in society, since the bourgeoisie rule society in their own interests. These rulers exploit the relationship between labor and capital, allowing them to maximize their profit. For Marx and Engels, profiteering through the exploitation of workers is the core problem of capitalism and explains the oppression of the working class. Capitalism would reach a stage at which it could no longer move society forward, sowing the seeds of socialism. As a socialist, Marx theorized the internal failures of capitalism, describing how the tensions between the productive forces and the modes of production would lead to capitalism's downfall through a social revolution. The proletariat would lead that revolution, ending the pre-eminence of the bourgeoisie. Marx's vision of the resulting society was one without classes, in which humankind would be free and labor, pursued in one's own interest, would no longer be alienated.
From the mid-19th century onward, Marxism overtook utopian socialism in terms of intellectual development and number of adherents. At one time almost half the population of the world lived under regimes that claimed to be Marxist. Currents such as Owenism and Fourierism attracted the interest of numerous later authors but failed to compete with the now dominant Marxist and anarchist schools on a political level. It has been noted that they exerted a significant influence on the emergence of new religious movements such as spiritualism and occultism.
Utopian socialists were seen as wanting to expand the principles of the French Revolution in order to create a more rational society. Despite being labeled utopian by later socialists, their aims were not always utopian, and their values often included rigid support for the scientific method and the creation of a society based upon scientific understanding.
In literature and in practice
Edward Bellamy (1850–1898) published Looking Backward in 1888, a utopian romance novel about a future socialist society. In Bellamy's utopia, property was held in common and money replaced with a system of equal credit for all. Valid for a year and non-transferable between individuals, credit expenditure was to be tracked via "credit-cards" (which bear no resemblance to modern credit cards which are tools of debt-finance). Labour was compulsory from age 21 to 40 and organised via various departments of an Industrial Army to which most citizens belonged. Working hours were to be cut drastically due to technological advances (including organisational). People were expected to be motivated by a Religion of Solidarity and criminal behavior was treated as a form of mental illness or "atavism". The book ranked as second or third best seller of its time (after Uncle Tom's Cabin and Ben Hur). In 1897, Bellamy published a sequel entitled Equality as a reply to his critics and which lacked the Industrial Army and other authoritarian aspects.
William Morris (1834–1896) published News from Nowhere in 1890, partly as a response to Bellamy's Looking Backward, which he equated with the socialism of Fabians such as Sidney Webb. Morris' vision of the future socialist society was centred around his concept of useful work as opposed to useless toil and the redemption of human labour. Morris believed that all work should be artistic, in the sense that the worker should find it both pleasurable and an outlet for creativity. Morris' conception of labour thus bears a strong resemblance to Fourier's, while Bellamy's (the reduction of labour) is more akin to that of Saint-Simon or, in some respects, Marx.
The Brotherhood Church in Britain and the Life and Labor Commune in Russia were based on the Christian anarchist ideas of Leo Tolstoy (1828–1910). Pierre-Joseph Proudhon (1809–1865) and Peter Kropotkin (1842–1921) wrote about anarchist forms of socialism in their books. Proudhon wrote What is Property? (1840) and The System of Economic Contradictions, or The Philosophy of Poverty (1847). Kropotkin wrote The Conquest of Bread (1892) and Fields, Factories and Workshops (1912). Many of the anarchist collectives formed in Spain, especially in Aragon and Catalonia, during the Spanish Civil War were based on their ideas.
Many participants in the historical kibbutz movement in Palestine, first under the Ottoman Empire, then in Mandatory Palestine under British administration, and later in Israel, were motivated by utopian socialist ideas. Augustin Souchy (1892–1984) spent most of his life investigating and participating in many kinds of socialist communities. Souchy wrote about his experiences in his autobiography Beware! Anarchist! Behavioral psychologist B. F. Skinner (1904–1990) published Walden Two in 1948. The Twin Oaks Community was originally based on his ideas. Ursula K. Le Guin (1929–2018) wrote about an impoverished anarchist society in her book The Dispossessed, published in 1974, in which the anarchists agree to leave their home planet and colonize a barely habitable moon in order to avoid a bloody revolution.
Related concepts
Some communities of the modern intentional community movement such as kibbutzim could be categorized as utopian socialist. Some religious communities such as the Hutterites are categorized as utopian religious socialists.
Classless modes of production in hunter-gatherer societies are referred to as primitive communism by Marxists to stress their classless nature.
Notable utopian socialists
Edward Bellamy
Alphonse Toussenel
Tommaso Campanella
Étienne Cabet
Icarians
Victor Considérant
David Dale
Charles Fourier
North American Phalanx
The Phalanx
King Gillette
Jean-Baptiste Godin
Laurence Gronlund
Matti Kurikka
John Humphrey Noyes
John Ball
Robert Owen
Philippe Buonarroti
Vaso Pelagić
Henri de Saint-Simon
William Thompson
Wilhelm Weitling
Gerrard Winstanley
Thomas Paine
Notable utopian communities
Utopian communities have existed all over the world. In various forms and locations, they have existed continuously in the United States since the 1730s, beginning with Ephrata Cloister, a religious community in what is now Lancaster County, Pennsylvania.
Owenite communities
New Lanark, Scotland, 1786
New Harmony, Indiana, 1814
Fourierist communities
Brook Farm, Massachusetts, 1841
La Reunion (Dallas), Texas, 1855
North American Phalanx, New Jersey, 1843
Silkville, Kansas, 1870
Utopia, Ohio, 1844
Icarian communities
Corning, Iowa, 1860
Anarchist communities
Home, Washington, 1898
Life and Labor Commune, Moscow Oblast, Soviet Union (Russia), 1921
Socialist Community of Modern Times, New York, 1851
Whiteway Colony, United Kingdom, 1898
Others
Fairhope, Alabama, US, 1894
Kaweah Colony, California, 1886
Llano del Rio, California, 1914
Los Mochis, Sinaloa, Mexico, 1893
Nevada City, Nevada, 1916
New Australia, Paraguay, 1893
Oneida Community, New York, 1848
Ruskin Colony, Tennessee, 1894
Rugby, Tennessee, 1880
Sointula, British Columbia, Canada, 1901
See also
Pre-Marx socialists
Pre-Marxist communism
Christian socialism
Communist utopia
Diggers
Ethical socialism
Futurism
History of socialism
Ideal (ethics)
Intentional communities
Kibbutz
List of anarchist communities
Marxism
Nanosocialism
Post-capitalism
Post-scarcity
Radicalism (historical)
Ricardian socialism
Scientific socialism
Socialism
Socialist economics
Syndicalism
Utopia for Realists
Yellow socialism
Zero waste
References
External links
Be Utopian: Demand the Realistic by Robert Pollin, The Nation, March 9, 2009.
History of social movements
Idealism
Radicalism (historical)
Social theories
Socialism
Syndicalism
Types of socialism
Classical demography
Classical demography refers to the study of human demography in the Classical period. It often focuses on the absolute number of people who were alive in civilizations around the Mediterranean Sea between the Bronze Age and the fall of the Western Roman Empire, but in recent decades historians have been more interested in trying to analyse demographic processes such as the birth and death rates or the sex ratio of ancient populations. The period was characterized by an explosion in population with the rise of the Greek and Roman civilizations followed by a steep decline caused by economic and social disruption, migrations, and a return to primarily subsistence agriculture. Demographic questions play an important role in determining the size and structure of the economy of Ancient Greece and the Roman economy.
Ancient Greece and Greek colonies
From around 800 BC, Greek city-states began colonizing the Mediterranean and Black Sea coasts. Suggested reasons for this dramatic expansion include overpopulation, severe droughts, or an escape for vanquished people (or a combination). The population of the areas of Greek settlement from the western Mediterranean to Asia Minor and the Black Sea in the 4th century BC has been estimated at up to 7.5-10 million.
Greece proper
The geographical definition of Greece has fluctuated over time. While today the ancient kingdom of Macedonia is always considered part of the Greek world, in the Classical period it was a distinct entity, and even though the Macedonian language was part of the Greek dialect continuum, it was not considered a part of Greece by some Athenian writers. Similarly, almost all modern residents of historical Ionia, now part of Turkey, speak Turkish, although from the 1st millennium BC Ionia was densely populated by Greek-speaking people and was an important part of Greek culture.
Estimates of the Greek-speaking population on the coasts and islands of the Aegean Sea during the 5th century BC vary from 800,000 to over 3,000,000. In Athens and Attica in the 5th century BC, there were up to 150,000 Athenians of the citizen class, around 30,000 aliens, and 100,000 slaves, most residing outside the city and port, though precise numbers remain unknown and estimates vary widely.
Other Greek colonization
The ancient Roman province of Cyrenaica in the eastern region of present-day Libya was home to a Greek, Latin and native population in the hundreds of thousands. Originally settled by Greek colonists, five important settlements (Cyrene, Barca, Euesperides, Apollonia, and Tauchira) formed a pentapolis. The fertility of the land, the exportation of silphium, and its location between Carthage and Alexandria made it a magnet for settlement.
Ancient Phoenicia and Phoenician colonies
Phoenicia also established colonies along the Mediterranean, including Carthage.
Demography of the Hellenistic kingdoms
When urbanization began to take place, Pella became the largest city. The Kingdom of Macedonia had 4 million people after the Wars of the Diadochi.
Ptolemaic Egypt
Greek historian Diodorus Siculus estimated that 7,000,000 inhabitants resided in Egypt during his lifetime before its annexation by the Roman Empire. Of this, he states that 300,000 citizens lived within the city of Alexandria. Later historians have queried whether the country could have supported such high numbers.
Seleucid Empire
The population of the vast Seleucid Empire has been estimated at more than 30 million, though others indicate as few as 20 million inhabitants in the whole of Alexander's earlier empire, of which it had been a part.
Demography of the Roman Empire
There are many estimates of the population of the Roman Empire, ranging from 45 million to 120 million, with 59–76 million as the most widely accepted range. The population likely peaked just before the Antonine Plague.
Detailed estimates of the empire's population during the reign of Augustus were made by Beloch in 1886, and for 1 AD and 350 AD by Russell in 1958.
Roman Italy
The Romans carried out a regular census of citizens eligible for military service (Polybius 2.23), but for the population of the rest of Italy at this time we have to rely on a single report of the military strength of Rome's allies in 227 BC, and guess the numbers of those who were opposed to Rome. The citizen count in the second century BC hovered between 250,000 and 325,000, presumably males over the age of 13.
The census of 70/69 BC records 910,000, presumably reflecting the extension of citizenship to the allies after the Social War of 91–88 BC. Still, even if this counts only adult males, it seems like an undercount. For the 1st and 2nd centuries BC, historians have developed two radically different accounts, resting on different interpretations of the figures of 4,036,000 recorded for the census carried out by Augustus in 28 BC, 4,233,000 in 8 BC, 4,937,000 in 14 AD, and almost 6 million during the reign of Claudius, not all of whom lived in Italy; many lived in Spain, Gaul and other parts of the Empire. If these figures represent only adult male citizens (or some subset of adult male citizens, those over the age of 13, as the census traditionally did not count children until they were formally enrolled as citizens early in puberty), then the population of Italy must have been around 10 million, not including slaves and foreigners, which would be a striking, sustained increase despite the Romans' losses in the almost constant wars of the previous two centuries. Others find this entirely incredible, and argue that the census must by then have been counting all citizens, male and female, over the age of 13, in which case the population had declined slightly, something which can readily be attributed to war casualties and to the crisis of the Italian peasantry. The majority of historians favour the latter interpretation as being more demographically plausible, but the issue remains contentious.
Estimates for the population of mainland Italia, including Gallia Cisalpina, at the beginning of the 1st century AD range from 6,000,000 according to Beloch in 1886 and 6,830,000 according to Russell in 1958, to just under 10,000,000 according to Hin in 2007 and 14,000,000 according to Lo Cascio in 2009.
Evidence for the population of Rome itself or of the other cities of Roman Italy is equally scarce. For the capital, estimates have been based on the number of houses listed in 4th-century AD guidebooks, on the size of the built-up area, and on the volume of the water supply, all of which are problematic; the best guess is based on the number of recipients of the grain dole under Augustus, 200,000, implying a population of around 800,000–1,200,000. Italy had numerous urban centres – over 400 are listed by Pliny the Elder – but the majority were small, with populations of just a few thousand. As much as 40% of the population might have lived in towns (25% if the city of Rome is excluded), on the face of it an astonishingly high level of urbanisation for a pre-industrial society. However, studies of later periods would not count the smallest centres as 'urban'; if only cities of 10,000+ are counted, Italy's level of urbanisation was a more realistic (but still impressive) 25% (11% excluding Rome).
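As a rough illustration of the reasoning behind the dole-based figure (the fraction used below is an assumption chosen only to reproduce the range quoted above, not a value taken from any particular source): if the roughly 200,000 grain-dole recipients were adult male citizens, and adult male citizens made up somewhere between a quarter and a sixth of the city's total population once women, children, slaves and foreigners are included, the implied total is

$$
N_{\text{total}} \approx \frac{N_{\text{dole}}}{f}, \qquad N_{\text{dole}} = 200{,}000, \quad f \in \left[\tfrac{1}{6}, \tfrac{1}{4}\right] \;\Rightarrow\; N_{\text{total}} \approx 800{,}000 \text{ to } 1{,}200{,}000.
$$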
See also
Historical demography
Medieval demography
Colonies in antiquity
Roman agriculture
Deforestation during the Roman period
List of states by population in 1 CE
Pre-modern human migration
References
External links
Princeton/Stanford Working Papers in Classics: Walter Scheidel on Roman demography and population history
Demographic history
Classical studies
Environmental sociology
Environmental sociology is the study of interactions between societies and their natural environment. The field emphasizes the social factors that influence environmental resource management and cause environmental issues, the processes by which these environmental problems are socially constructed and defined as social issues, and societal responses to these problems.
Environmental sociology emerged as a subfield of sociology in the late 1970s in response to the emergence of the environmental movement in the 1960s. It represents a relatively new area of inquiry focusing on an extension of earlier sociology through inclusion of physical context as related to social factors.
Definition
Environmental sociology is typically defined as the sociological study of socio-environmental interactions, although this definition immediately presents the problem of integrating human cultures with the rest of the environment. Different aspects of human interaction with the natural environment are studied by environmental sociologists including population and demography, organizations and institutions, science and technology, health and illness, consumption and sustainability practices, culture and identity, and social inequality and environmental justice. Although the focus of the field is the relationship between society and environment in general, environmental sociologists typically place special emphasis on studying the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. In addition, considerable attention is paid to the social processes by which certain environmental conditions become socially defined as problems. Most research in environmental sociology examines contemporary societies.
History
Environmental sociology emerged as a coherent subfield of inquiry after the environmental movement of the 1960s and early 1970s. The works of William R. Catton, Jr. and Riley Dunlap, among others, challenged the constricted anthropocentrism of classical sociology. In the late 1970s, they called for a new holistic, or systems, perspective, which led to a marked shift in the field's focus. Since the 1970s, general sociology has noticeably transformed to include environmental forces in social explanations. Environmental sociology has now solidified as a respected, interdisciplinary field of study in academia.
Concepts
Existential dualism
The duality of the human condition rests with cultural uniqueness and evolutionary traits. From one perspective, humans are embedded in the ecosphere and co-evolved alongside other species. Humans share the same basic ecological dependencies as other inhabitants of nature. From the other perspective, humans are distinguished from other species because of their innovative capacities, distinct cultures and varied institutions. Human creations have the power to independently manipulate, destroy, and transcend the limits of the natural environment.
According to Buttel (2004), there are five major traditions in environmental sociology today: the treadmill of production and other eco-Marxisms, ecological modernization and other sociologies of environmental reform, cultural-environmental sociologies, neo-Malthusianisms, and the new ecological paradigm. In practice, this means five different theories of what to blame for environmental degradation, i.e., what to research or consider as important. These ideas are listed below in the order in which they were invented. Ideas that emerged later built on earlier ideas, and contradicted them.
Neo-Malthusianism
Works such as Hardin's "Tragedy of the Commons" (1968) reformulated Malthusian thought about abstract population increases causing famines into a model of individual selfishness at larger scales causing degradation of common-pool resources such as the air, water, the oceans, or general environmental conditions. Hardin offered privatization of resources or government regulation as solutions to environmental degradation caused by tragedy-of-the-commons conditions. Many other sociologists shared this view of solutions well into the 1970s (see Ophuls). There have been many critiques of this view, particularly from the political scientist Elinor Ostrom and the economists Amartya Sen and Ester Boserup.
Even though much of mainstream journalism treats Malthusianism as the only view of environmentalism, most sociologists would disagree with it, since issues in the social organization of environmental degradation have been shown to cause environmental problems more than abstract population or selfishness per se. For example, Ostrom argues in her book Governing the Commons: The Evolution of Institutions for Collective Action (1990) that instead of self-interest always causing degradation, it can sometimes motivate people to take care of their common-property resources. To do this they must change the basic organizational rules of resource use. Her research provides evidence for sustainable resource management systems around common-pool resources that have lasted for centuries in some areas of the world.
Amartya Sen argues in his book Poverty and Famines: An Essay on Entitlement and Deprivation (1980) that population expansion fails to cause famines or degradation as Malthusians or neo-Malthusians argue. Instead, in documented cases, a lack of political entitlement to resources that exist in abundance causes famines in some populations. He documents how famines can occur even in the midst of plenty or in the context of low populations. He argues that famines (and environmental degradation) would only occur in non-functioning democracies or unrepresentative states.
Ester Boserup argues in her book The Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure (1965), from inductive, empirical case analysis, that Malthus's more deductive conception of a presumed one-to-one relationship between agricultural scale and population is actually reversed. Instead of agricultural technology and scale determining and limiting population, as Malthus attempted to argue, Boserup argued the world is full of cases of the direct opposite: population change drives and expands agricultural methods.
The eco-Marxist scholar Allan Schnaiberg (below) argues against Malthusianism on the grounds that under larger capitalist economies, human degradation moved from localized, population-based degradation to degradation caused organizationally by capitalist political economies. He gives the example of the organized degradation of rainforest areas, where states and capitalists push people off the land before it is degraded by organizational means. Thus, many authors are critical of Malthusianism, from sociologists (Schnaiberg) to economists (Sen and Boserup) to political scientists (Ostrom), and all focus on how a country's social organization of its resource extraction can degrade the environment independently of abstract population.
New Ecological Paradigm
In the 1970s, the New Ecological Paradigm (NEP) conception critiqued the claimed lack of human-environmental focus in the classical sociologists and in the sociological priorities their followers created. This was critiqued as the Human Exemptionalism Paradigm (HEP). The HEP viewpoint claims that human-environmental relationships are unimportant sociologically because humans are 'exempt' from environmental forces via cultural change. This view was shaped by the leading Western worldview of the time and by the desire for sociology to establish itself as an independent discipline against the then-popular racist-biological environmental determinism, in which environment was everything. In this HEP view, human dominance was felt to be justified by the uniqueness of culture, argued to be more adaptable than biological traits. Furthermore, culture also has the capacity to accumulate and innovate, making it capable of solving all natural problems. Therefore, as humans were not conceived of as governed by natural conditions, they were felt to have complete control of their own destiny. Any potential limitation posed by the natural world was felt to be surmountable through human ingenuity. Research proceeded accordingly without environmental analysis.
In the 1970s, the sociologists Riley Dunlap and William R. Catton, Jr. began recognizing the limits of what would be termed the Human Exemptionalism Paradigm. Catton and Dunlap (1978) suggested a new perspective that took environmental variables into full account. They coined a new theoretical outlook for sociology, the New Ecological Paradigm, with assumptions contrary to the HEP.
The NEP recognizes the innovative capacity of humans, but says that humans are still ecologically interdependent as with other species. The NEP notes the power of social and cultural forces but does not profess social determinism. Instead, humans are impacted by the cause, effect, and feedback loops of ecosystems. The Earth has a finite level of natural resources and waste repositories. Thus, the biophysical environment can impose constraints on human activity. They discussed a few harbingers of this NEP in 'hybridized' theorizing about topics that were neither exclusively social nor environmental explanations of environmental conditions. It was additionally a critique of Malthusian views of the 1960s and 1970s.
Dunlap and Catton's work immediately received a critique from Buttel, who argued to the contrary that classical sociological foundations could be found for environmental sociology, particularly in Weber's work on ancient "agrarian civilizations" and in Durkheim's view of the division of labor as built on a material premise of specialization in response to material scarcity. This environmental aspect of Durkheim has been discussed by Schnaiberg (1971) as well.
Treadmill of Production Theory
The Treadmill of Production is a theory coined and popularized by Schnaiberg to account for the increase in U.S. environmental degradation after World War II. At its simplest, the theory states that the more products or commodities are created, the more resources are used, and the greater the environmental impact. The treadmill is a metaphor for being caught in a cycle of continuous growth that never stops, demanding ever more resources and, as a result, causing ever more environmental damage.
Eco-Marxism
In the middle of the HEP/NEP debate, neo-Marxist ideas of conflict sociology were applied to environmental conflicts. Some sociologists wanted to stretch Marxist ideas of social conflict to analyze environmental social movements from a Marxist materialist framework instead of interpreting them as a cultural "New Social Movement" separate from material concerns. "Eco-Marxism" was thus developed by taking neo-Marxist conflict theory's concept of the relative autonomy of the state and applying it to environmental conflict.
Two people following this school were James O'Connor (The Fiscal Crisis of the State, 1971) and later Allan Schnaiberg.
Later, a different trend developed in eco-Marxism via the attention brought to the importance of metabolic analysis in Marx's thought by John Bellamy Foster. Contrary to previous assumptions that classical theorists in sociology had all fallen within a Human Exemptionalist Paradigm, Foster argued that Marx's materialism led him to theorize labor as the metabolic process between humanity and the rest of nature. In the Promethean interpretations of Marx that Foster critiques, there was an assumption that his analysis was very similar to the anthropocentric views critiqued by early environmental sociologists. Instead, Foster argued that Marx himself was concerned about the metabolic rift generated by capitalist society's social metabolism, particularly in industrial agriculture—Marx had identified an "irreparable rift in the interdependent process of social metabolism", created by capitalist agriculture that was destroying the productivity of the land and creating wastes in urban sites that failed to be reintegrated into the land, thus leading simultaneously toward the destruction of urban workers' health. Reviewing the contribution of this thread of eco-Marxism to current environmental sociology, Pellow and Brehm conclude, "The metabolic rift is a productive development in the field because it connects current research to classical theory and links sociology with an interdisciplinary array of scientific literatures focused on ecosystem dynamics."
Foster emphasized that his argument presupposed the "magisterial work" of Paul Burkett, who had developed a closely related "red-green" perspective rooted in a direct examination of Marx's value theory. Burkett and Foster proceeded to write a number of articles together on Marx's ecological conceptions, reflecting their shared perspective.
More recently, Jason W. Moore, inspired by Burkett's value-analytical approach to Marx's ecology and arguing that Foster's work did not in itself go far enough, has sought to integrate the notion of metabolic rift with world systems theory, incorporating Marxian value-related conceptions. For Moore, the modern world-system is a capitalist world-ecology, joining the accumulation of capital, the pursuit of power, and the production of nature in dialectical unity. Central to Moore's perspective is a philosophical re-reading of Marx's value theory, through which abstract social labor and abstract social nature are dialectically bound. Moore argues that the emergent law of value, from the sixteenth century, was evident in the extraordinary shift in the scale, scope, and speed of environmental change. What took premodern civilizations centuries to achieve—such as the deforestation of Europe in the medieval era—capitalism realized in mere decades. This world-historical rupture, argues Moore, can be explained through a law of value that regards labor productivity as the decisive metric of wealth and power in the modern world. From this standpoint, the genius of capitalist development has been to appropriate uncommodified natures—including uncommodified human natures—as a means of advancing labor productivity in the commodity system.
Societal-environment dialectic
In 1975, the highly influential work of Allan Schnaiberg reshaped environmental sociology by proposing a societal-environmental dialectic, again within the 'neo-Marxist' framework of the relative autonomy of the state. This conflictual concept has considerable political salience. First, the economic synthesis states that the desire for economic expansion will prevail over ecological concerns: policymakers will choose to maximize immediate economic growth at the expense of environmental disruption. Second, the managed scarcity synthesis holds that governments will attempt to control only the most dire environmental problems, in order to prevent health and economic disasters; this gives the appearance that governments act more environmentally consciously than they really do. Third, the ecological synthesis describes a hypothetical case in which environmental degradation is so severe that political forces respond with sustainable policies, the driving factor being the economic damage caused by environmental degradation. At this point the economic engine would be based on renewable resources, and production and consumption methods would adhere to sustainability regulations.
These conflict-based syntheses have several potential outcomes. One is that the most powerful economic and political forces will preserve the status quo and bolster their dominance. Historically, this is the most common occurrence. Another potential outcome is for contending powerful parties to fall into a stalemate. Lastly, tumultuous social events may result that redistribute economic and political resources.
Schnaiberg's The Environment: From Surplus to Scarcity (1980) was a major contribution to this theme of a societal-environmental dialectic.
Ecological modernization and reflexive modernization
By the 1980s, a critique of eco-Marxism was emerging, given empirical data from countries (mostly in Western Europe, such as the Netherlands, West Germany and, to a lesser extent, the United Kingdom) that were attempting to wed environmental protection to economic growth rather than treating them as separate goals. This was done through the restructuring of both state and capital. Major proponents of this school of research are Arthur P.J. Mol and Gert Spaargaren. Popular examples of ecological modernization would be "cradle to cradle" production cycles, industrial ecology, large-scale organic agriculture, biomimicry, permaculture, agroecology and certain strands of sustainable development, all implying that economic growth is possible if that growth is well organized with the environment in mind.
Reflexive modernization
From the late 1980s, the German sociologist Ulrich Beck argued across many volumes that our risk society is potentially being transformed by the world's environmental social movements into structural change, without rejecting the benefits of modernization and industrialization. This leads toward a form of 'reflexive modernization': a world of reduced risk and a better modernization process in economics, politics, and scientific practice, as these become less beholden to what Beck calls organized irresponsibility, a cycle in which the state helps create ecological disasters, claims responsibility after an accident, yet corrects nothing, because doing so would challenge the very structure of the economy and the private dominance of development. Beck's idea of reflexive modernization looks forward to how the ecological and social crises of the late 20th century lead toward transformations of the institutions of the whole political and economic system, making them more "rational" with ecology in mind.
Neo-Liberalism
Neo-liberalism encompasses deregulation and free-market capitalism and aims at reducing government spending. Neo-liberal policies greatly affect environmental sociology: because neo-liberalism entails deregulation and less government involvement, it leads to the commodification and privatization of unowned, state-owned, or common-property resources. Diana Liverman and Silvina Vilas note that this results in payments for environmental services; deregulation and cuts in public expenditure for environmental management; the opening up of trade and investment; and the transfer of environmental management to local or nongovernmental institutions. The privatization of these resources has impacts on society, the economy, and the environment. An example that has greatly affected society is the privatization of water.
Social construction of the environment
Additionally in the 1980s, with the rise of postmodernism in the western academy and the appreciation of discourse as a form of power, some sociologists turned to analyzing environmental claims as a form of social construction more than a 'material' requirement. Proponents of this school include John A. Hannigan, particularly in Environmental Sociology: A Social Constructionist Perspective (1995). Hannigan argues for a 'soft constructionism' (environmental problems are materially real though they require social construction to be noticed) over a 'hard constructionism' (the claim that environmental problems are entirely social constructs).
Although there was sometimes acrimonious debate between the constructivist and realist "camps" within environmental sociology in the 1990s, the two sides have found considerable common ground as both increasingly accept that while most environmental problems have a material reality they nonetheless become known only via human processes such as scientific knowledge, activists' efforts, and media attention. In other words, most environmental problems have a real ontological status despite our knowledge/awareness of them stemming from social processes, processes by which various conditions are constructed as problems by scientists, activists, media and other social actors. Correspondingly, environmental problems must all be understood via social processes, despite any material basis they may have external to humans. This interactiveness is now broadly accepted, but many aspects of the debate continue in contemporary research in the field.
Events
Modern environmentalism
United States
The 1960s built strong cultural momentum for environmental causes, giving birth to the modern environmental movement and prompting widespread questioning among sociologists interested in analyzing it. Widespread green consciousness moved vertically within society, resulting in a series of policy changes across many states in the U.S. and Europe in the 1970s. In the United States, this period became known as the "Environmental Decade", with the creation of the United States Environmental Protection Agency and the passing of the Endangered Species Act, the Clean Water Act, and amendments to the Clean Air Act. Earth Day 1970, celebrated by millions of participants, represented the modern age of environmental thought. The environmental movement continued with incidents such as Love Canal.
Historical studies
While the current mode of thought expressed in environmental sociology was not prevalent until the 1970s, its application is now used in the analysis of ancient peoples. Societies including Easter Island, the Anasazi, and the Maya were argued to have ended abruptly, largely due to poor environmental management. This has since been challenged as the exclusive cause (for example, by the biologically trained Jared Diamond in Collapse (2005), and by more recent work on Easter Island). The collapse of the Maya sent a historic message that even advanced cultures are vulnerable to ecological suicide, though Diamond now argues it was less a suicide than a climatic change that outstripped the society's ability to adapt, combined with a lack of elite willingness to adapt even when the signs of approaching ecological problems appeared much earlier. By contrast, Diamond's societal successes include New Guinea and the island of Tikopia, whose inhabitants have lived sustainably for long periods (in New Guinea's case, for some 46,000 years).
John Dryzek et al. argue in Green States and Social Movements: Environmentalism in the United States, United Kingdom, Germany, and Norway (2003) that there may be a common global green environmental social movement, though its specific outcomes vary nationally, falling into four 'ideal types' of interaction between environmental movements and state power. They use as their case studies environmental social movements and state interaction in Norway, the United Kingdom, the United States, and Germany, analyzing the past 30 years of environmentalism and the different outcomes the green movement has had in different state contexts and cultures.
More recently, and roughly in temporal order below, sociologists have produced much longer-term comparative historical studies of environmental degradation. There are two general trends: many employ world-systems theory, analyzing environmental issues over long periods of time and space, while others employ comparative historical methods. Some utilize both methods simultaneously, sometimes without reference to world-systems theory (like Whitaker, see below).
Stephen G. Bunker (d. 2005) and Paul S. Ciccantell collaborated on two books from a world-systems theory view, following commodity chains through the history of the modern world-system and charting the changing importance of space, time, and scale of extraction, and how these variables influenced the shape and location of the main nodes of the world economy over the past 500 years. Their view of the world was grounded in extraction economies and the politics of states that seek to dominate the world's resources and each other by gaining hegemonic control of major resources or by restructuring global flows in them to benefit their own locations.
The three-volume work of environmental world-systems theory by Sing C. Chew analyzed how "Nature and Culture" interact over long periods of time, starting with World Ecological Degradation (2001). In later books, Chew argued that there have been three "Dark Ages" in world environmental history, characterized by periods of state collapse and reorientation in the world economy, in which more localist frameworks of community, economy, and identity came to dominate nature/culture relationships after state-facilitated environmental destruction had delegitimized other forms. In these so-called 'Dark Ages', communities were recreated, novel religions were popularized, and, perhaps most importantly for Chew, the environment had several centuries to recover from previous destruction. Chew argues that modern green politics and bioregionalism are the start of a similar movement in the present day, one potentially leading to wholesale system transformation. We may therefore be on the edge of another global "dark age", which on many levels would be bright rather than dark, since he argues that human community returns along with environmental healing as empires collapse.
More case-oriented studies were conducted by the historical environmental sociologist Mark D. Whitaker, who analyzed China, Japan, and Europe over 2,500 years in his book Ecological Revolution (2009). He argued that instead of environmental movements being "New Social Movements" peculiar to current societies, environmental movements are very old, having been expressed via religious movements in the past (or in the present, as in ecotheology) that focus on material concerns of health, local ecology, and economic protest against state policy and its extractions. He argues that past and present are very similar: that for many millennia we have participated in a tragic common civilizational process of environmental degradation, economic consolidation, and lack of political representation, which has predictable outcomes. He argues that a form of bioregionalism, the bioregional state, is required to deal with the political corruption connected to environmental degradation in present or past societies.
After looking at the world history of environmental degradation from very different methods, both sociologists Sing Chew and Mark D. Whitaker came to similar conclusions and are proponents of (different forms of) bioregionalism.
Related journals
Among the key journals in this field are:
Environmental Sociology
Human Ecology
Human Ecology Review
Nature and Culture
Organization & Environment
Population and Environment
Rural Sociology
Society and Natural Resources
See also
Bibliography of sociology
Ecological anthropology
Ecological design
Ecological economics
Ecological modernization theory
Enactivism
Environmental design
Environmental design and planning
Environmental economics
Environmental policy
Environmental racism
Environmental racism in Europe
Environmental social science
Ethnoecology
Political ecology
Sociology of architecture
Sociology of disaster
Climate change
References
Notes
Dunlap, Riley E., Frederick H. Buttel, Peter Dickens, and August Gijswijt (eds.) 2002. Sociological Theory and the Environment: Classical Foundations, Contemporary Insights (Rowman & Littlefield).
Dunlap, Riley E., and William Michelson (eds.) 2002. Handbook of Environmental Sociology (Greenwood Press).
Freudenburg, William R., and Robert Gramling. 1989. "The Emergence of Environmental Sociology: Contributions of Riley E. Dunlap and William R. Catton, Jr.", Sociological Inquiry 59(4): 439–452
Harper, Charles. 2004. Environment and Society: Human Perspectives on Environmental Issues. Upper Saddle River, New Jersey: Pearson Education, Inc.
Humphrey, Craig R., and Frederick H. Buttel. 1982. Environment, Energy, and Society. Belmont, California: Wadsworth Publishing Company.
Humphrey, Craig R., Tammy L. Lewis and Frederick H. Buttel. 2002. Environment, Energy and Society: A New Synthesis. Belmont, California: Wadsworth/Thompson Learning.
Mehta, Michael, and Eric Ouellet. 1995. Environmental Sociology: Theory and Practice, Toronto: Captus Press.
Redclift, Michael, and Graham Woodgate, eds. 1997. International Handbook of Environmental Sociology (Edgar Elgar).
Schnaiberg, Allan. 1980. The Environment: From Surplus to Scarcity. New York: Oxford University Press.
Further reading
Hannigan, John, "Environmental Sociology", Routledge, 2014.
Zehner, Ozzie, Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism, University of Nebraska Press, 2012. An environmental sociology text forming a critique of energy production and green consumerism.
External links
ASA Section on Environment and Technology
ESA Environment & Society Research Network
ISA Research Committee on Environment and Society (RC24)
Canadian Sociological Association (CSA) Environment Research Cluster
Science in classical antiquity
Science in classical antiquity encompasses inquiries into the workings of the world or universe aimed both at practical goals (e.g., establishing a reliable calendar or determining how to cure a variety of illnesses) and at more abstract investigations belonging to natural philosophy. Classical antiquity is traditionally defined as the period between the 8th century BC (beginning of Archaic Greece) and the 6th century AD (after which there was medieval science). It is typically limited geographically to the Greco-Roman West, Mediterranean basin, and Ancient Near East, thus excluding traditions of science in the ancient world in regions such as China and the Indian subcontinent.
Ideas regarding nature that were theorized during classical antiquity were not limited to science but included myths as well as religion. Those who are now considered as the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled profession (e.g., physicians), or as followers of a religious tradition (e.g., temple healers). Some of the more widely known figures active in this period include Hippocrates, Aristotle, Euclid, Archimedes, Hipparchus, Galen, and Ptolemy. Their contributions and commentaries spread throughout the Eastern, Islamic, and Latin worlds and contributed to the birth of modern science. Their works covered many different categories including mathematics, cosmology, medicine, and physics.
Classical Greece
Knowledge of causes
Inquiry into the nature of things first began out of practical concerns among the ancient Greeks. For instance, an attempt to establish a calendar is first exemplified by the Works and Days of the Greek poet Hesiod, who lived around 700 BC. Hesiod's calendar was meant to regulate seasonal activities by the seasonal appearances and disappearances of the stars, as well as by the phases of the Moon, which were held to be propitious or ominous. Around 450 BC we begin to see compilations of the seasonal appearances and disappearances of the stars in texts known as parapegmata, which were used to regulate the civil calendars of the Greek city-states on the basis of astronomical observations.
Medicine is another area where practically oriented investigations of nature took place during this period. Greek medicine was not the province of a single trained profession and there was no accepted method of qualification or licensing. Physicians in the Hippocratic tradition, temple healers associated with the cult of Asclepius, herb collectors, drug sellers, midwives, and gymnastic trainers all claimed to be qualified as healers in specific contexts and competed actively for patients. This rivalry among competing traditions contributed to an active public debate about the causes and proper treatment of disease, and about the general methodological approaches of their rivals.
An example of the search for causal explanations is found in the Hippocratic text On the Sacred Disease, which deals with the nature of epilepsy. In it, the author attacks his rivals (temple healers) for their ignorance in attributing epilepsy to divine wrath, and for their love of gain. Yet although the author insists that epilepsy has a natural cause, when it comes to explaining what that cause is and what the proper treatment would be, his explanation is as short on specific evidence, and his treatment as vague, as those of his rivals. Nonetheless, observations of natural phenomena continued to be compiled in an effort to determine their causes, as for instance in the works of Aristotle and Theophrastus, who wrote extensively on animals and plants. Theophrastus also produced the first systematic attempt to classify minerals and rocks, a summary of which is found in Pliny's Natural History.
The legacy of Greek science in this era included substantial advances in factual knowledge due to empirical research (e.g., in zoology, botany, mineralogy, and astronomy), an awareness of the importance of certain scientific problems (e.g., the problem of change and its causes), and a recognition of the methodological significance of establishing criteria for truth (e.g., applying mathematics to natural phenomena), despite the lack of universal consensus in any of these areas.
Pre-Socratic philosophy
Materialist philosophers
The earliest Greek philosophers, known as the pre-Socratics, were materialists who provided alternative answers to the same question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?" Although the question is much the same, their answers and their attitude towards the answers is markedly different. As reported by such later writers as Aristotle, their explanations tended to center on the material source of things.
Thales of Miletus (624–546 BC) considered that all things came to be from and find their sustenance in water. Anaximander (610–546 BC) then suggested that things could not come from a specific substance like water, but rather from something he called the "boundless". Exactly what he meant is uncertain but it has been suggested that it was boundless in its quantity, so that creation would not fail; in its qualities, so that it would not be overpowered by its contrary; in time, as it has no beginning or end; and in space, as it encompasses all things. Anaximenes (585–525 BC) returned to a concrete material substance, air, which could be altered by rarefaction and condensation. He adduced common observations (the wine stealer) to demonstrate that air was a substance and a simple experiment (breathing on one's hand) to show that it could be altered by rarefaction and condensation.
Heraclitus of Ephesus (about 535–475 BC) then maintained that change, rather than any substance, was fundamental, although the element fire seemed to play a central role in this process. Finally, Empedocles of Acragas (490–430 BC) seems to have combined the views of his predecessors, asserting that there are four elements (Earth, Water, Air and Fire) which produce change by mixing and separating under the influence of two opposing "forces" that he called Love and Strife.
All these theories imply that matter is a continuous substance. Two Greek philosophers, Leucippus (first half of the 5th century BC) and Democritus came up with the notion that there were two real entities: atoms, which were small indivisible particles of matter, and the void, which was the empty space in which matter was located. Although all the explanations from Thales to Democritus involve matter, what is more important is the fact that these rival explanations suggest an ongoing process of debate in which alternate theories were put forth and criticized.
Xenophanes of Colophon prefigured paleontology and geology as he thought that periodically the earth and sea mix and turn all to mud, citing several fossils of sea creatures that he had seen.
Pythagorean philosophy
The materialist explanations of the origins of the cosmos were attempts at answering the question of how an organized universe came to be; however, the idea of a random assemblage of elements (e.g., fire or water) producing an ordered universe without the existence of some ordering principle remained problematic to some.
One answer to this problem was advanced by the followers of Pythagoras (c. 582–507 BC), who saw number as the fundamental unchanging entity underlying all the structure of the universe. Although it is difficult to separate fact from legend, it appears that some Pythagoreans believed matter to be made up of ordered arrangements of points according to geometrical principles: triangles, squares, rectangles, or other figures. Other Pythagoreans saw the universe arranged on the basis of numbers, ratios, and proportions, much like musical scales. Philolaus, for instance, held that there were ten heavenly bodies because the sum of 1 + 2 + 3 + 4 gives the perfect number 10. Thus, the Pythagoreans were some of the first to apply mathematical principles to explain the rational basis of an orderly universe—an idea that was to have immense consequences in the development of scientific thought.
Hippocrates and the Hippocratic Corpus
According to tradition, the physician Hippocrates of Kos (460–370 BC) is considered the "father of medicine" because he was the first to make use of prognosis and clinical observation, to categorize diseases, and to formulate the ideas behind humoral theory. However, most of the Hippocratic Corpus—a collection of medical theories, practices, and diagnoses—was often attributed to Hippocrates with very little justification, thus making it difficult to know what Hippocrates actually thought, wrote, and did.
Despite their wide variability in terms of style and method, the writings of the Hippocratic Corpus had a significant influence on the medical practice of Islamic and Western medicine for more than a thousand years.
Schools of philosophy
The Academy
The first institution of higher learning in Ancient Greece was founded by Plato (c. 427 – c. 347 BC), an Athenian who—perhaps under Pythagorean influence—appears to have identified the ordering principle of the universe as one based on number and geometry. A later account has it that Plato had inscribed at the entrance to the Academy the words "Let no man ignorant of geometry enter." Although the story is most likely a myth, it nonetheless testifies to Plato's interest in mathematics, which is alluded to in several of his dialogues.
Plato's philosophy maintained that all material things are imperfect reflections of eternal unchanging ideas, just as all mathematical diagrams are reflections of eternal unchanging mathematical truths. Since Plato believed that material things had an inferior kind of reality, he considered that demonstrative knowledge cannot be achieved by looking at the imperfect material world. Truth is to be found through rational argumentation, analogous to the demonstrations of mathematicians. For instance, Plato recommended that astronomy be studied in terms of abstract geometrical models rather than empirical observations, and proposed that leaders be trained in mathematics in preparation for philosophy.
Aristotle (384–322 BC) studied at the Academy but disagreed with Plato in several important respects. While he agreed that truth must be eternal and unchanging, Aristotle maintained that the world is knowable through experience and that we come to know the truth by what we perceive with our senses. For him, directly observable things are real; ideas (or, as he called them, forms) only exist as they express themselves in matter, such as in living things, or in the mind of an observer or artisan.
Aristotle's theory of reality led to a different approach to science. Unlike Plato, Aristotle emphasized observation of the material entities which embody the forms. He also played down (but did not negate) the importance of mathematics in the study of nature. The process of change took precedence over Plato's focus on eternal unchanging ideas in Aristotle's philosophy. Finally, he reduced the importance of Plato's forms to one of four causal factors.
Aristotle thus distinguished between four causes:
the matter of which a thing was made (the material cause).
the form into which it was made (the formal cause; similar to Plato's ideas).
the agent who made the thing (the moving or efficient cause).
the purpose for which the thing was made (the final cause).
Aristotle insisted that scientific knowledge (Ancient Greek: ἐπιστήμη, Latin: ) is knowledge of necessary causes. He and his followers would not accept mere description or prediction as science. Most characteristic of Aristotle's causes is his final cause, the purpose for which a thing is made. He came to this insight through his biological researches, such as those of marine animals at Lesbos, in which he noted that the organs of animals serve a particular function:
The absence of chance and the serving of ends are found in the works of nature especially. And the end for the sake of which a thing has been constructed or has come to be belongs to what is beautiful.
The Lyceum
After Plato's death, Aristotle left the Academy and traveled widely before returning to Athens to found a school adjacent to the Lyceum. As one of the most prolific natural philosophers of Antiquity, Aristotle wrote and lectured on many topics of scientific interest, including biology, meteorology, psychology, logic, and physics. He developed a comprehensive physical theory that was a variation of the classical theory of the elements (earth, water, fire, air, and aether). In his theory, the light elements (fire and air) have a natural tendency to move away from the center of the universe while the heavy elements (earth and water) have a natural tendency to move toward the center of the universe, thereby forming a spherical Earth. Since the celestial bodies (i.e., the planets and stars) were seen to move in circles, he concluded that they must be made of a fifth element, which he called aether.
Aristotle used intuitive ideas to justify his reasoning and could point to the falling stone, rising flames, or pouring water to illustrate his theory. His laws of motion emphasized the common observation that friction was an omnipresent phenomenon: that any body in motion would, unless acted upon, come to rest. He also proposed that heavier objects fall faster, and that voids were impossible.
Aristotle's successor at the Lyceum was Theophrastus, who wrote valuable books describing plant and animal life. His works are regarded as the first to put botany and zoology on a systematic footing. Theophrastus' work on mineralogy provided descriptions of ores and minerals known to the world at that time, making some shrewd observations of their properties. For example, he made the first known reference to the phenomenon that the mineral tourmaline attracts straws and bits of wood when heated, now known to be caused by pyroelectricity. Pliny the Elder makes clear references to his use of the work in his Natural History, while updating and making much new information available on minerals himself. From both these early texts was to emerge the science of mineralogy, and ultimately geology. Both authors describe the sources of the minerals they discuss in the various mines exploited in their time, so their works should be regarded not just as early scientific texts, but also important for the history of engineering and the history of technology.
Other notable peripatetics include Strato, who was a tutor in the court of the Ptolemies and who devoted time to physical research, Eudemus, who edited Aristotle's works and wrote the first books on the history of science, and Demetrius of Phalerum, who governed Athens for a time and later may have helped establish the Library of Alexandria.
Hellenistic age
The military campaigns of Alexander the Great spread Greek thought to Egypt, Asia Minor and Persia, and as far as the Indus River. The resulting migration of many Greek-speaking populations across these territories provided the impetus for the foundation of several seats of learning, such as those in Alexandria, Antioch, and Pergamum.
Hellenistic science differed from Greek science in at least two respects: first, it benefited from the cross-fertilization of Greek ideas with those that had developed in other non-Hellenic civilizations; secondly, to some extent, it was supported by royal patrons in the kingdoms founded by Alexander's successors. The city of Alexandria, in particular, became a major center of scientific research in the 3rd century BC. Two institutions established there during the reigns of Ptolemy I Soter (367–282 BC) and Ptolemy II Philadelphus (309–246 BC) were the Library and the Museum. Unlike Plato's Academy and Aristotle's Lyceum, these institutions were officially supported by the Ptolemies, although the extent of patronage could be precarious depending on the policies of the current ruler.
Hellenistic scholars often employed the principles developed in earlier Greek thought in their scientific investigations, such as the application of mathematics to phenomena or the deliberate collection of empirical data. The assessment of Hellenistic science, however, varies widely. At one extreme is the view of English classical scholar Cornford, who believed that "all the most important and original work was done in the three centuries from 600 to 300 BC". At the other end is the view of Italian physicist and mathematician Lucio Russo, who claims that the scientific method was actually born in the 3rd century BC, only to be largely forgotten during the Roman period and not revived again until the Renaissance.
Technology
A good example of the level of achievement in astronomical knowledge and engineering during the Hellenistic age can be seen in the Antikythera mechanism (150–100 BC). It is a 37-gear mechanical computer which calculated the motions of the Sun, Moon, and possibly the other five planets known to the ancients. The Antikythera mechanism included lunar and solar eclipses predicted on the basis of astronomical periods believed to have been learned from the Babylonians. The device may have been part of an ancient Greek tradition of complex mechanical technology that was later, at least in part, transmitted to the Byzantine and Islamic worlds, where mechanical devices which were complex, albeit simpler than the Antikythera mechanism, were built during the Middle Ages. Fragments of a geared calendar attached to a sundial, from the fifth or sixth century Byzantine Empire, have been found; the calendar may have been used to assist in telling time. A geared calendar similar to the Byzantine device was described by the scientist al-Biruni around 1000, and a surviving 13th-century astrolabe also contains a similar clockwork device.
Medicine
An important school of medicine was formed in Alexandria from the late 4th century to the 2nd century BC. Beginning with Ptolemy I Soter, medical officials were allowed to cut open and examine cadavers for the purposes of learning how human bodies operated. The first use of human bodies for anatomical research occurred in the work of Herophilos (335–280 BC) and Erasistratus (c. 304 – c. 250 BC), who gained permission to perform live dissections, or vivisections, on condemned criminals in Alexandria under the auspices of the Ptolemaic dynasty.
Herophilos developed a body of anatomical knowledge much more informed by the actual structure of the human body than previous works had been. He also reversed the longstanding notion made by Aristotle that the heart was the "seat of intelligence", arguing for the brain instead. Herophilos also wrote on the distinction between veins and arteries, and made many other accurate observations about the structure of the human body, especially the nervous system. Erasistratus differentiated between the function of the sensory and motor nerves, and linked them to the brain. He is credited with one of the first in-depth descriptions of the cerebrum and cerebellum. For their contributions, Herophilos is often called the "father of anatomy", while Erasistratus is regarded by some as the "founder of physiology".
Mathematics
Greek mathematics in the Hellenistic period reached a level of sophistication not matched for several centuries afterward, as much of the work represented by scholars active at this time was of a very advanced level. There is also evidence of combining mathematical knowledge with high levels of technical expertise, as found for instance in the construction of massive building projects (e.g., the Syracusia), or in Eratosthenes' (276–195 BC) measurement of the distance between the Sun and the Earth and the size of the Earth.
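Eratosthenes' measurement of the Earth can be sketched in modern notation using the round figures usually reported for it; the numbers below (the 7.2° shadow angle and the 5,000-stadia distance) are the conventional reconstruction rather than a quotation of his lost treatise, and the length of the stadion he used remains uncertain:

\[
\theta \approx 7.2^{\circ} = \frac{360^{\circ}}{50}, \qquad
C \approx 50 \times 5000\ \text{stadia} = 250{,}000\ \text{stadia},
\]

where \(\theta\) is the angle cast by the noon shadow at Alexandria when the Sun is directly overhead at Syene, and \(C\) is the inferred circumference of the Earth.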
Although few in number, Hellenistic mathematicians actively communicated with each other; publication consisted of passing and copying someone's work among colleagues. Among the most recognizable is the work of Euclid (325–265 BC), who presumably authored a series of books known as the Elements, a canon of geometry and elementary number theory for many centuries. Euclid's Elements served as the main textbook for the teaching of theoretical mathematics until the early 20th century.
Archimedes (287–212 BC), a Sicilian Greek, wrote about a dozen treatises where he communicated many remarkable results, such as the sum of an infinite geometric series in Quadrature of the Parabola, an approximation to the value π in Measurement of the Circle, and a nomenclature to express very large numbers in the Sand Reckoner.
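Two of these results can be stated compactly in modern notation (Archimedes himself worked geometrically, without this symbolism):

\[
1 + \frac{1}{4} + \frac{1}{4^{2}} + \frac{1}{4^{3}} + \cdots = \frac{4}{3},
\]

the series underlying the Quadrature of the Parabola, where the area of a parabolic segment is shown to be \( \tfrac{4}{3} \) that of a certain inscribed triangle; and

\[
3\tfrac{10}{71} < \pi < 3\tfrac{1}{7},
\]

the bounds obtained in Measurement of the Circle by comparing the circle with inscribed and circumscribed regular 96-gons.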
The most characteristic product of Greek mathematics may be the theory of conic sections, which was largely developed in the Hellenistic period, primarily by Apollonius (262–190 BC). The methods used made no explicit use of algebra, nor trigonometry, the latter appearing around the time of Hipparchus (190–120 BC).
Astronomy
Advances in mathematical astronomy also took place during the Hellenistic age. Aristarchus of Samos (310–230 BC) was an ancient Greek astronomer and mathematician who presented the first known heliocentric model that placed the Sun at the center of the known universe, with the Earth revolving around the Sun once a year and rotating about its axis once a day. Aristarchus also estimated the sizes of the Sun and Moon as compared to Earth's size, and the distances to the Sun and Moon. His heliocentric model did not find many adherents in antiquity but did influence some early modern astronomers, such as Nicolaus Copernicus, who was aware of the heliocentric theory of Aristarchus.
In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. It has recently been claimed that a celestial globe based on Hipparchus's star catalog sits atop the broad shoulders of a large 2nd-century Roman statue known as the Farnese Atlas.
Roman era
Science during the Roman Empire was concerned with systematizing knowledge gained in the preceding Hellenistic age and the knowledge from the vast areas the Romans had conquered. It was largely the work of authors active in this period that would be passed on uninterrupted to later civilizations.
Even though science continued under Roman rule, Latin texts were mainly compilations drawing on earlier Greek work. Advanced scientific research and teaching continued to be carried on in Greek. Such Greek and Hellenistic works as survived were preserved and developed later in the Byzantine Empire and then in the Islamic world. Late Roman attempts to translate Greek writings into Latin had limited success (e.g., Boethius), and direct knowledge of most ancient Greek texts only reached western Europe from the 12th century onwards.
Pliny
Pliny the Elder published the Naturalis Historia in 77 AD, one of the most extensive compilations of the natural world to survive into the Middle Ages. Pliny did not simply list materials and objects but also recorded explanations of phenomena. Thus he was the first to correctly describe the origin of amber as the fossilized resin of pine trees, an inference he drew from the observation of insects trapped within some amber samples.
Pliny's work is divided neatly into the organic world of plants and animals, and the realm of inorganic matter, although there are frequent digressions in each section. He is especially interested in not just describing the occurrence of plants, animals and insects, but also their exploitation (or abuse) by man. The description of metals and minerals is particularly detailed, and valuable as being the most extensive compilation still available from the ancient world. Although much of the work was compiled by judicious use of written sources, Pliny gives an eyewitness account of gold mining in Spain, where he was stationed as an officer. Pliny is especially significant because he provides full bibliographic details of the earlier authors and their works he uses and consults. Because his encyclopaedia survived the Dark Ages, we know of these lost works, even if the texts themselves have disappeared. The book was one of the first to be printed in 1489, and became a standard reference work for Renaissance scholars, as well as an inspiration for the development of a scientific and rational approach to the world.
Hero
Hero of Alexandria was a Greco-Egyptian mathematician and engineer who is often considered to be the greatest experimenter of antiquity. Among his most famous inventions was a windwheel, constituting the earliest instance of wind harnessing on land, and a well-recognized description of a steam-powered device called an aeolipile, which was the first-recorded steam engine.
Galen
The greatest medical practitioner and philosopher of this era was Galen, active in the 2nd century AD. Around 100 of his works survive—the most for any ancient Greek author—and fill 22 volumes of modern text. Galen was born in the ancient Greek city of Pergamon (now in Turkey), the son of a successful architect who gave him a liberal education. Galen was instructed in all major philosophical schools (Platonism, Aristotelianism, Stoicism and Epicureanism) until his father, moved by a dream of Asclepius, decided he should study medicine. After his father's death, Galen traveled widely searching for the best doctors in Smyrna, Corinth, and finally Alexandria.
Galen compiled much of the knowledge obtained by his predecessors, and furthered the inquiry into the function of organs by performing dissections and vivisections on Barbary apes, oxen, pigs, and other animals. In 158 AD, Galen served as chief physician to the gladiators in his native Pergamon, and was able to study all kinds of wounds without performing any actual human dissection. It was through his experiments, however, that Galen was able to overturn many long-held beliefs, such as the theory that the arteries contained air, which was then carried to all parts of the body from the heart and the lungs. This belief was based originally on the arteries of dead animals, which appeared to be empty. Galen was able to demonstrate that living arteries contain blood, but his error, which became the established medical orthodoxy for centuries, was to assume that the blood moves back and forth from the heart in an ebb-and-flow motion.
Anatomy was a prominent part of Galen's medical education and was a major source of interest throughout his life. He wrote two great anatomical works, On anatomical procedure and On the uses of the parts of the body of man. The information in these tracts became the foundation of authority for all medical writers and physicians for the next 1300 years until they were challenged by Vesalius and Harvey in the 16th century.
Ptolemy
Claudius Ptolemy (c. 100–170 AD), living in or around Alexandria, carried out a scientific program centered on the writing of about a dozen books on astronomy, astrology, cartography, harmonics, and optics. Despite their severe style and high technicality, a great many of them have survived, in some cases the sole remnants of their kind of writing from antiquity. Two major themes that run through Ptolemy's works are mathematical modelling of physical phenomena and methods of visual representation of physical reality.
Ptolemy's research program involved a combination of theoretical analysis with empirical considerations seen, for instance, in his systematized study of astronomy. Ptolemy's Mathēmatikē Syntaxis, better known as the Almagest, sought to improve on the work of his predecessors by building astronomy not only upon a secure mathematical basis but also by demonstrating the relationship between astronomical observations and the resulting astronomical theory. In his Planetary Hypotheses, Ptolemy describes in detail physical representations of his mathematical models found in the Almagest, presumably for didactic purposes. Likewise, the Geography was concerned with the drawing of accurate maps using astronomical information, at least in principle. Apart from astronomy, both the Harmonics and the Optics contain (in addition to mathematical analyses of sound and sight, respectively) instructions on how to construct and use experimental instruments to corroborate theory.
In retrospect, it is apparent that Ptolemy adjusted some reported measurements to fit his (incorrect) assumption that the angle of refraction is proportional to the angle of incidence.
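In modern notation, the relation implied by the adjusted tables described above would be a simple proportionality, in contrast with the sine law established only in the early modern period (the symbols here are modern conventions, not Ptolemy's):

\[
r = k\,i \qquad \text{(proportional rule, with } k \text{ a constant for a given pair of media)},
\]
\[
\frac{\sin i}{\sin r} = n \qquad \text{(the later sine law of refraction)},
\]

where \(i\) is the angle of incidence, \(r\) the angle of refraction, and \(n\) the relative refractive index.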
Ptolemy's thoroughness and his preoccupation with ease of data presentation (for example, in his widespread use of tables) virtually guaranteed that earlier work on these subjects would be neglected or considered obsolete, to the extent that almost nothing remains of the works Ptolemy often refers to. His astronomical work in particular defined the method and subject matter of future research for centuries, and the Ptolemaic system became the dominant model for the motions of the heavens until the seventeenth century.
See also
Forensics in antiquity
Protoscience
Roman technology
Obsolete scientific theories
Notes
References
Alioto, Anthony M. A History of Western Science. Englewood Cliffs, NJ: Prentice Hall, 1987. .
Barnes, Jonathan. Early Greek Philosophy. Penguin Classics.
Clagett, Marshall. Greek Science in Antiquity. New York: Collier Books, 1955.
Cornford, F. M. Principium Sapientiæ: The Origins of Greek Philosophical Thought. Cambridge: Cambridge Univ. Pr, 1952; Gloucester, Mass.: Peter Smith, 1971.
Lindberg, David C. The Beginnings of Western Science: The European Scientific Tradition in Philosophical, Religious, and Institutional Context, 600 B.C. to A.D. 1450. Chicago: Univ. of Chicago Pr, 1992. .
Lloyd, G. E. R. Aristotle: The Growth and Structure of his Thought. Cambridge: Cambridge Univ. Pr, 1968. .
Lloyd, G. E. R. Early Greek Science: Thales to Aristotle. New York: W.W. Norton & Co, 1970. .
Lloyd, G. E. R. Greek Science after Aristotle. New York: W.W. Norton & Co, 1973. .
Lloyd, G. E. R. Magic Reason and Experience: Studies in the Origin and Development of Greek Science. Cambridge: Cambridge Univ. Pr, 1979.
Pedersen, Olaf. Early Physics and Astronomy: A Historical Introduction. 2nd edition. Cambridge: Cambridge University Press, 1993. .
Stahl, William H. Roman Science: Origins, Development, and Influence to the Later Middle Ages. Madison: Univ. of Wisconsin Pr, 1962.
Thurston, Hugh. Early Astronomy. New York: Springer, 1994. .
Classical antiquity
Ancient Greek science
Ancient Roman science
Militarization
Militarization, or militarisation, is the process by which a society organizes itself for military conflict and violence. It is related to militarism, which is an ideology that reflects the level of militarization of a state. The process of militarization involves many interrelated aspects that encompass all levels of society.
Geopolitical
The perceived level of threat influences what potential for violence or warfare the state must achieve to assure itself an acceptable level of security. When the perceived level of threat is low, as with Canada, a country may have a relatively small military and level of armament. However, in Israel, the threat of attack from neighbouring countries means that the armed forces and defense have a high profile and are given significant funding and personnel.
This threat may involve the:
Balance of power of neighboring states (pre-World War I Europe, for example.)
Terrorism, rogue states, weapons of mass destruction and state terror
Threats to state interests, such as political control of an oil-rich region, or preventing the spread of a conflicting ideology (e.g., the U.S.'s use of CIA interventions to undermine various communist governments)
Political
Militaristic ideas are referred to within civilian contexts. The War on Poverty declared by President Lyndon B. Johnson, and the War on drugs declared by President Richard Nixon, are rhetorical wars. They are not declared against a concrete, military enemy which can be defeated, but are symbolic of the amount of effort, sacrifice, and dedication which needs to be applied to the issue. They may also be a means of consolidating executive power, because war implies emergency powers for the executive branch which are normally reserved for the legislature. As well, politicians have invoked militaristic ideas with rhetorical wars on other social issues. Some governments draw on militaristic imagery when they appoint "task forces" of bureaucrats to address pressing political or social issues.
Economic
military–industrial complex
metropolitan–military complex
Militarization has been used as a strategy for boosting a state's economy by creating jobs and increasing industrial production. This was part of Adolf Hitler's plan to revive the German economy following the devastation of the First World War.
Social
Increasingly, Christian evangelical prayer has taken on militaristic forms and language. Spiritual warfare may involve forms of prayer spoken in militarized discourse. Its adherents, sometimes referring to themselves as "prayer warriors", wage "spiritual battle" on a "prayer battlefield". Spiritual warfare is the latest iteration in a long-standing partnership between religious organizations and militarization, two spheres that religion scholar Elizabeth A. McAlister argues are rarely considered together, although aggressive forms of prayer have long been used to further the aims of expanding Christian influence through a variety of conversion tactics. These tactics have begun being articulated in militaristic imagery, using terms such as "enlist, rally, advance and blitz". Major moments of increased political militarization have occurred concurrently with the growth of prominence of militaristic imagery in many evangelical communities, such as the evangelical engagement in a militarized project of aggressive missionary expansion conducted against the backdrop of the Vietnam War in the 1970s.
Gender
The military also has a role in defining gender identities. War movies (e.g., Rambo) associate the cultural identities of masculinity with warriors. Representations of Vietnam in popular culture display the male body as a weapon of war and contribute to ideals of masculinity in American culture. Military prowess has been crucial to understandings of contemporary masculinity in European and American culture. During World War I, soldiers who experienced shell-shock were seen as failures of masculinity, unable to withstand war as the ultimate task of manliness. The maintenance of military systems relies on ideas about men and manliness as well as ideas about women and femininity, including notions of fallen women and patriotic motherhood.
Women have been mobilized during times of war to perform tasks seen as incompatible with men's roles in combat, including cooking, laundry, and nursing. Women have also been seen as necessary for servicing male soldiers' sexual needs through prostitution. For example, during the Vietnam War, Vietnamese women who worked as prostitutes were allowed on US bases as local national Jabaits.
Civil–military relations
The role and image of the military within a society is another aspect of militarization. At different times and places in history, soldiers have been viewed as respectable, honoured individuals (for example, this was the reputation of the Allied soldiers who liberated the Nazi-occupied Netherlands in WWII, or the view of Americans and Canadians who placed "support our troops" car magnets on their vehicles during the war on terror). Military figures can become heroes (for example, the Finnish people's view of the Finnish sniper nicknamed "White Death", who killed many Soviet invaders). Alternatively, one can brand soldiers as "baby killers" (as a few U.S. anti-war activists did during and after the Vietnam War) or as war criminals (the Nazi leaders and SS units responsible for the Holocaust).
Structural organization is another process of militarization. Before World War II (1939–1945), the United States experienced a post-war reduction of forces after major conflicts, reflecting American suspicion of large standing armies. After World War II, not only was the army maintained, but the National Security Act of 1947 restructured both civilian and military leadership structures, establishing the Department of Defense and the National Security Council. The Act also created permanent intelligence structures (the CIA et al.) within the United States government for the first time, reflecting the civilian government's perception of a need for previously military-based intelligence to be incorporated into the structure of the civilian state.
Ex-soldiers entering business or politics may import military mindsets and jargon into their new environments; hence the popularity of talk of advertising "campaigns", sales "breakthroughs" and election "victories" (even if Pyrrhic ones).
How citizenship is tied to military service plays an important role in establishing civil–military relations. Countries with volunteer-based military service have a different mindset from those with universal conscription. In some countries, men must have served in the military to be considered citizens: compare historical Prussia (where every male was required to serve, and service was a requirement of citizenship) with post-Vietnam America's all-volunteer army. In Israel, military service remains mandatory (as of 2016), producing a society in which almost everyone has served in the armed forces.
Race
Racial interactions between society and the military:
In imperial Germany, military service was a requirement of citizenship, but Jews and other foreigners were not allowed to serve in the military.
During Nazi Germany's Holocaust, SS units committed war crimes and crimes against humanity on a massive scale, including executing millions of civilians.
In the United States, from the Civil War onward, military service was a way for black Americans to serve the country and later to appeal for equal citizenship, notably during World War II. The military was one of the first national institutions to be integrated: in 1948, President Harry S. Truman issued Executive Order 9981 establishing equality within the armed services. The military was also a tool of integration; in 1957, President Dwight Eisenhower sent troops to Little Rock, Arkansas, to desegregate a school following the Supreme Court's Brown v. Board of Education decision of 1954. (See also MacGregor, 1985.)
Improving race relations was seen as a national security issue during the Cold War. Communist propaganda cited American racism as a major flaw, and America wanted to improve its image among third-world countries which might be susceptible to Communism.
Eleanor Roosevelt said that "civil rights [is] an international question ... [that] may decide whether Democracy or Communism wins out in the world", and this sort of false dichotomy persisted throughout the McCarthy era and the Cold War in general.
Class
The military also serves as a means of social restructuring. Lower classes could gain status and mobility within the military, at least after the levée en masse that followed the French Revolution. The officer corps also became open to the middle class, though it was once reserved for the nobility. In Britain, becoming a military officer was an expectation for 'second sons' who stood to gain no inheritance; the role of officer was assumed to maintain their noble class. In the United States, military service has been advertised as a means for lower-class people to receive training and experience that they would not normally receive, propelling them to higher incomes and higher positions in society. Joining the military has enabled many people from lower socioeconomic demographics to receive a college education and training. A number of positions in the military also involve transferable skills that can be used in the regular labor market after an individual is discharged (e.g., pilot, air traffic controller, mechanic).
Police
The militarization of police involves the use of military equipment and tactics by law enforcement officers. This includes the use of armored personnel carriers, assault rifles, submachine guns, flashbang grenades, grenade launchers, sniper rifles, and Special Weapons and Tactics (SWAT) teams. The militarization of law enforcement is also associated with intelligence agency-style information gathering aimed at the public and political activists, and with a more aggressive style of law enforcement. Criminal justice professor Peter Kraska has defined militarization of law enforcement as "the process whereby civilian police increasingly draw from, and pattern themselves around, the tenets of militarism and the military model."
Observers have noted the militarizing of the policing of protests. Since the 1970s, riot police have fired at protesters using guns with rubber bullets or plastic bullets. Tear gas, which was developed for riot control in 1919, is widely used against protesters in the 2000s. The use of tear gas in warfare is prohibited by various international treaties that most states have signed; however, its law enforcement or military use for domestic or non-combat situations is permitted.
Concerns about the militarization of police have been raised by both ends of the political spectrum in the United States, with both the right-of-center/libertarian Cato Institute and the left-of-center American Civil Liberties Union voicing criticisms of the practice. The Fraternal Order of Police has spoken out in favor of equipping law enforcement officers with military equipment, on the grounds that it increases the officers' safety and enables them to protect civilians.
See also
Militarization of space
List of military officers who have led divisions of a civil service
References
Notes
Sources
External links
Army Girls: The Role of Militarization in Women's Lives
Military sociology
Militarism
Structuration theory
The theory of structuration is a social theory of the creation and reproduction of social systems that is based on the analysis of both structure and agents (see structure and agency), without giving primacy to either. Furthermore, in structuration theory, neither micro- nor macro-focused analysis alone is sufficient. The theory was proposed by sociologist Anthony Giddens, most significantly in The Constitution of Society, which examines phenomenology, hermeneutics, and social practices at the inseparable intersection of structures and agents. Its proponents have adopted and expanded this balanced position. Though the theory has received much criticism, it remains a pillar of contemporary sociological theory.
Premises and origins
Sociologist Anthony Giddens adopted a post-empiricist frame for his theory, as he was concerned with the abstract characteristics of social relations. This leaves each level more accessible to analysis via the ontologies which constitute the human social experience: space and time ("and thus, in one sense, 'history'.") His aim was to build a broad social theory which viewed "[t]he basic domain of study of the social sciences... [as] neither the experience of the individual actor, nor the existence of any form of societal totality, but social practices ordered across space and time." His focus on abstract ontology accompanied a general and purposeful neglect of epistemology or detailed research methodology, consistent with other types of pragmatism.
Giddens used concepts from objectivist and subjectivist social theories, discarding objectivism's focus on detached structures, which lacked regard for humanist elements and subjectivism's exclusive attention to individual or group agency without consideration for socio-structural context. He critically engaged classical nineteenth and early twentieth century social theorists such as Auguste Comte, Karl Marx, Max Weber, Émile Durkheim, Alfred Schutz, Robert K. Merton, Erving Goffman, and Jürgen Habermas. Thus, in many ways, structuration was "an exercise in clarification of logical issues." Structuration drew on other fields, as well: "He also wanted to bring in from other disciplines novel aspects of ontology that he felt had been neglected by social theorists working in the domains that most interested him. Thus, for example, he enlisted the aid of geographers, historians and philosophers in bringing notions of time and space into the central heartlands of social theory." Giddens hoped that a subject-wide "coming together" might occur which would involve greater cross-disciplinary dialogue and cooperation, especially between anthropologists, social scientists and sociologists of all types, historians, geographers, and even novelists. Believing that "literary style matters", he held that social scientists are communicators who share frames of meaning across cultural contexts through their work by utilising "the same sources of description (mutual knowledge) as novelists or others who write fictional accounts of social life."
Structuration differs from its historical sources. Unlike structuralism it sees the reproduction of social systems not "as a mechanical outcome, [but] rather ... as an active constituting process, accomplished by, and consisting in, the doings of active subjects." Unlike Althusser's concept of agents as "bearers" of structures, structuration theory sees them as active participants. Unlike the philosophy of action and other forms of interpretative sociology, structuration focuses on structure rather than production exclusively. Unlike Saussure's production of an utterance, structuration sees language as a tool from which to view society, not as the constitution of society—parting with structural linguists such as Claude Lévi-Strauss and generative grammar theorists such as Noam Chomsky. Unlike post-structuralist theory, which put similar focus on the effects of time and space, structuration does not recognise movement, change and transition. Unlike functionalism, in which structures and their virtual synonyms, "systems", comprise organisations, structuration sees structures and systems as separate concepts. Unlike Marxism, structuration avoids an overly restrictive concept of "society" and Marxism's reliance on a universal "motor of history" (i.e. class conflict), its theories of societal "adaptation", and its insistence on the working class as universal class and socialism as the ultimate form of modern society. Finally, "structuration theory cannot be expected to furnish the moral guarantees that critical theorists sometimes purport to offer."
Main ideas
Duality of structure
Giddens observed that in social analysis, the term structure referred generally to "rules and resources" and more specifically to "the structuring properties allowing the 'binding' of time-space in social systems". These properties make it possible for similar social practices to exist across time and space and lend them "systemic" form. Agents—groups or individuals—draw upon these structures to perform social actions through embedded memory, called memory traces. Memory traces are thus the vehicle through which social actions are carried out. Structure is also, however, the result of these social practices. Thus, Giddens conceives of the duality of structure as being:
Giddens uses "the duality of structure" (i.e. material/ideational, micro/macro) to emphasize structure's nature as both medium and outcome. Structures exist both internally within agents as memory traces that are the product of phenomenological and hermeneutic inheritance and externally as the manifestation of social actions. Similarly, social structures contain agents and/or are the product of past actions of agents. Giddens holds this duality, alongside "structure" and "system," in addition to the concept of recursiveness, as the core of structuration theory. His theory has been adopted by those with structuralist inclinations, but who wish to situate such structures in human practice rather than to reify them as an ideal type or material property. (This is different, for example, from actor–network theory which appears to grant a certain autonomy to technical artifacts.)
Social systems have patterns of social relation that change over time; the changing nature of space and time determines the interaction of social relations and therefore structure. Hitherto, social structures or models were either taken to be beyond the realm of human control (the positivistic approach) or held to be created by action (the interpretivist approach). The duality of structure emphasizes that these are different sides of the same central question of how social order is created.
Gregor McLennan suggested renaming this process "the duality of structure and agency", since both aspects are involved in using and producing social actions.
Cycle of structuration
The duality of structure is essentially a feedback–feedforward process whereby agents and structures mutually enact social systems, and social systems in turn become part of that duality. Structuration thus recognizes a social cycle. In examining social systems, structuration theory examines structure, modality, and interaction. The "modality" (discussed below) of a structural system is the means by which structures are translated into actions.
Interaction
Interaction is the agent's activity within the social system, space and time. "It can be understood as the fitful yet routinized occurrence of encounters, fading away in time and space, yet constantly reconstituted within different areas of time-space." Rules can affect interaction, as originally suggested by Goffman. "Frames" are "clusters of rules which help to constitute and regulate activities, defining them as activities of a certain sort and as subject to a given range of sanctions." Frames are necessary for agents to feel "ontological security", the trust that everyday actions have some degree of predictability. Whenever individuals interact in a specific context they address—without any difficulty and in many cases without conscious acknowledgement—the question: "What is going on here?" Framing is the practice by which agents make sense of what they are doing.
Routinization
Structuration theory is centrally concerned with order as "the transcending of time and space in human social relationships". Institutionalized action and routinization are foundational in the establishment of social order and the reproduction of social systems. Routine persists in society, even during social and political revolutions, where daily life is greatly deformed, "as Bettelheim demonstrates so well, routines, including those of an obnoxious sort, are re-established." Routine interactions become institutionalized features of social systems via tradition, custom and/or habit, but this is no easy societal task and it "is a major error to suppose that these phenomena need no explanation. On the contrary, as Goffman (together with ethnomethodology) has helped to demonstrate, the routinized character of most social activity is something that has to be 'worked at' continually by those who sustain it in their day-to-day conduct." Therefore, routinized social practices do not stem from coincidence, "but the skilled accomplishments of knowledgeable agents."
Trust and tact are essential for the existence of a "basic security system, the sustaining (in praxis) of a sense of ontological security, and [thus] the routine nature of social reproduction which agents skilfully organize. The monitoring of the body, the control and use of face in 'face work'—these are fundamental to social integration in time and space."
Explanation
Thus, even the smallest social actions contribute to the alteration or reproduction of social systems. Social stability and order are not permanent; agents always possess a dialectic of control (discussed below) which allows them to break away from normative actions. Depending on the social factors present, agents may cause shifts in social structure.
The cycle of structuration is not a defined sequence; it is rarely a direct succession of causal events. Structures and agents are both internal and external to each other, mingling, interrupting, and continually changing each other as feedbacks and feedforwards occur. Giddens stated, "The degree of "systemness" is very variable. ...I take it to be one of the main features of structuration theory that the extension and 'closure' of societies across space and time is regarded as problematic."
The use of "patriot" in political speech reflects this mingling, borrowing from and contributing to nationalistic norms and supports structures such as a police state, from which it in turn gains impact.
Structure and society
Structures are the "rules and resources" embedded in agents' memory traces. Agents call upon their memory traces of which they are "knowledgeable" to perform social actions. "Knowledgeability" refers to "what agents know about what they do, and why they do it." Giddens divides memory traces (structures-within-knowledgeability) into three types:
Domination (power): Giddens also uses "resources" to refer to this type. "Authoritative resources" allow agents to control persons, whereas "allocative resources" allow agents to control material objects.
Signification (meaning): Giddens suggests that meaning is inferred through structures. Agents use existing experience to infer meaning. For example, the meaning of living with mental illness comes from contextualized experiences.
Legitimation (norms): Giddens sometimes uses "rules" to refer to either signification or legitimation. An agent draws upon these stocks of knowledge via memory to inform him or herself about the external context, conditions, and potential results of an action.
When an agent uses these structures for social interactions, they are called modalities and present themselves in the forms of facility (domination), interpretive scheme/communication (signification) and norms/sanctions (legitimation).
Thus, he distinguishes between overall "structures-within-knowledgeability" and the more limited and task-specific "modalities" on which these agents subsequently draw when they interact.
The duality of structures means that structures enter "simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution." "Structures exist paradigmatically, as an absent set of differences, temporally "present" only in their instantiation, in the constituting moments of social systems." Giddens draws upon structuralism and post-structuralism in theorizing that structures and their meaning are understood by their differences.
Agents and society
Giddens' agents follow previous psychoanalysis work done by Sigmund Freud and others. Agency, as Giddens calls it, is human action. To be human is to be an agent (not all agents are human). Agency is critical to both the reproduction and the transformation of society. Another way to explain this concept is by what Giddens calls the "reflexive monitoring of actions." "Reflexive monitoring" refers to agents' ability to monitor their actions and those actions' settings and contexts. Monitoring is an essential characteristic of agency. Agents subsequently "rationalize," or evaluate, the success of those efforts. All humans engage in this process, and expect the same from others. Through action, agents produce structures; through reflexive monitoring and rationalization, they transform them. To act, agents must be motivated, must be knowledgeable, must be able to rationalize the action, and must reflexively monitor the action.
Agents, while bounded in structure, draw upon their knowledge of that structural context when they act. However, actions are constrained by agents' inherent capabilities and their understandings of available actions and external limitations. Practical consciousness and discursive consciousness inform these abilities. Practical consciousness is the knowledgeability that an agent brings to the tasks required by everyday life, which is so integrated as to be hardly noticed. Reflexive monitoring occurs at the level of practical consciousness. Discursive consciousness is the ability to verbally express knowledge. Alongside practical and discursive consciousness, Giddens recognizes actors as having reflexive, contextual knowledge, and that habitual, widespread use of knowledgeability makes structures become institutionalized.
Agents rationalize, and in doing so, link the agent and the agent's knowledgeability. Agents must coordinate ongoing projects, goals, and contexts while performing actions. This coordination is called reflexive monitoring and is connected to ethnomethodology's emphasis on agents' intrinsic sense of accountability.
The factors that can enable or constrain an agent, as well as how an agent uses structures, are known as capability constraints. These include age, cognitive/physical limits on performing multiple tasks at once, the physical impossibility of being in multiple places at once, available time, and the relationship between movement in space and movement in time.
Location offers a particular type of capability constraint. Examples include:
Locale
Regionalization: political or geographical zones, or rooms in a building
Presence: Do other actors participate in the action? (see co-presence); and more specifically
Physical presence: Are other actors physically nearby?
Agents are always able to engage in a dialectic of control, able to "intervene in the world or to refrain from such intervention, with the effect of influencing a specific process or state of affairs." In essence, agents experience inherent and contrasting amounts of autonomy and dependence; agents can always either act or not.
Methodology
Structuration theory is relevant to research, but does not prescribe a methodology and its use in research has been problematic. Giddens intended his theory to be abstract and theoretical, informing the hermeneutic aspects of research rather than guiding practice. Giddens wrote that structuration theory "establishes the internal logical coherence of concepts within a theoretical network." Giddens criticized many researchers who used structuration theory for empirical research, critiquing their "en bloc" use of the theory's abstract concepts in a burdensome way. "The works applying concepts from the logical framework of structuration theory that Giddens approved of were those that used them more selectively, 'in a spare and critical fashion.'" Giddens and followers used structuration theory more as "a sensitizing device".
Structuration theory allows researchers to focus on any structure or concept individually or in combination. In this way, structuration theory prioritizes ontology over epistemology. In his own work, Giddens focuses on production and reproduction of social practices in some context. He looked for stasis and change, agent expectations, relative degrees of routine, tradition, behavior, and creative, skillful, and strategic thought simultaneously. He examined spatial organization, intended and unintended consequences, skilled and knowledgeable agents, discursive and tacit knowledge, dialectic of control, actions with motivational content, and constraints. Structuration theorists conduct analytical research of social relations, rather than organically discovering them, since they use structuration theory to reveal specific research questions, though that technique has been criticized as cherry-picking.
Giddens preferred strategic conduct analysis, which focuses on contextually situated actions. It employs detailed accounts of agents' knowledgeability, motivation, and the dialectic of control.
Criticisms and additions
Though structuration theory has received critical expansion since its origination, Giddens' concepts remained pivotal for later extension of the theory, especially the duality of structure.
Strong structuration
Rob Stones argued that many aspects of Giddens' original theory had little place in its modern manifestation. Stones focused on clarifying its scope, reconfiguring some concepts and inserting new ones, and refining methodology and research orientations. Strong structuration:
Places its ontology more in situ than abstractly.
Introduces the quadripartite cycle, which details the elements in the duality of structure. These are:
external structures as conditions of action;
internal structures within the agent;
active agency, "including a range of aspects involved when agents draw upon internal structures in producing practical action"; and
outcomes (as both structures and events).
Increases attention to epistemology and methodology. Ontology supports epistemology and methodology by prioritising:
the question-at-hand;
appropriate forms of methodological bracketing;
distinct methodological steps in research; and
"[t]he specific combinations of all the above in composite forms of research."
Discovers the "meso-level of ontology between the abstract, philosophical level of ontology and the in-situ, ontic level." Strong structuration allows varied abstract ontological concepts in experiential conditions.
Focuses on the meso-level at the temporal and spatial scale.
Conceptualises independent causal forces and irresistible causal forces, which take into account how external structures, internal structures, and active agency affect agent choices (or lack of them). "Irresistible forces" are the connected concepts of a horizon of action with a set of "actions-in-hand" and a hierarchical ordering of purposes and concerns. An agent is affected by external influences. This aspect of strong structuration helps reconcile an agent's dialectic of control and his/her more constrained set of "real choices."
Post-structuration and dualism
Margaret Archer objected to the inseparability of structure and agency in structuration theory. She proposed a notion of dualism rather than "duality of structure". She primarily examined structural frameworks and the action within the limits allowed by those conditions. She combined realist ontology and called her methodology analytical dualism. Archer maintained that structure precedes agency in social structure reproduction and analytical importance, and that they should be analysed separately. She emphasised the importance of temporality in social analysis, dividing it into four stages: structural conditioning, social interaction, its immediate outcome and structural elaboration. Thus her analysis considered embedded "structural conditions, emergent causal powers and properties, social interactions between agents, and subsequent structural changes or reproductions arising from the latter." Archer criticised structuration theory for denying time and place because of the inseparability between structure and agency.
Nicos Mouzelis reconstructed Giddens' original theories. Mouzelis kept Giddens' original formulation of structure as "rules and resources." However, he was considered a dualist, because he argued for dualism to be as important in social analysis as the duality of structure. Mouzelis reexamined human social action at the "syntagmatic" (syntactic) level. He claimed that the duality of structure does not account for all types of social relationships. Duality of structure works when agents do not question or disrupt rules, and interaction resembles "natural/performative" actions with a practical orientation. However, in other contexts, the relationship between structure and agency can resemble dualism more than duality, such as systems that are the result of powerful agents. In these situations, rules are not viewed as resources, but are in states of transition or redefinition, where actions are seen from a "strategic/monitoring orientation." In this orientation, dualism shows the distance between agents and structures. He called these situations "syntagmatic duality". For example, a professor can change the class he or she teaches, but has little capability to change the larger university structure. "In that case, syntagmatic duality gives way to syntagmatic dualism." This implies that systems are the outcome, but not the medium, of social actions. Mouzelis also criticised Giddens' lack of consideration for social hierarchies.
John Parker built on Archer and Mouzelis's support for dualism to propose a theoretical reclamation of historical sociology and macro-structures using concrete historical cases, claiming that dualism better explained the dynamics of social structures. Equally, Robert Archer developed and applied analytical dualism in his critical analysis of the impact of New Managerialism on education policy in England and Wales during the 1990s and organization theory.
John B. Thompson
Though he agreed with the soundness and overall purposes of Giddens' most expansive structuration concepts (i.e., against dualism and for the study of structure in concert with agency), John B. Thompson ("a close friend and colleague of Giddens at Cambridge University") wrote one of the most widely cited critiques of structuration theory. His central argument was that it needed to be more specific and more consistent both internally and with conventional social structure theory. Thompson focused on problematic aspects of Giddens' concept of structure as "rules and resources," focusing on "rules". He argued that Giddens' concept of rule was too broad.
Thompson claimed that Giddens presupposed a criterion of importance in contending that rules are a generalizable enough tool to apply to every aspect of human action and interaction; "on the other hand, Giddens is well aware that rules, or some kinds or aspects of rules, are much more important than others for the analysis of, for example, the social structure of capitalist societies." He found the term to be imprecise and to not designate which rules are more relevant for which social structures.
Thompson used the example of linguistic analysis to point out the need for a prior framework to enable analysis of, for example, the social structure of an entire nation. While semantic rules may be relevant to social structure, to study them "presupposes some structural points of reference which are not themselves rules, with regard to which these semantic rules are differentiated" according to class, sex, region and so on. He called this structural differentiation.
Rules differently affect variously situated individuals. Thompson gave the example of a private school which restricts enrollment and thus participation. Thus rules, in this case restrictions, "operate differentially, affecting unevenly various groups of individuals whose categorization depends on certain assumptions about social structures." The isolated analysis of rules does not incorporate differences among agents.
Thompson claimed that Giddens offered no way of formulating structural identity. Some "rules" are better conceived of as broad inherent elements that define a structure's identity (e.g., Henry Ford and Harold Macmillan are "capitalistic"). These agents may differ, but have important traits in common due to their "capitalistic" identity. Thompson theorized that these traits were not rules in the sense that a manager could draw upon a "rule" to fire a tardy employee; rather, they were conditions which limit the kinds of rules that are possible and which thereby delimit the scope for institutional variation. It is necessary to outline the broader social system to be able to analyze agents, actors, and rules within that system.
Thus Thompson concluded that Giddens' use of the term "rules" is problematic. "Structure" is similarly objectionable: "But to adhere to this conception of structure, while at the same time acknowledging the need for the study of 'structural principles,' 'structural sets' and 'axes of structuration,' is simply a recipe for conceptual confusion."
Thompson proposed several amendments. He requested sharper differentiation between the reproduction of institutions and the reproduction of social structure. He proposed an altered version of the structuration cycle. He defined "institutions" as "characterized by rules, regulations and conventions of various sorts, by differing kinds and quantities of resources and by hierarchical power relations between the occupants of institutional positions." Agents acting within institutions and conforming to institutional rules and regulations or using institutionally endowed power reproduce the institution. "If, in so doing, the institutions continue to satisfy certain structural conditions, both in the sense of conditions which delimit the scope for institutional variation and the conditions which underlie the operation of structural differentiation, then the agents may be said to reproduce social structure."
Thompson also proposed adding a range of alternatives to Giddens' conception of constraints on human action. He pointed out the paradoxical relationship between Giddens' "dialectic of control" and his acknowledgement that constraints may leave an agent with no choice. He demanded that Giddens better show how wants and desires relate to choice.
Giddens replied that a structural principle is not equivalent with rules, and pointed to his definition from A Contemporary Critique of Historical Materialism: "Structural principles are principles of organisation implicated in those practices most "deeply" (in time) and "pervasively" (in space) sedimented in society", and described structuration as a "mode of institutional articulation" with emphasis on the relationship between time and space and a host of institutional orderings including, but not limited to, rules.
Ultimately, Thompson concluded that the concept of structure as "rules and resources" in an elemental and ontological way resulted in conceptual confusion. Many theorists supported Thompson's argument that an analysis "based on structuration's ontology of structures as norms, interpretative schemes and power resources radically limits itself if it does not frame and locate itself within a more broadly conceived notion of social structures."
Change
Sewell provided a useful summary that included one of the theory's less specified aspects: the question "Why are structural transformations possible?" He claimed that Giddens overrelied on rules and modified Giddens' argument by re-defining "resources" as the embodiment of cultural schemas. He argued that change arises from the multiplicity of structures, the transposable nature of schemas, the unpredictability of resource accumulation, the polysemy of resources and the intersection of structures.
The existence of multiple structures implies that the knowledgeable agents whose actions produce systems are capable of applying different schemas to contexts with differing resources, contrary to the conception of a universal habitus (learned dispositions, skills and ways of acting). He wrote that "Societies are based on practices that derived from many distinct structures, which exist at different levels, operate in different modalities, and are themselves based on widely varying types and quantities of resources. ...It is never true that all of them are homologous."
Originally from Bourdieu, transposable schemas can be "applied to a wide and not fully predictable range of cases outside the context in which they were initially learned." That capacity "is inherent in the knowledge of cultural schemas that characterizes all minimally competent members of society."
Agents may modify schemas even though their use does not predictably accumulate resources. For example, the effect of a joke is never quite certain, but a comedian may alter it based on the amount of laughter it garners regardless of this variability.
Agents may interpret a particular resource according to different schemas. E.g., a commander could attribute his wealth to military prowess, while others could see it as a blessing from the gods or a coincidental initial advantage.
Structures often overlap, confusing interpretation (e.g., the structure of capitalist society includes production from both private property and worker solidarity).
Technology
This theory was adapted and augmented by researchers interested in the relationship between technology and social structures, such as information technology in organizations. DeSanctis and Poole proposed an "adaptive structuration theory" with respect to the emergence and use of group decision support systems. In particular, they chose Giddens' notion of modalities to consider how technology is used with respect to its "spirit". "Appropriations" are the immediate, visible actions that reveal deeper structuration processes and are enacted with "moves". Appropriations may be faithful or unfaithful, may be instrumental, and may be used with various attitudes.
Wanda Orlikowski applied the duality of structure to technology: "The duality of technology identifies prior views of technology as either objective force or as socially constructed product–as a false dichotomy." She compared this to previous models (the technological imperative, strategic choice, and technology as a trigger) and considered the importance of meaning, power, norms, and interpretive flexibility. Orlikowski later replaced the notion of embedded properties with that of enactment (use). The "practice lens" shows how people enact structures which shape their use of the technology that they employ in their practices. While Orlikowski's work focused on corporations, it is equally applicable to the technology cultures that have emerged in smaller community-based organizations, and can be adapted through the gender sensitivity lens in approaches to technology governance.
Workman, Ford and Allen rearticulated structuration theory as structuration agency theory for modeling socio-biologically inspired structuration in security software. Software agents join humans to engage in social actions of information exchange, giving and receiving instructions, responding to other agents, and pursuing goals individually or jointly.
Four-flows-model
The four flows model of organizing is grounded in structuration theory. McPhee and Pamela Zaug (2001) identify four communication flows that collectively perform key organizational functions and distinguish organizations from less formal social groups:
Membership negotiation—socialization, but also identification and self-positioning;
Organizational self-structuring—reflexive, especially managerial, structuring and control activities;
Activity coordination—Interacting to align or adjust local work activities;
Institutional positioning in the social order of institutions—mostly external communication to gain recognition and inclusion in the web of social transactions.
Group communication
Poole, Seibold, and McPhee wrote that "group structuration theory" provides "a theory of group interaction commensurate with the complexities of the phenomenon."
The theory attempts to integrate macrosocial theories with the study of individuals and small groups, and to avoid the binary categorization of groups as either "stable" or "emergent".
Waldeck et al. concluded that the theory needs to better predict outcomes, rather than merely explaining them. Decision rules support decision-making, which produces a communication pattern that can be directly observable. Research has not yet examined the "rational" function of group communication and decision-making (i.e., how well it achieves goals), nor structural production or constraints. Researchers must empirically demonstrate the recursivity of action and structure, examine how structures stabilize and change over time due to group communication, and may want to integrate argumentation research.
Public relations
Falkheimer claimed that integrating structuration theory into public relations (PR) strategies could result in a less agency-driven business, return theoretical focus to the role of power structures in PR, and reject massive PR campaigns in favor of a more "holistic understanding of how PR may be used in local contexts both as a reproductive and [transformational] social instrument." Falkheimer portrayed PR as a method of communication and action whereby social systems emerge and reproduce. Structuration theory reinvigorates the study of space and time in PR theory. Applied structuration theory may emphasize community-based approaches, storytelling, rituals, and informal communication systems. Moreover, structuration theory integrates all organizational members in PR actions, integrating PR into all organizational levels rather than confining it to a separate office. Finally, structuration reveals interesting ethical considerations relating to whether a social system should transform.
COVID-19 and structure
The COVID-19 pandemic has had a huge impact on society since it began. When investigating those impacts, many researchers have found structuration theory helpful in explaining the resulting changes in society. Oliver (2021) used "a theoretical framework derived from Giddens' structuration theory to analyze societal information cultures, concentrating on information and health literacy perspectives." This framework focused on "the three modalities of structuration, i.e., interpretive schemes, resources, and norms", which in Oliver's research took the form of "resources", "information freedom" and "formal and informal concepts and rules of behavior". After analyzing the frameworks of four countries, Oliver and his research team concluded: "All our case studies show a number of competing information sources – from traditional media and official websites to various social media platforms used by both the government and the general public – that complicate the information landscape in which we all try to navigate what we know, and what we do not yet know, about the pandemic."
In research on how remote work environments in South Africa changed during COVID-19, Walter (2020) applied structuration theory because "it addresses the relationship between actors (or persons) and social structures and how these social structures ultimately realign and conform to the actions of actors", adding that "these social structures from Giddens's structuration theory assist people to navigate through everyday life."
Zvokuomba (2021) also used Giddens' theory of structuration "to reflect at the various levels of fragilities within the context of COVID-19 lockdown measures." One example from the research is that "theory of structuration and agency point to situations when individuals and groups of people either in compliance or defiance of community norms and rules of survival adopt certain practices." During the pandemic, the research pointed out, "reverting to the traditional midwifery became a pragmatic approach to a problem", since "as medical centers were partly closed, with no basic medication and health staff, the only alternative was [to] seek traditional medical services."
Business and structure
Structuration theory can also be used in explaining business related issues including operating, managing and marketing.
Clifton Scott and Karen Myers (2010) studied how the duality of structure can explain shifts in members' actions during membership negotiations in an organization. This is an example of how structure evolves through the interaction of a group of people.
Another case study, by Dutta (2016) and his research team, shows how models shift because of the actions of individuals. The article examines the relationship between a CEO's behavior and a company's cross-border acquisitions, and also demonstrates one of the major dimensions of the duality of structure, the sense of power held by the CEO. The authors found that the process follows the duality of structure: when the CEO is overconfident and the company's resources are limited, the process of cross-border acquisition is likely to differ from what came before.
Elaine J. Yuan's (2011) research focused on a particular demographic of people within the structure. She studied Chinese TV shows and audiences' tastes in them, concluding that, in the relationship between the audience and the TV shows' producers, audience behavior exhibits higher-order patterns.
Pavlou and Majchrzak argued that research on business-to-business e-commerce portrayed technology as overly deterministic. The authors employed structuration theory to re-examine outcomes such as economic/business success as well as trust, coordination, innovation, and shared knowledge. They looked beyond technology into organizational structure and practices, and examined the effects on the structure of adapting to new technologies. The authors held that technology needs to be aligned and compatible with the existing "trustworthy" practices and organizational and market structure. The authors recommended measuring long-term adaptations using ethnography, monitoring and other methods to observe causal relationships and generate better predictions.
See also
Action theory (sociology)
Archaeology of religion and ritual
A Community of Witches § Wicca as a religion of late modernity
Comparative contextual analysis
Constitutive criminology
Grand theory
Health geography
Macrosociology
Social change
Sociology of space
Text and conversation theory
References
External links
Anthony Giddens' The Constitution of Society: An Outline of the Theory of Structuration. Giddens' most comprehensive work on structuration theory. Available in part for free online via Google Books.
This book is intended to provide an accessible introduction to Giddens' work and also to situate structuration theory in the context of other approaches. Available in part for free online via Google Books.
A critical assessment of Giddens' entire body of work. Available in part for free online via Google Books.
Social theory for beginners. Available in part for free online via Google Books.
Anthony Giddens: The theory of structuration - Theory.org.uk.
detailing the structure of structuration theory as contrasted with Talcott Parsons's action theory.
Sociological theories
Critical theory
Social change
Social theories
Romantic literature in English
Romanticism was an artistic, literary, and intellectual movement that originated in Europe toward the end of the 18th century. Scholars regard the publishing of William Wordsworth's and Samuel Coleridge's Lyrical Ballads in 1798 as probably the beginning of the movement in England, and the crowning of Queen Victoria in 1837 as its end. Romanticism arrived in other parts of the English-speaking world later; in the United States, about 1820.
The Romantic period was one of social change in England because of the depopulation of the countryside and the rapid growth of overcrowded industrial cities between 1798 and 1832. The movement of so many people in England was the result of two forces: the Agricultural Revolution, which involved enclosures that drove workers and their families off the land; and the Industrial Revolution, which provided jobs "in the factories and mills, operated by machines driven by steam-power". Indeed, Romanticism may be seen in part as a reaction to the Industrial Revolution ("Romanticism", Encyclopædia Britannica Online), though it was also a revolt against the aristocratic social and political norms of the Age of Enlightenment, as well as a reaction against the scientific rationalization of nature. The French Revolution had an important influence on the political thinking of many Romantic figures at this time as well.
England
18th-century precursors
The Romantic movement in English literature of the early 19th century has its roots in 18th-century poetry, the Gothic novel and the novel of sensibility ("Pre-Romanticism", Encyclopædia Britannica Online Academic Edition, Encyclopædia Britannica Inc., 2012). This includes the pre-Romantic graveyard poets from the 1740s, whose works are characterized by gloomy meditations on mortality, "skulls and coffins, epitaphs and worms". To this was added, by later practitioners, a feeling for the "sublime" and uncanny, and an interest in ancient English poetic forms and folk poetry. These concepts are often considered precursors of the Gothic genre. Gothic poets include Thomas Gray (1716–71), whose Elegy Written in a Country Churchyard (1751) is "the best known product of this kind of sensibility"; William Cowper (1731–1800); Christopher Smart (1722–71); Thomas Chatterton (1752–70); Robert Blair (1699–1746), author of The Grave (1743), "which celebrates the horror of death"; and Edward Young (1683–1765), whose The Complaint, or Night-Thoughts on Life, Death and Immortality (1742–45) is another "noted example of the graveyard genre". Other precursors of Romanticism are the poets James Thomson (1700–48) and James Macpherson (1736–96).
The sentimental novel or "novel of sensibility" developed during the second half of the 18th century. It celebrates the emotional and intellectual concepts of sentiment, sentimentalism and sensibility. Sentimentalism, which is to be distinguished from sensibility, was a fashion in both poetry and prose fiction which began in reaction to the rationalism of the Augustan Age. Sentimental novels relied on emotional response both from their readers and characters. Scenes of distress and tenderness are common, and the plot is arranged to advance emotions rather than action. The result is a valorization of "fine feeling", displaying the characters as models for refined, sensitive emotional effect. The ability to display feelings was thought to show character and experience, and to shape social life and relations. Famous sentimental novels in English include Samuel Richardson's Pamela, or Virtue Rewarded (1740), Oliver Goldsmith's The Vicar of Wakefield (1766), Laurence Sterne's Tristram Shandy (1759–67) and A Sentimental Journey (1768), Henry Brooke's The Fool of Quality (1765–70), Henry Mackenzie's The Man of Feeling (1771) and Maria Edgeworth's Castle Rackrent (1800).
Foreign influences were the Germans Goethe, Schiller and August Wilhelm Schlegel, and French philosopher and writer Jean-Jacques Rousseau (1712–78). Edmund Burke's A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful (1757) is another important influence. The changing landscape, brought about by the expansion of the city and depopulation of the countryside, was another influence on the growth of the Romantic movement. The poor condition of workers, the new class conflicts and the pollution of the environment led to a reaction against urbanism and industrialization, and an emphasis on the beauty and value of nature.
Horace Walpole's 1764 novel The Castle of Otranto created the Gothic fiction genre by combining elements of horror and romance. Ann Radcliffe introduced the brooding figure of the gothic villain, which developed into the Byronic hero. Her most popular and influential work, The Mysteries of Udolpho (1795), is frequently cited as the archetypal Gothic novel. Vathek (1786) by William Beckford and The Monk (1796) by Matthew Lewis were other notable early works in the gothic and horror literary genres. The first English short stories were gothic tales like Richard Cumberland's "remarkable narrative" The Poisoner of Montremos (1791).
Romantic poetry
The physical landscape is prominent in the poetry of this period. The Romantics, and especially Wordsworth, are often described as "nature poets". However, these "nature poems" reveal wider concerns in that they are often meditations on "an emotional problem or personal crisis".
The poet, painter and printmaker William Blake (1757–1827) was an early writer of his kind. Largely disconnected from the major streams of the literature of his time, Blake was generally unrecognized during his lifetime but is now considered a seminal figure in the history of both the poetry and visual arts of the Romantic Age. Considered mad by contemporaries for his idiosyncratic views, Blake is held in high regard by later critics for his expressiveness and creativity, and for the philosophical and mystical undercurrents within his work. Among his most important works are Songs of Innocence (1789) and Songs of Experience (1794), "and profound and difficult 'prophecies'" such as Visions of the Daughters of Albion (1793), The Book of Urizen (1794), Milton (1804–1810) and Jerusalem The Emanation of the Giant Albion (1804–1820).
After Blake, among the earliest Romantics were the Lake Poets, a small group of friends, including William Wordsworth (1770–1850), Samuel Taylor Coleridge (1772–1834), Robert Southey (1774–1843) and journalist Thomas De Quincey (1785–1859). However, at the time, Walter Scott (1771–1832) was the most famous poet. Scott achieved immediate success with his long narrative poem The Lay of the Last Minstrel in 1805, followed by the full epic poem Marmion in 1808. Both were set in the distant Scottish past.
The early Romantic poets brought a new form of emotionalism and introspection, and their emergence is marked by the first romantic manifesto in English literature, the Preface to Lyrical Ballads (1798). In it Wordsworth discusses what he sees as the elements of a new type of poetry, one based on the "real language of men", and which avoids the poetic diction of much 18th-century poetry. Here, Wordsworth gives his famous definition of poetry, as "the spontaneous overflow of powerful feelings" which "takes its origin from emotion recollected in tranquility". The poems in Lyrical Ballads were mostly by Wordsworth, though Coleridge contributed one of the great poems of English literature, the long Rime of the Ancient Mariner, a tragic ballad about the survival of one sailor through a series of supernatural events on his voyage through the South Seas, and involves the symbolically significant slaying of an albatross. Coleridge is also especially remembered for Kubla Khan, Frost at Midnight, Dejection: An Ode, Christabel, as well as the major prose work, Biographia Literaria. His critical work, especially on Shakespeare, was highly influential, and he helped introduce German idealist philosophy to English-speaking culture. Coleridge and Wordsworth, along with Carlyle, were major influences through Emerson, on American transcendentalism. Among Wordsworth's most important poems are Michael, Lines Written a Few Miles Above Tintern Abbey , Resolution and Independence, Ode: Intimations of Immortality and the long, autobiographical epic The Prelude. The Prelude was begun in 1799, but published posthumously in 1850. Wordsworth's poetry is noteworthy for how he "inverted the traditional hierarchy of poetic genres, subjects, and style by elevating humble and rustic life and the plain [...] into the main subject and medium of poetry in general", and how, in Coleridge's words, he awakens in the reader a "freshness of sensation" in his depiction of familiar, commonplace objects.
Robert Southey (1774–1843) was another of the so-called "Lake Poets", and Poet Laureate for 30 years from 1813 to his death in 1843, although his fame has been long eclipsed by that of his contemporaries and friends William Wordsworth and Samuel Taylor Coleridge. Thomas De Quincey (1785–1859) was an English essayist, best known for his Confessions of an English Opium-Eater (1821), an autobiographical account of his laudanum use and its effect on his life. William Hazlitt (1778–1830), friend of both Coleridge and Wordsworth, is another important essayist at this time, though today he is best known for his literary criticism, especially Characters of Shakespear's Plays (1817–18).
Second generation
The second generation of Romantic poets includes Lord Byron (1788–1824), Percy Bysshe Shelley (1792–1822) and John Keats (1795–1821). Byron, however, was still influenced by 18th-century satirists and was perhaps the least "romantic" of the three, preferring "the brilliant wit of Pope to what he called the 'wrong poetical system' of his Romantic contemporaries". Byron achieved enormous fame and influence throughout Europe with works exploiting the violence and drama of their exotic and historical settings. Goethe called Byron "undoubtedly the greatest genius of our century". A trip to Europe resulted in the first two cantos of Childe Harold's Pilgrimage (1812), a mock-heroic epic of a young man's adventures in Europe, but also a sharp satire against London society. The poem contains elements thought to be autobiographical, as Byron generated some of the storyline from experience gained during his travels between 1809 and 1811. However, despite the success of Childe Harold and other works, Byron was forced to leave England for good in 1816 and seek asylum on the Continent, because, among other things, of his alleged incestuous affair with his half-sister Augusta Leigh. Here he joined Percy Bysshe and Mary Shelley, with his secretary John William Polidori, on the shores of Lake Geneva, during the "Year Without a Summer". Polidori's The Vampyre was published in 1819, creating the literary vampire genre. This short story was inspired by the life of Lord Byron and his poem The Giaour (1813). Between 1819 and 1824, Byron published his unfinished epic satire Don Juan, which, though initially condemned by the critics, "was much admired by Goethe who translated part of it".
Shelley is perhaps best known for poems such as Ozymandias, Ode to the West Wind, To a Skylark, Music, When Soft Voices Die, The Cloud, The Masque of Anarchy and Adonais, an elegy written on the death of Keats. Shelley's early profession of atheism, in the tract The Necessity of Atheism, led to his expulsion from Oxford, and branded him as a radical agitator and thinker, setting an early pattern of marginalization and ostracism from the intellectual and political circles of his time. Similarly, Shelley's 1821 essay A Defence of Poetry displayed a radical view of poetry, in which poets act as "the unacknowledged legislators of the world", because, of all artists, they best perceive the undergirding structure of society. His close circle of admirers, however, included the most progressive thinkers of the day, including his future father-in-law, philosopher William Godwin. Works like Queen Mab (1813) reveal Shelley "as the direct heir to the French and British revolutionary intellectuals of the 1790s." Shelley became an idol of the next three or four generations of poets, including important Victorian and Pre-Raphaelite poets such as Robert Browning and Dante Gabriel Rossetti, as well as, later, W. B. Yeats. Shelley's influential poem The Masque of Anarchy (1819) calls for nonviolence in protest and political action. It is perhaps the first modern statement of the principle of nonviolent protest. Mahatma Gandhi's passive resistance was influenced and inspired by Shelley's verse, and Gandhi would often quote the poem to vast audiences.
Though John Keats shared Byron and Shelley's radical politics, "his best poetry is not political", but is especially noted for its sensuous music and imagery, along with a concern with material beauty and the transience of life (Wynne-Davies, pp. 649–50). Among his most famous works are The Eve of St. Agnes, Ode to Psyche, La Belle Dame sans Merci, Ode to a Nightingale, Ode on a Grecian Urn, Ode on Melancholy, To Autumn and the incomplete Hyperion, a "philosophical" poem in blank verse, which was "conceived on the model of Milton's Paradise Lost". Keats' letters "are among the finest in English" and important "for their discussion of his aesthetic ideas", including 'negative capability'. Keats has always been regarded as a major Romantic, "and his stature as a poet has grown steadily through all changes of fashion".
Other poets
Another important poet in this period was John Clare (1793–1864). Clare was the son of a farm labourer, who came to be known for his celebratory representations of the English countryside and his lamentation for the changes taking place in rural England. His poetry underwent a major re-evaluation in the late 20th century and he is often now considered to be among the most important 19th-century poets. His biographer Jonathan Bate states that Clare was "the greatest labouring-class poet that England has ever produced. No one has ever written more powerfully of nature, of a rural childhood, and of the alienated and unstable self".
George Crabbe (1754–1832) was an English poet who, during the Romantic period, wrote "closely observed, realistic portraits of rural life [...] in the heroic couplets of the Augustan age". Lord Byron, who was an admirer of Crabbe's poetry, described him as "nature's sternest painter, yet the best". Modern critic Frank Whitehead has said that "Crabbe, in his verse tales in particular, is an important – indeed, a major – poet whose work has been and still is seriously undervalued". Crabbe's works include The Village (1783), Poems (1807), The Borough (1810), and his poetry collections Tales (1812) and Tales of the Hall (1819).
Female poets
Female writers were increasingly active in all genres throughout the 18th century, and by the 1790s women's poetry was flourishing. Notable poets later in the period include Anna Laetitia Barbauld, Joanna Baillie, Susanna Blamire and Hannah More. Other women poets include Mary Alcock ( – 1798) and Mary Robinson (1758–1800), both of whom "highlighted the enormous discrepancy between life for the rich and the poor", and Felicia Hemans (1793–1835), author of nineteen individual books during her lifetime, whose work continued to be republished widely after her death in 1835.
More interest has been shown in recent years in Dorothy Wordsworth (1771–1855), William's sister, who "was modest about her writing abilities, [but] she produced poems of her own; and her journals and travel narratives certainly provided inspiration for her brother".
In the past decades, there has been substantial scholarly and critical work done on women poets of this period: first, to make them available in print or online, and second, to assess them and position them within the literary tradition. In particular, Felicia Hemans, although adhering to its forms, began a process of undermining the Romantic tradition, a deconstruction that was continued by Letitia Elizabeth Landon (1802–1838) (Adriana Craciun (2003), "Letitia Landon's philosophy of decomposition", Fatal Women of Romanticism, Chapter 6, Cambridge University Press). Landon's novel forms of metrical romance and dramatic monologue were much copied and had a long and lasting influence on Victorian poetry. Her work is now frequently classified as post-romantic (Anne-Julia Zwierlein, Section 19: "Poetic Genres in the Victorian Age. I: Letitia Elizabeth Landon's and Alfred Lord Tennyson's Post-Romantic Verse Narratives", in Baumbach and others, A History of British Poetry, Trier, WVT). She also produced three completed novels, a tragedy, and numerous short stories.
Romantic novel
Mary Shelley (1797–1851) is remembered as the author of Frankenstein (1818). The plot of this is said to have come from a waking dream she had, in the company of Percy Shelley, Lord Byron, and John Polidori, following a conversation about galvanism and the feasibility of returning a corpse or assembled body parts to life, and on the experiments of the 18th-century natural philosopher and poet Erasmus Darwin, who was said to have animated dead matter. Sitting around a log fire at Byron's villa, the company also amused themselves by reading German ghost stories, prompting Byron to suggest they each write their own supernatural tale.
Jane Austen's works critique the novels of sensibility of the second half of the 18th century and are part of the transition to 19th-century realism. Her plots, though fundamentally comic, highlight the dependence of women on marriage to secure social standing and economic security. Austen brings to light the hardships women faced, since they usually did not inherit money, could not work and were largely dependent on their husbands. She reveals not only the difficulties women faced in her day, but also what was expected of men and of the careers they had to follow. This she does with wit and humour and with endings where all characters, good or bad, receive exactly what they deserve. Her work brought her little personal fame and only a few positive reviews during her lifetime, but the publication in 1869 of her nephew's A Memoir of Jane Austen introduced her to a wider public, and by the 1940s she had become accepted as a major writer. The second half of the 20th century saw a proliferation of Austen scholarship and the emergence of a Janeite fan culture. Austen's works include Sense and Sensibility (1811), Pride and Prejudice (1813), Mansfield Park (1814), Emma (1815), Northanger Abbey (1817) and Persuasion (1817).
Drama
Byron, Keats and Percy Shelley all wrote for the stage, but with little success in England, with Shelley's The Cenci perhaps the best work produced, though that was not played in a public theatre in England until a century after his death. Byron's plays, along with dramatizations of his poems and Scott's novels, were much more popular on the Continent, and especially in France, and through these versions, several were turned into operas, many still performed today. If contemporary poets had little success on the stage, the period was a legendary one for performances of Shakespeare, and went some way to restoring his original texts and removing the Augustan "improvements" to them. The greatest actor of the period, Edmund Kean, restored the tragic ending to King Lear; Coleridge said that, “Seeing him act was like reading Shakespeare by flashes of lightning.”
Wales
Wales had its own Romantic movement, especially in Welsh literature (which was rarely translated or known outside Wales). The countryside and history of Wales exerted an influence on the Romantic imagination of Britons, especially in travel writings, and the poetry of Wordsworth.
The "poetry and bardic vision" of Edward Williams (1747–1826), better known by his bardic name Iolo Morganwg, bear the hallmarks of Romanticism. "His Romantic image of Wales and its past had a far-reaching effect on the way in which the Welsh envisaged their own national identity during the nineteenth century."Shawna Lichtenwalner (2008), Claiming Cambria: Invoking the Welsh in the Romantic Era. University of Delaware Press.
Scotland
James Macpherson was the first Scottish poet to gain an international reputation. Claiming to have found poetry written by the ancient bard Ossian, he published "translations" that acquired international popularity, being proclaimed as a Celtic equivalent of the Classical epics. Fingal, written in 1762, was speedily translated into many European languages, and its appreciation of natural beauty and treatment of the ancient legend have been credited, more than any single work, with bringing about the Romantic movement in European, and especially in German, literature, through its influence on Johann Gottfried von Herder and Johann Wolfgang von Goethe. It was also popularised in France by figures that included Napoleon. Eventually it became clear that the poems were not direct translations from the Gaelic, but flowery adaptations made to suit the aesthetic expectations of his audience. Both Robert Burns (1759–1796) and Walter Scott (1771–1832) were highly influenced by the Ossian cycle. Burns was a pioneer of the Romantic movement, and after his death he became a cultural icon in Scotland. As well as writing poems, Burns also collected folk songs from across Scotland, often revising or adapting them. His Poems, Chiefly in the Scottish Dialect was published in 1786. Among the poems and songs of Burns that remain well known across the world are Auld Lang Syne; A Red, Red Rose; A Man's A Man for A' That; To a Louse; To a Mouse; The Battle of Sherramuir; Tam o' Shanter and Ae Fond Kiss.
One of the most important British novelists of the early 19th century was Sir Walter Scott, who was not only highly popular, but "the greatest single influence on fiction in the 19th century [...] [and] a European figure". Scott's novel-writing career was launched in 1814 with Waverley, often called the first historical novel, and was followed by Ivanhoe. The Waverley Novels, including The Antiquary, Old Mortality and The Heart of Midlothian, which take Scottish history as their subject, are now generally regarded as Scott's masterpieces. He was one of the most popular novelists of the era, and his historical romances inspired a generation of painters, composers, and writers throughout Europe, including Franz Schubert, Felix Mendelssohn and J. M. W. Turner. His novels also inspired many operas, of which the most famous are Lucia di Lammermoor (1835) by Donizetti and Bizet's La jolie fille de Perth (The Fair Maid of Perth, 1867). However, today his contemporary Jane Austen is widely read and the source for films and television series, while Scott is comparatively neglected.
America
The European Romantic movement reached America in the early 19th century. American Romanticism was just as multifaceted and individualistic as it was in Europe. Like the Europeans, the American Romantics demonstrated a high level of moral enthusiasm, commitment to individualism and the unfolding of the self, an emphasis on intuitive perception, and the assumption that the natural world was inherently good, while human society was filled with corruption. Romanticism became popular in American politics, philosophy and art. The movement appealed to the revolutionary spirit of America as well as to those longing to break free of the strict religious traditions of early settlement. The Romantics rejected rationalism and religious intellect. The movement appealed to those opposed to Calvinism, which includes the belief that the destiny of each individual is preordained.
Romantic Gothic literature made an early appearance with Washington Irving's The Legend of Sleepy Hollow (1820) and Rip Van Winkle (1819); there are picturesque "local color" elements in Irving's essays and especially his travel books. From 1823, the prolific and popular novelist James Fenimore Cooper (1789–1851) began publishing his historical romances of frontier and Indian life, creating a unique form of American literature. Cooper is best remembered for his numerous sea-stories and the historical novels known as the Leatherstocking Tales, with their emphasis on heroic simplicity and their fervent landscape descriptions of an already-exotic mythicized frontier peopled by "noble savages", exemplified by Uncas from The Last of the Mohicans (1826); these works show the influence of Rousseau's (1712–78) philosophy. Edgar Allan Poe's tales of the macabre, which first appeared in the early 1830s, and his balladic poetry were more influential in France than at home.Ann Woodlief, "American Romanticism (or the American Renaissance): Introduction". English Department, Virginia Commonwealth University
By the mid-19th century, the pre-eminence of literature from the British Isles began to be challenged by writers from the former American colonies. This included one of the creators of the new genre of the short story, and inventor of the detective story Edgar Allan Poe (1809–49). A major influence on American writers at this time was Romanticism.
The Romantic movement gave rise to New England Transcendentalism, which portrayed a less restrictive relationship between God and Universe. The publication of Ralph Waldo Emerson's 1836 essay Nature is usually considered the watershed moment at which transcendentalism became a major cultural movement. The new philosophy presented the individual with a more personal relationship with God. Transcendentalism and Romanticism appealed to Americans in a similar fashion, for both privileged feeling over reason, individual freedom of expression over the restraints of tradition and custom. It often involved a rapturous response to nature. It encouraged the rejection of harsh, rigid Calvinism, and promised a new blossoming of American culture.
The romantic American novel developed fully with Nathaniel Hawthorne's (1804–1864) The Scarlet Letter (1850), a stark drama of a woman cast out of her community for committing adultery. Hawthorne's fiction had a profound impact on his friend Herman Melville (1819–1891). In Moby-Dick (1851), an adventurous whaling voyage becomes the vehicle for examining such themes as obsession, the nature of evil, and human struggle against the elements. By the 1880s, however, psychological and social realism were competing with Romanticism in the novel.
See also
Irish literature
Literature of Northern Ireland
Post-romanticism
References
Cited sources
External links
British Women Romantic Poets, 1789 - 1832
Encyclopaedia Britannica, 11th ed., 1910–1911
Romanticism via Discovering Literature: Romantics and Victorians at the British Library
The Romantics, In Our Time, BBC Radio 4 discussion with Jonathan Bate, Rosemary Ashton and Nicholas Roe (Oct. 12, 2000)
The Later Romantics, In Our Time, BBC Radio 4 discussion with Jonathan Bate, Robert Woof & Jennifer Wallace (Apr. 15, 2004)
1798 establishments in Great Britain
1837 disestablishments in the United Kingdom
English-language literature
History of literature in the United Kingdom
Romantic literature
18th-century British literature
19th-century British literature
Quantitative history
Quantitative history is a method of historical research that uses quantitative, statistical and computer resources. It is a branch of social science history and has four major journals: Historical Methods (1967– ), Journal of Interdisciplinary History (1968– ), Social Science History (1976– ), and Cliodynamics: The Journal of Quantitative History and Cultural Evolution (2010– ).
Quantitative historians use databases as their main sources of information. Large quantities of political, economic and demographic data, such as census information on individuals and election returns, are available in print or manuscript format, and much of this material has been converted into computer databases. The largest repository at present is the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan, which provides access to an extensive collection of downloadable political and social data for the United States and the world. Quantitative historians use statistical methods to find patterns of human behavior covering all sectors of society, not just the elites who created the documents preserved in traditional archives.
Databases
Content analysis is a technique borrowed from journalism research whereby texts from newspapers, magazines or similar sources are coded numerically according to a standardized list of topics.
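To make the mechanics concrete, the following minimal Python sketch codes a piece of text against a keyword list; the topics, keywords and sample sentence are invented for illustration rather than drawn from any published codebook.

```python
# Illustrative sketch only: the topic keywords and the sample sentence are
# invented, not taken from any real content-analysis codebook.
from collections import Counter
import re

TOPICS = {
    "economy": {"tariff", "wages", "price", "market"},
    "war": {"army", "battle", "troops", "invasion"},
    "religion": {"church", "clergy", "faith", "sermon"},
}

def code_article(text):
    """Count how often each topic's keywords occur in one article."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for topic, keywords in TOPICS.items():
            if word in keywords:
                counts[topic] += 1
    return counts

sample = "The tariff debate dominated the market pages while troops massed at the border."
print(code_article(sample))  # Counter({'economy': 2, 'war': 1})
```

In practice the coded counts for many articles would then be analysed statistically, for example to track how press attention to a topic changed over time.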
Economic history
Economic historians use major data sets, especially those collected by governments since the 1920s. Historians of slavery have used census data, sales receipts and price information to reconstruct the economic history of slavery.
Political history
Quantifiers study topics like voting behavior of groups in elections, the roll call behavior of legislators, public opinion distribution, and the occurrence rate of wars and legislation. Collective biography uses standardized information for a large group to deduce patterns of thought and behavior.
Social history
Social historians using quantitative methods (sometimes termed "new social historians", as they were "new" during the 1960s) use census data and other data sets to study entire populations. Topics include demographic issues such as population growth rates, rates of birth, death, marriage and disease, occupational and education distributions, genealogy and migrations and population changes.
A challenging technique is nominal record linkage: associating occurrences of the same person's name across multiple sources such as censuses, city directories, employment files and voter registration lists.
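A minimal sketch of how such linkage might be automated is shown below; the sample records, the name-similarity measure and the 0.8 matching threshold are illustrative assumptions, not a description of any particular project's procedure.

```python
# Illustrative sketch only: the records, the string-similarity measure and the
# 0.8 threshold are assumptions for demonstration, not any project's real method.
from difflib import SequenceMatcher

census_1880 = [{"name": "John A. Smith", "occupation": "carpenter"},
               {"name": "Mary OBrien", "occupation": "teacher"}]
directory_1885 = [{"name": "Jno. A. Smith", "address": "12 Elm St"},
                  {"name": "Mary O'Brien", "address": "4 Mill Rd"}]

def similarity(a, b):
    """Rough name similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(left, right, threshold=0.8):
    """Pair each left-hand record with its most similar right-hand record."""
    links = []
    for person in left:
        best = max(right, key=lambda r: similarity(person["name"], r["name"]))
        score = similarity(person["name"], best["name"])
        if score >= threshold:
            links.append((person["name"], best["name"], round(score, 2)))
    return links

print(link_records(census_1880, directory_1885))
```

Real projects add blocking on birthplace, age or address and manual review of doubtful matches, since name spelling alone is rarely decisive.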
Cliodynamics is the application of scientific method to the study of history, combining insights from cultural evolution, macrosociology, and economic history/cliometrics to produce and analyse large quantitative datasets and identify general principles about the evolutionary dynamics and functioning of historical societies.
Topics
During 2007–2008, the five most viewed articles in Social Science History were:
S. J. Kleinberg, "Children's and Mothers' Wage Labor in Three Eastern U.S. Cities, 1880-1920" Mar 01, 2005; 29: 45-76.
Ted L. Gragson, Paul V. Bolstad, "A Local Analysis of Early-Eighteenth-Century Cherokee Settlement," Sep 01, 2007; 31: 435-468.
Helen Boritch, "The Criminal Class Revisited: Recidivism and Punishment in Ontario, 1871-1920," Mar 01, 2005; 29: 137-170.
Javier Silvestre, "Temporary Internal Migrations in Spain, 1860-1930," Dec 01, 2007; 31: 539-574.
Eric W. Sager, "The Transformation of the Canadian Domestic Servant, 1871-1931" Dec 01, 2007; 31: 509-537.
See also
Cliodynamics
Demographic history
Digital history
H-Net
Historiometrics
Modifiable temporal unit problem
New economic history
Qualitative geography
Quantitative geography
Time geography
References
Bibliography
Aydelotte, William O., Allan G. Bogue, and Robert William Fogel, eds. The Dimensions of Quantitative Research in History (Princeton UP, 1972). Essays by leading pioneers with case studies in the social, political, and economic development of the United States, France, and Great Britain.
Campbell, D'Ann, and Richard Jensen. "Community and Family History at the Newberry Library: Some Solutions to a National Need." The History Teacher 11.1 (1977): 47-54. online
Clubb, Jerome M., Erik W. Austin, and Gordon W. Kirk, Jr. The Process of Historical Inquiry: Everyday Lives of Working Americans (Columbia University Press, 1989). Uses case study of American textile workers in 1888-90
Clubb, J. M., and E. K. Scheuch (eds.) Historical Social Research: The Use of Historical and Process-Produced Data, Stuttgart 1980. European emphasis.
Crymble, Adam Technology and the Historian: Transformations in the Digital Age (U of Illinois Press, 2021)
Dollar, Charles, and Richard Jensen. Historian's Guide to Statistics, (Holt, 1971; Krieger 1973); detailed textbook of quantitative political and social history with bibliography
Erickson, Charlotte. "Quantitative history." American Historical Review 80.2 (1975): 351-365.
Floud, Roderick. "Quantitative history: Evolution of methods and techniques." Journal of the Society of Archivists 5.7 (1977): 407-417.
Fogel, Robert William. "The limits of quantitative methods in history." American Historical Review 80.2 (1975): 329-350. Focus on economic history.
Fogel, Robert William and G. R. Elton, Which Road to the Past: Two Views of History (Yale University Press, 1983). Debate over merits.
Furet, François. "Quantitative history." Daedalus (1971): 151-167. The uses in France.
Haskins, Loren and Kirk Jeffrey. Understanding Quantitative History (M.I.T. Press, 1990). textbook
Hollingsworth, T.H. Historical Demography. Hodder & Stoughton, London 1969
Hudson, Pat. History by Numbers: An Introduction to Quantitative Approaches (Arnold, 2000). Comprehensive textbook; examples drawn mainly from British sources.
Jarausch, Konrad H. "The International Dimension of Quantitative History: Some Introductory Reflections." Social Science History 8.2 (1984): 123-132.
Jarausch, Konrad H. and Kenneth A. Hardy, Quantitative Methods for Historians: A Guide to Research, Data, and Statistics (University of North Carolina Press, 1991). textbook
Jensen, Richard. "The Accomplishments of the Newberry Library Family and Community History Programs: An Interview with Richard Jensen." The Public Historian 5.4 (1983): 49-61. online
Kimberly A. Neuendorf. The Content Analysis Guidebook (2002)
Kousser, J.M., "History QUASSHed: quantitative social scientific history." American Behavioral Scientist 23(1980), p. 885-904
Lorwin, Val R. and. J. M. Price, ed. The Dimensions of the Past: Materials, Problems and Opportunities for Quantitative Work in History, Yale UP 1972
Monkkonen, Eric H. "The challenge of quantitative history." Historical Methods: A Journal of Quantitative and Interdisciplinary History 17.3 (1984): 86-94.
Rowney, D.K., (ed.) Quantitative History: Selected Readings in the Quantitative Analysis of Historical Data, 1969
Swierenga, Robert P., ed. Quantification in American History: Theory and Research (Atheneum, 1970). Early essays on methodology, and examples of political, economic, and social history.
Wrigley, E.A. (ed.) Identifying People in the Past. Edward Arnold, 1973. Using demographic and census data
Other sources
Grinin, L. 2007. Periodization of History: A theoretic-mathematical analysis. In: History & Mathematics: Analyzing and Modeling Global Development. Edited by Leonid Grinin, Victor C. de Munck, and Andrey Korotayev. Moscow: KomKniga, 2006. pp. 10–38.
Kimberly A. Neuendorf. (2002). The Content Analysis Guidebook. Los Angeles: Sage.
Moyal, J.E. (1949) The distribution of wars in time. Journal of the Royal Statistical Society, 112, 446-458.
Richardson, L. F. (1960). Statistics of deadly quarrels. Pacific Grove, CA: Boxwood Press.
Silver, N. C. & Hittner, J. B. (1998). Guidebook of statistical software for the social and behavioral sciences. Boston, MA: Allyn & Bacon.
Turchin, P., et al., eds. (2007). History & Mathematics: Historical Dynamics and Development of Complex Societies. Moscow: KomKniga.
Wilkinson, D. (1980). Deadly Quarrels: Lewis F. Richardson and the Statistical Study of War. Berkeley, CA: University of California Press.
Wright, Q. (1965). A Study of War. 2nd ed. Chicago: University of Chicago Press.
External links
Interuniversity Consortium for Political and Social Research (ICPSR) at the University of Michigan
Quantitative History
Fields of history
Social science methodology
Quantitative research
Progenitor
In genealogy, the progenitor (rarer: primogenitor; German: Ahnherr) is the – sometimes legendary – founder of a family, line of descent, clan or tribe, noble house, or ethnic group. Genealogy (commonly known as family history) understands a progenitor to be the earliest recorded ancestor of a consanguineous family group of descendants.
The progenitor identified for a family is sometimes used to describe the status of a genealogical research project, or to compare the availability of genealogical data in different times and places. Progenitors are often implied to be patrilineal; if a patrilineal dynasty is considered, each such dynasty has exactly one progenitor.
Aristocratic and dynastic families often look back to an ancestor who is seen as the founder and progenitor of their house (i.e. family line). Even the old Roman legal concept of agnates (relatives through the male line) was based on the idea of the unbroken family line of a progenitor, but it included only those related through male members of the family, whilst kinship traced through women was referred to as "cognatic".
It is rarely possible to confirm biological parenthood in the case of ancient family lines (see bastardy). In addition, the progenitor is often a distant ancestor, known only through oral tradition. Where peoples and communities rely solely on a patrilineal family line, their common ancestor often becomes the subject of a legend surrounding the origin of the family. By contrast, families and peoples with a matrilineal history trace themselves back to an original female progenitrix. Matrilineal rules of descent are found in about 200 of the 1,300 known indigenous peoples and ethnic groups worldwide, whilst around 600 have patrilineal rules of descent (from father to son).
In the mythological beliefs of the Romans, the god of war, Mars, was viewed as the progenitor of the Romans, which is why the Mars symbol (♂, a shield and spear) is used to refer to the male sex. Besides cities and countries, ethnic groups may also have a progenitor (often a god) in their mythologies; for example, the Greeks (Hellenes) looked back to Hellen as their progenitor. In Hinduism, Manu is the progenitor of all mankind. In the Abrahamic religions, Adam, Noah, Abraham and others are described as progenitors (see also Biblical patriarchy).
In archaeogenetics (archaeological genetics), a human Y-chromosomal Adam has been named as the most recent common ancestor from whom all currently living people are descended patrilineally. This "Adam" lived in Africa at a time variously estimated at between 60,000 and 338,000 years ago, while Mitochondrial Eve, the most recent common ancestor in the matrilineal line, is estimated to have lived between 100,000 and 230,000 years ago. (There is no suggestion that this "Eve" and "Adam" lived at nearby times or places, and there were many other common ancestors in other lines of descent.)
See also
Protoplast, progenitors of mankind in a creation story
Ancestor
Ahnentafel
Legendary progenitor
Progenitor cell
References
Kinship and descent
Genealogy
Ancestors
Social
Social organisms, including humans, live collectively in interacting populations. This interaction is considered social whether they are aware of it or not, and whether the exchange is voluntary or not.
Etymology
The word "social" derives from the Latin word socii ("allies"). It is particularly derived from the Italian Socii states, historical allies of the Roman Republic (although they rebelled against Rome in the Social War of 91–87 BC).
Social theorists
In the view of Karl Marx, human beings are intrinsically, necessarily and by definition social beings who, beyond being "gregarious creatures", cannot survive and meet their needs other than through social co-operation and association. Their social characteristics are therefore to a large extent an objectively given fact, stamped on them from birth and affirmed by socialization processes; and, according to Marx, in producing and reproducing their material life, people must necessarily enter into relations of production which are "independent of their will".
By contrast, the sociologist Max Weber for example defines human action as "social" if, by virtue of the subjective meanings attached to the action by individuals, it "takes account of the behavior of others, and is thereby oriented in its course".
In socialism
The term "socialism", used from the 1830s onwards in France and the United Kingdom, was directly related to what was called the social question. In essence, early socialists contended that the emergence of competitive market societies did not create "liberty, equality and fraternity" for all citizens, requiring the intervention of politics and social reform to tackle social problems, injustices and grievances (a topic on which Jean-Jacques Rousseau discourses at length in his classic work The Social Contract). Originally the term "socialist" was often used interchangeably with "co-operative", "mutualist", "associationist" and "collectivist" in reference to the organization of economic enterprise socialists advocated, in contrast to the private enterprise and corporate organizational structures inherent to capitalism.
The modern concept of socialism evolved in response to the development of industrial capitalism. The "social" in modern "socialism" came to refer to the specific perspective and understanding socialists had of the development of material, economic forces and determinants of human behavior in society. Specifically, it denoted the perspective that human behavior is largely determined by a person's immediate social environment, that modes of social organization were not supernatural or metaphysical constructs but products of the social system and social environment, which were in turn products of the level of technology/mode of production (the material world), and were therefore constantly changing. Social and economic systems were thus not the product of innate human nature, but of the underlying form of economic organization and level of technology in a given society, implying that human social relations and incentive-structures would also change as social relations and social organization changes in response to improvements in technology and evolving material forces (relations of production). This perspective formed the bulk of the foundation for Karl Marx's materialist conception of history.
Modern uses
In contemporary society, "social" often refers to the redistributive policies of the government which aim to apply resources in the public interest, for example, social security. Policy concerns then include the problems of social exclusion and social cohesion. Here, "social" contrasts with "private" and to the distinction between the public and the private (or privatised) spheres, where ownership relations define access to resources and attention.
The social domain is often also contrasted with that of physical nature, but in sociobiology analogies are drawn between humans and other living species in order to explain social behavior in terms of biological factors.
See also
Social construct
Social cue
Social issues
Social media
Social network
Social networking service
Social neuroscience
Social pension
Social psychology
Social skills
Social studies
Social support
Social undermining
Social work
Sociology
References
External links
Dolwick, JS. 2009. The 'Social' and Beyond: Introducing Actor Network Theory, article examining different meanings of the concept 'social'
Sociological terminology
Social sciences terminology
Global politics
Global politics, also known as world politics, names both the discipline that studies the political and economic patterns of the world and the field that is being studied. At the centre of that field are the different processes of political globalization in relation to questions of social power.
The discipline studies the relationships between cities, nation-states, shell-states, multinational corporations, non-governmental organizations and international organizations. Current areas of discussion include national and ethnic conflict regulation, democracy and the politics of national self-determination, globalization and its relationship to democracy, conflict and peace studies, comparative politics, political economy, and the international political economy of the environment. One important area of global politics is contestation in the global political sphere over legitimacy.
Global politics is said by some to be distinct from the field of international politics (commonly seen as a branch of international relations), as it "does not stress the primacy of intergovernmental relations and transactions". This distinction however has not always been held among authors and political scientists, who often use the term "international politics" to mean global politics.
It has been suggested that global politics may be best understood as an "imaginary" of a political space existing beyond the sub-national, national, and international. This imaginary structures global politics as both a field of study and a set of practices, and though it only rose to prominence in the late twentieth century, has longer historical roots stretching back at least to the creation of medieval mappa mundi and to first contact between Afro-Eurasia and the Americas through colonialism and the Age of Sail.
Defining the field
Beginning in the late nineteenth century, several groups extended the definition of the political community beyond nation-states to include much, if not all, of humanity. These internationalists include Marxists, human rights advocates, environmentalists, peace activists, feminists, and minority groups. This was the general direction of thinking on global politics, though the term was not used as such. The way in which modern world politics is implemented is structured by a set of interpretations dating back to the rise of the European powers. They were able to overtake the rest of the world in terms of economic and military power. Europeans, with their global supremacy, imposed their own system and views on others, through envisioning the world as a whole and defining the regions of the world as 'modern' or 'backward'. They saw nation statehood as the best and highest form of political organization, therefore viewing world politics as the result of the pursuit of hegemony by competing states.
The modern world politics perspective is often identified with the works of Robert Keohane and Joseph Nye, in particular their 1972 volume Transnational Relations and World Politics. There, the authors argued that state-centric views of international relations were inadequate frameworks for political science or international relations studies because of increasing globalization. Today, the practices of global politics are defined by values: norms of human rights, ideas of human development, and beliefs such as internationalism or cosmopolitanism about how we should relate to each other. Over the last couple of decades, cosmopolitanism has become one of the key contested ideologies of global politics.
The intensification of globalization led some writers to suggest that states were no longer relevant to global politics. This view has been subject to debate.
Cyclical theories in global politics
George Modelski
George Modelski defines global order as a 'management network centred on a lead unit and contenders for leadership, (pursuing) collective action at the global level'. The system is allegedly cyclical. Each cycle is about 100 years' duration and a new hegemonic power appears each time:
Portugal 1492-1580; in the Age of Discovery
The Netherlands 1580-1688; beginning with the Eighty Years' War, 1579-1588
United Kingdom (1) 1688-1792; beginning with the wars of Louis XIV
United Kingdom (2) 1792-1914; beginning with the French Revolution and Napoleonic wars
The United States 1914 to (predicted) 2030; beginning with World Wars I and II.
Each cycle has four phases;
1, Global war, which a) involves almost all global powers, b) is 'characteristically naval', c) is caused by a system breakdown, d) is extremely lethal, e) results in a new global leader, capable of tackling global problems. The war is a 'decision process' analogous to a national election. The Thirty Years War, though lasting and destructive, was not a 'global war'.
2, World power, which lasts for 'about one generation'. The new incumbent power 'prioritises global problems', mobilises a coalition, is decisive and innovative. Pre-modern communities become dependent on the hegemonic power
3, Delegitimation. This phase can last for 20–27 years; the hegemonic power falters, as rival powers assert new nationalistic policies.
4, Deconcentration. The hegemony's problem-solving capacity declines. It yields to a multipolar order of warring rivals. Pre-modern communities become less dependent. A challenger appears (successively, Spain, France, France, Germany, and the USSR) and a new global war ensues.
The hegemonic nations tend to have: 'insular geography'; a stable, open society; a strong economy; strategic organisation, and strong political parties. By contrast, the 'challenger' nations have: closed systems; absolute rulers; domestic instability; and continental geographic locations.
The long cycle system is repetitive, but also evolutionary. According to Modelski, it originated in about 1493 through a) the decline of Venetian naval power, b) Chinese abandonment of naval exploration, and c) discovery of sea routes to India and the Americas. It has developed in parallel with the growth of the nation-state, political parties, command of the sea, and 'dependency of pre-modern communities'. The system is flawed, lacking in coherence, solidarity, and capacity to address the North-South divide. Modelski speculates that US deconcentration might be replaced by a power based in the 'Pacific rim' or by an explicit coalition of nations, as 'co-operation is urgently required in respect of nuclear weapons'.
Modelski 'dismisses the idea that international relations are anarchic'. His research, influenced by Immanuel Wallerstein, was 'measured in decades... a major achievement', says Peter J. Taylor.
Joshua S. Goldstein
Goldstein in 1988 posited a 'hegemony cycle' of 150 years' duration, the four hegemonic powers since 1494 being:
Hapsburg Spain, 1494-1648; ended by the Thirty Years War, in which Spain itself was the 'challenger'; the Treaty of Westphalia and the beginnings of the nation-state.
the Netherlands, 1648-1815; ended by the challenge from France of the revolutionary and Napoleonic wars, the Treaty of Vienna and introduction of the Congress System
Great Britain, 1815-1945; ended by Germany's challenge in two World Wars, and the postwar settlement, including the World Bank, IMF, GATT, the United Nations and NATO
the United States, since 1945.
Goldstein suggests that US hegemony may 'at an indeterminate time' be challenged and ended by China (the 'best fit'), by western Europe, Japan, or (writing in 1988) the USSR. The situation is unstable due to the continuance of Machiavellian Power politics and the deployment of nuclear weapons. The choice lies between 'global cooperation or global suicide'. Thus there may be 'an end to hegemony itself'.
Goldstein speculates that Venetian hegemony, ceded to Spain in 1494, may have begun in 1350.
See also
Anti-globalization movement
Global citizenship
Global governance
Power politics
Power Politics (Wight book)
World society
References
Notes
Further reading
Held, David, Anthony McGrew, David Goldblatt and Jonathan Perraton, Global Transformations: Politics, Economy and Culture, Cambridge, Polity Press, 1999.
Heywood, Andrew and Ben Whitham, Global Politics (3rd Edition), London, Bloomsbury Publishing, 2023.
McGrew, AG, and Lewis, PG, Global Politics, Cambridge, Polity Press, 1992.
External links
Global Power Barometer
Center for Global Politics
Berlin Forum on Global Politics
Cultural globalization
European social model
The European social model is a concept that emerged in the discussion of economic globalization and typically contrasts the degree of employment regulation and social protection in European countries to conditions in the United States. It is commonly cited in policy debates in the European Union, including by representatives of both labour unions and employers, to connote broadly "the conviction that economic progress and social progress are inseparable" and that "[c]ompetitiveness and solidarity have both been taken into account in building a successful Europe for the future".
While European states do not all use a single social model, welfare states in Europe share several broad characteristics. These generally include an acceptance of political responsibility for levels and conditions of employment, social protections for all citizens, social inclusion, and democracy. Examples common among European countries include universal health care, free higher education, strong labor protections and regulations, and generous welfare programs in areas such as unemployment insurance, retirement pensions, and public housing. The Treaty of the European Community set out several social objectives: "promotion of employment, improved living and working conditions ... proper social protection, dialogue between management and labour, the development of human resources with a view to lasting high employment and the combating of exclusion." Because different European states focus on different aspects of the model, it has been argued that there are four distinct social models in Europe: the Nordic, British, Mediterranean and the Continental.
The general outlines of a European social model emerged during the post-war boom. Tony Judt lists a number of causes: the abandonment of protectionism, the baby boom, cheap energy, and a desire to catch up with living standards enjoyed in the United States. The European social model also enjoyed a low degree of external competition as the Soviet bloc, China and India were still isolated from the rest of the global economy. In recent years, some have questioned whether the European social model is sustainable in the face of low birthrates, globalisation, Europeanisation and an ageing population.
Welfare state in Europe
Some of the European welfare states have been described as the most developed and extensive in the world. A unique "European social model" is described in contrast with the social model existing in the US. Although each European country has its own singularities, four traditional welfare or social models are identified in Europe, as well as a possible fifth one to cover formerly communist Central and Eastern Europe:
The Nordic (Social democratic) model in Denmark, Finland, Norway, Sweden, and the Netherlands
The Continental (Christian democratic) model in Austria, Belgium, Czech Republic, France, Germany, Hungary, Luxembourg, Poland, and Slovenia
The Anglo-Saxon (Liberal) model in Ireland and the United Kingdom
The Mediterranean model in Greece, Italy, Portugal, and Spain
The Eastern European model in Bulgaria, Estonia, Latvia, Lithuania, and Romania
Nordic model
The Nordic model provides the highest level of social insurance of the four traditional models. Its main characteristic is its universal provision, which is based on the principle of "citizenship". There is therefore more generalised access, with lower conditionality, to social provisions.
As regards the labour market, these countries are characterised by important expenditure on active labour market policies, whose aim is the rapid reinsertion of the unemployed into the labour market. They are also characterised by a high share of public employment. Trade unions have high membership and important decision-making power, which induces low wage dispersion or a more equitable income distribution.
The Nordic model is also characterised by a high tax wedge.
Continental model
The Continental model has some similarities with the Nordic model. Nevertheless, it devotes a higher share of its expenditure to pensions. The model is based on the principle of "security" and a system of subsidies which are not conditional on employability (for example, in France and Belgium there are subsidies whose only requirement is being older than 25).
As regards the labour market, active policies are less important than in the Nordic model and in spite of a low membership rate, trade-unions have important decision-making powers in collective agreements.
Another important aspect of the Continental model is the disability pensions.
Anglo-Saxon model
The Anglo-Saxon model features a lower level of expenditure than the previous ones. Its main particularity is its social assistance of last resort. Subsidies are directed to a greater extent to the working-age population and to a lesser extent to pensions. Access to subsidies is more strongly conditional on employability (for instance, subsidies may be conditional on having worked previously).
Active labour market policies are important. Instead, trade unions have smaller decision-making powers than in the previous models, this is one of the reasons explaining their higher income dispersion and their higher number of low-wage employments.
Mediterranean model
The Mediterranean model corresponds to the southern European countries, which developed their welfare states later than the others (during the 1970s and 1980s). It is the model with the lowest share of expenditure and is strongly based on pensions, with a low level of social assistance. In these countries there is a greater segmentation of the rights and status of those receiving subsidies, one consequence of which is strongly conditional access to social provisions.
The main characteristic of labour market policies is a rigid employment protection legislation and a frequent resort to early retirement policies as a means to improve employment conditions. Trade unions tend to have an important membership which again is one of the explanations behind a lower income dispersion than in the Anglo-Saxon model.
Evaluating the different social models
To evaluate the different social models, one can follow the criteria used in Boeri (2002) and Sapir (2005), which consider that a social model should satisfy the following:
Reduction in poverty.
Protection against labour market risks.
Rewards for labour participation.
Reduction in poverty
A first comparison looks at the reduction in inequality (as measured by the Gini index) achieved through taxes and transfers, that is, the extent to which each social model reduces poverty relative to the distribution of market incomes before taxes and transfers. The level of social expenditure is an indicator of the capacity of each model to reduce poverty: a bigger share of expenditure is in general associated with a higher reduction in poverty. Nevertheless, another aspect that should be taken into account is the efficiency of this poverty reduction, that is, whether a comparable reduction in poverty is obtained with a lower share of expenditure.
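As a rough illustration of the calculation behind such comparisons, the sketch below computes a Gini index before and after a stylised tax-and-transfer scheme; the income figures, tax rates and flat per-head transfer are invented, whereas actual studies work from survey microdata.

```python
# Illustrative sketch only: the income figures, tax rates and flat per-head
# transfer are invented; real studies use survey microdata.
def gini(incomes):
    """Gini index: mean absolute difference between incomes / (2 * mean income)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

market_incomes = [5, 10, 20, 40, 75]           # pre-tax, pre-transfer incomes
tax_rates = [0.00, 0.05, 0.15, 0.25, 0.35]     # progressive tax schedule
revenue = sum(x * t for x, t in zip(market_incomes, tax_rates))
transfer = revenue / len(market_incomes)       # redistributed equally per head
disposable = [x * (1 - t) + transfer for x, t in zip(market_incomes, tax_rates)]

before, after = gini(market_incomes), gini(disposable)
print(f"Gini before: {before:.3f}, after: {after:.3f}, reduction: {before - after:.3f}")
```

The "efficiency" question discussed above amounts to comparing this reduction with the share of national income spent on the transfers that produce it.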
On this measure, the Anglo-Saxon and Nordic models are more efficient than the Continental or Mediterranean ones. The Continental model appears to be the least efficient: given its high level of social expenditure, one would expect a higher poverty reduction than it actually attains. When poverty reduction is plotted against social expenditure, the Anglo-Saxon model lies above the average relationship, whereas the Continental model lies below it.
Protection against labour market risks
Protection against labour market risks is generally assured by two means:
Regulation of the labour market by means of employment protection legislation which basically increases firing costs and severance payments for the employers. This is generally referred to as providing "employment" protection.
Unemployment benefits which are commonly financed with taxes or mandatory public insurances to the employees and employers. This is generally referred to as providing protection to the "worker" as opposed to "employment".
There is a clear trade-off between these two types of labour market instrument: countries that rely heavily on one tend to make little use of the other. Once again, different European countries have chosen different positions in their use of these two mechanisms of labour market protection. These differences can be summarised as follows:
The Mediterranean countries have chosen a higher "employment" protection while a very low share of their unemployed workers receives unemployment benefits.
The Nordic countries have chosen to protect to a lesser extent "employment" and instead, an important share of their unemployed workers receives benefits.
The continental countries have a higher level of both mechanisms than the European average, although by a small margin.
The Anglo-Saxon countries base their protection on unemployment benefits and a low level of employment protection.
Evaluating the different choices is a hard task. In general, there is consensus among economists that employment protection generates inefficiencies inside firms. By contrast, there is no such consensus on the question of whether employment protection generates a higher level of unemployment.
Rewards for labour participation
Sapir (2005) and Boeri (2002) propose looking at the employment-to-population ratio as the best way to analyse the incentives and rewards for employment in each social model. The Lisbon Strategy initiated in 2001 established that the members of the EU should attain a 70% employment rate by 2010.
The countries following the Nordic and Anglo-Saxon models have the highest employment rates, whereas the Continental and Mediterranean countries have not attained the Lisbon Strategy target.
Conclusion
Sapir (2005) proposes as a general mean to evaluate the different social models, the following two criteria:
Efficiency, that is, whether the model provides the incentives to achieve the largest possible number of employed persons, i.e. the highest employment rate.
Equity, that is, whether the social model achieves a relatively low poverty risk.
According to these two criteria, the best performance is achieved by the Nordic model. The Continental model should improve its efficiency, whereas the Anglo-Saxon model should improve its equity. The Mediterranean model under-performs on both criteria.
Some economists consider that, between the Continental and the Anglo-Saxon models, the latter should be preferred given its better results in employment, which make it more sustainable in the long term, whereas the equity level depends on the preferences of each country (Sapir, 2005). Other economists argue that the Continental model cannot be considered worse than the Anglo-Saxon one, given that it too is the result of the preferences of the countries that support it (Fitoussi et al., 2000; Blanchard, 2004). This last argument, however, can be used to justify any policy.
See also
Disability pension
Social insurance
Social protection
Social security
Social welfare provision
Welfare state
Location-specific:
Tax rates of Europe
US welfare state
References
Bibliography
Blanchard, O. (2004): The Economic Future of Europe. NBER Economic Papers.
Boeri, T. (2002): Let Social Policy Models Compete and Europe Will Win, conference in the John F. Kennedy School of Government, Harvard University, 11–12 April 2002.
Sapir, A. (2005): Globalisation and the Reform of European Social Models, Bruegel, Brussels.
Fitoussi J.P. and O. Passet (2000): Reformes structurelles et politiques macroéconomiques: les enseignements des «modèles» de pays, en Reduction du chômage : les réussites en Europe. Rapport du Conseil d'Analyse Economique, n.23, Paris, La documentation Française, pp. 11–96.
Busch, Klaus: The Corridor Model – Relaunched, edited by Friedrich-Ebert-Stiftung, International Policy Analysis, Berlin 2011.
Economy of Europe
Society of Europe
Eurozone crisis
Political-economic models
Social systems
Welfare in Europe
Welfare state
Cultural diversity
Cultural diversity is the quality of diverse or different cultures, as opposed to monoculture. It has a variety of meanings in different contexts, sometimes applying to cultural products like art works in museums or entertainment available online, and sometimes applying to the variety of human cultures or traditions in a specific region, or in the world as a whole. It can also refer to the inclusion of different cultural perspectives in an organization or society.
Cultural diversity can be affected by political factors such as censorship or the protection of the rights of artists, and by economic factors such as free trade or protectionism in the market for cultural goods. Since the middle of the 20th century, there has been a concerted international effort to protect cultural diversity, involving the United Nations Educational, Scientific and Cultural Organization (UNESCO) and its member states. This involves action at international, national, and local levels. Cultural diversity can also be promoted by individual citizens in the ways they choose to express or experience culture.
Characteristics
In the context of national and international efforts to promote or preserve cultural diversity, the term applies to five overlapping domains:
economic: the availability of diverse cultural goods or services,
artistic: the variety of artistic genres and styles that coexist,
participatory: the participation of diverse ethnic groups in a nation's culture,
heritage: the diversity of cultural traditions that are represented in heritage institutions such as museums, and
multicultural: the variety of ethnic groups and their traditions that are visible in a country.
Of these five, the economic meaning has come to dominate in international negotiations. Nations have principally looked to protect cultural diversity by strengthening the ability of their domestic cultural industries to sell goods or services. Since the 1990s, UNESCO has mainly used "cultural diversity" for the international aspects of diversity, preferring the term "cultural pluralism" for diversity within a country.
Governments and international bodies use "cultural diversity" in both a broad and a narrow sense. The broad meaning takes its inspiration from anthropology. It includes lifestyles, value systems, traditions, and beliefs in addition to creative works. It emphasises an ongoing process of interaction and dialogue between cultures. This meaning has been promoted to the international community by UNESCO, since the 2001 Universal Declaration on Cultural Diversity. In practice, governments use a narrower, more traditional, meaning that focuses on the economic domain mentioned above.
In the international legal context, cultural diversity has been described as analogous to biodiversity. The General Conference of UNESCO took this position in 2001, asserting in Article 1 of the Universal Declaration on Cultural Diversity that "cultural diversity is as necessary for humankind as biodiversity is for nature." The authors John Cavanagh and Jerry Mander took this analogy further, describing cultural diversity as "a sort of cultural gene pool to spur innovation toward ever higher levels of social, intellectual and spiritual accomplishment."
Quantification
Cultural diversity is difficult to quantify. One measure of diversity is the number of identifiable cultures. The United Nations Department of Economic and Social Affairs reports that, although their numbers are relatively small, indigenous peoples account for 5,000 distinct cultures and thus the majority of the world's cultural diversity.
Another aspect of cultural diversity is measured by counting the number of languages spoken in a region or in the world as a whole. By this measure, the world's cultural diversity is rapidly declining. Research carried out in the 1990s by David Crystal suggested that, at that time, on average one language was falling into disuse every two weeks. He calculated that if that rate of language death were to continue, then by the year 2100 more than 90% of the languages currently spoken in the world would have gone extinct.
In 2003, James Fearon of Stanford University published, in the Journal of Economic Growth, a list of countries based on the diversity of ethnicities, languages, and religions.
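Analyses of this kind commonly summarise diversity with a fractionalization-style index: the probability that two randomly chosen individuals belong to different ethnic, linguistic or religious groups. The sketch below computes such an index from group population shares; the shares are invented, and the formula is a simplification of the measures actually used in studies like Fearon's.

```python
# Illustrative sketch only: the group shares are invented, and this index is a
# simplification of the measures used in studies such as Fearon's.
def fractionalization(shares):
    """Probability that two randomly chosen people belong to different groups."""
    return 1 - sum(p * p for p in shares)

print(fractionalization([1.0]))            # single group -> 0.0
print(fractionalization([0.5, 0.3, 0.2]))  # 1 - (0.25 + 0.09 + 0.04) ≈ 0.62
```

Separate indices can be computed for ethnic, linguistic and religious group shares, giving each country a set of scores that can be compared or ranked.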
International legal context
At the international level, the notion of cultural diversity has been defended by UNESCO since its founding in 1945, through a succession of declarations and legal instruments.
Many of the international legal agreements addressing cultural diversity were focused on intellectual property rights, and thus on tangible cultural expressions that can be bought or sold. The World Heritage List, established in 1972 by UNESCO, mainly listed architectural features and monuments. In the late 20th century, the diplomatic community recognised a need to protect intangible cultural heritage: the traditions, social structures, and skills that support creative expression. International efforts to define and protect this aspect of culture began with the 1989 UNESCO Recommendation on the Safeguarding of Traditional Culture and Folklore. UNESCO's Proclamation of Masterpieces of the Oral and Intangible Heritage of Humanity began in 2001, highlighting specific masterpieces to promote the responsibility of nations to protect intangible cultural heritage. Further proclamations were added in 2003 and 2005, bringing the total number of masterpieces to ninety. In 2001, UNESCO also hosted expert meetings to create a definition of intangible cultural heritage and a more legally binding treaty to protect it, resulting in the Convention for the Safeguarding of the Intangible Cultural Heritage. This was passed in 2003 and came into force in 2006. One result of this convention was the 2008 creation of UNESCO Representative List of Intangible Heritage, which incorporated the masterpieces from the 2001, 2003, and 2005 proclamations.
The first international instrument enshrining the value of cultural diversity and intercultural dialogue was the UNESCO Universal Declaration on Cultural Diversity, adopted unanimously in 2001. It calls on nations and institutions to work together for the preservation of culture in all its forms, and for policies that help to share ideas across cultures and inspire new forms of creativity. UNESCO no longer interpreted "culture" in terms of artistic masterpieces. With the Universal Declaration, it adopted a more expansive understanding based on anthropology. This defined cultural diversity as "the set of distinctive spiritual, material, intellectual, and emotional features of society or a social group", including lifestyles, value systems, traditions, and beliefs. The twelve articles of the Universal Declaration were published with an action plan for ways to promote cultural diversity. This action plan connected cultural diversity explicitly to human rights including freedom of expression, freedom of movement, and protection of indigenous knowledge. The declaration identifies cultural diversity as a source of innovation and creativity, as well as a driver of both economic development and personal development. UNESCO made a submission to a 2002 UN report on Human Rights and Cultural Diversity, quoting part of the declaration to emphasise that cultural diversity must not be used to infringe the rights of minorities and that cultural diversity requires the protection of individual freedoms.
Citing the Universal Declaration, the United Nations General Assembly established the World Day for Cultural Diversity for Dialogue and Development in December 2002. This continues to be celebrated on May 21 each year.
The Convention for the Safeguarding of the Intangible Cultural Heritage drew attention to increasing cultural homogenization by economic globalization and motivated UNESCO to negotiate a treaty protecting cultural diversity. The resulting Convention on the Protection and Promotion of the Diversity of Cultural Expressions (the "2005 Convention") was adopted in October 2005. This was the first international treaty to establish rights and obligations specifically relating to culture. The convention builds on the 2001 declaration by naming linguistic diversity as a fundamental part of cultural diversity and stating that cultural diversity depends on the free flow of ideas. To date, 151 signatory states, as well as the European Union, have registered their ratification of the convention, or a legally equivalent process.
The 2005 Convention created an International Fund for Cultural Diversity (IFCD), funded by voluntary contributions. This makes funding available to developing countries that are parties to the convention for specific activities that develop their cultural policies and cultural industries. As of April 2023, UNESCO reports that 140 projects in 69 developing countries have been carried out with funding from the IFCD.
Factors
Cultural policy scholar Johnathan Vickery has observed that cultural diversity, like biological diversity, is continually under threat from various factors. Cultural diversity, linguistic diversity and species diversity show a partially comparable pattern. These threats often come from other cultural expressions, as when imported entertainment undermines interest in a nation's own culture. Other examples he mentions include religious revivals and modern Western education systems. Factors that promote a country's cultural diversity include migration and a nation's openness to discussing and celebrating cultural differences (which is itself an aspect of culture).
The actions of governments, international bodies, and civil society (meaning non-governmental and cultural sector organisations) can promote or restrict cultural diversity. As part of the international effort to promote and preserve cultural diversity, the 2005 Convention established processes to monitor progress towards a favourable environment, including global reports every four years and national reports from individual states.
Imperialism and colonialism
Colonialism has frequently involved an intentional destruction of cultural diversity, when the colonising powers use education, media, and violence to replace the languages, religions, and cultural values of the colonised people with their own. This process of forced assimilation has been used many times in history, particularly by the European colonial powers from the 18th to 20th centuries, taking the form of forced conversion to the coloniser's religion, privatisation of community property, and replacement of systems of work. The protection of indigenous peoples' rights to maintain their own languages, religions, and culture has been enshrined in treaties including the 1966 International Covenant on Civil and Political Rights and the 1989 UN Convention on the Rights of the Child.
Artistic freedom
Artistic freedom, as defined by the 2005 Convention, includes the freedom of artists to work without government interference, and also the freedom of citizens to access diverse cultural content. Governments can repress these freedoms through censorship or surveillance of artists, or can choose to actively protect artists and their free expression. According to the 2017 and 2022 global reports, attacks against artists — including prosecution, imprisonment, or even killing — have increased in recent years. In 2020, 978 cases were reported around the world, compared to 771 in 2019 and 673 in 2018. Musicians are the most threatened group, especially rap musicians, whose lyrics tend to be provocative and politically challenging. While online services have provided new ways for artists to distribute images, music, and video to large audiences, they have brought their own threats to freedom in the form of censorship, surveillance, and trolling. The 2022 global report found that some countries had repealed laws restricting free expression, including blasphemy and defamation laws, but that in practice artistic freedom was not being better monitored or protected.
Mobility of artists and cultural professionals
Mobility restrictions present challenges to professionals in the cultural and creative industries, particularly those from the Global South. Artists and cultural professionals need to travel to perform for new audiences, to attend residencies, or to engage in networking. Their ability to do so depends on their country of origin: the holder of a German passport can travel to 176 countries without a visa, while an Afghan passport allows visa-free travel to only 24 countries. Travel restrictions, including difficulties in obtaining visas, often prevent artists from the Global South from participating in art biennales or film festivals, even when they have been invited to receive an award or to promote their work. The 2022 global report found that, despite governments and civil society organisations taking this inequality more seriously, concrete improvements are lacking. Thus, the ability of artists from the Global South to reach audiences in the Global North "remains extremely weak".
Governance of culture
As well as protecting free expression and free movement, governments can promote cultural diversity by recognising and enforcing the rights of artists. The working conditions of artists are affected by their rights to organise labor unions, to workplace safety, and to social security protections for times when their work does not produce income. These economic and social rights are formally recognised by the International Covenant on Economic, Social and Cultural Rights passed by the UN in 1966 and by the Recommendation concerning the Status of the Artist adopted by UNESCO in 1980. Social security in particular allows a more diverse range of citizens to take part in artistic activities, because without it, financially insecure people are discouraged from working in a field with unstable income.
Gender equality in cultural and creative industries
A gender gap persists worldwide concerning equal pay, access to funding and prices charged for creative works. Consequently, women remain under-represented in key creative roles and are outnumbered in decision-making positions. As of 2018, women made up only 34% of Ministers for Culture (compared to 24% in 2005) and only 31% of national arts program directors. Women are better represented in some cultural fields, such as arts education and training (60%) and book publishing and the press (54%), but remain a minority in audiovisual and interactive media (26%) and in design and creative services (33%). The 2022 global report found that cultural industries were increasingly making gender equality a priority, but that actual progress was slow. Though 48.1% of the work in cultural and entertainment sectors is done by women, the report concluded that they are still under-represented in leadership positions, get less public funding, and get less recognition for their work.
Trade and investment in cultural goods and services
Between 2015 and 2017, at least eight bilateral and regional free trade agreements introduced cultural clauses or lists of commitments that promote the objectives and principles of the 2005 Convention. Although negotiations of mega-regional partnership agreements have generally not promoted the objectives and principles of the 2005 Convention, some parties to the Trans-Pacific Partnership (TPP) have succeeded in introducing cultural reservations to protect and promote the diversity of cultural expressions. The growth of online digital content has increased the diversity of culture that a person can get immediate access to, but has also increased the threat to cultural diversity by making it easier for a small number of large companies to flood markets with their cultural products. Digital delivery of culture has also given a great deal of power to companies in the technology sector.
Cultural platforms
Organisations that promote access to culture can reflect diversity in what they choose to host or to exclude. Google Arts and Culture and Europeana are among the platforms that state a commitment to promoting cultural diversity. For Google Arts and Culture, diversity implies "working with communities that have historically been left out of the mainstream cultural narrative", while Europeana acknowledges that "stories told with/by cultural heritage items have not historically been representative of the population, and so we strive to share lesser-told stories from underrepresented communities."
Individual choices
Individual citizens can experience and promote cultural diversity through their own choices, including the choice to share their own culture. The "Do One Thing for Diversity and Inclusion" campaign has been run annually since 2011 by the United Nations Alliance of Civilizations (UNAOC) as a way to commemorate the World Day for Cultural Diversity. It encourages people to explore the music, literature, art, and traditions of unfamiliar cultures and to share their own culture with strangers.
The American lawyer Juliette Passer describes the UNESCO Universal Declaration on Cultural Diversity as prompting each individual to consider their own and others' diverse identities:
"We need social and educational experiences plus reflection on the experience to go beyond reliance on stereotypes. The more we interact with diverse others and mindfully reflect on the experience, the more we can improve our competency with differences."
National and local initiatives
In September 2002, the city of Porto Alegre in Brazil organized a world meeting for culture, bringing together mayors and technical directors of culture from different cities of the world, with observers from civil society. The cities of Porto Alegre and Barcelona proposed the drafting of a reference document for the development of local cultural policies, inspired by Agenda 21, created in 1992 for the environment. The resulting document, Culture 21 (Agenda 21 for culture), was designed with the aim of including cultural diversity at the local level. It was approved on May 8, 2004, during the first edition of the Universal Forum of Cultures in Barcelona.
See also
Criticism of multiculturalism
Cross-cultural communication
Cultural agility
Cultural Diversity Award (UNESCO)
Cultural safety
Foundation for Endangered Languages
Heritage Day (South Africa)
Intercultural dialogue
Intercultural relations
Melting pot
Mondialogo
Multiculturalism
Social cohesion
Social integration
Subculture
References
Further reading
External links
UNESCO: Diversity of Cultural Expressions
Cultural geography
Cultural economics
Cultural politics
Cultural concepts
UNESCO
Multiculturalism
Majority–minority relations
Pre-modern human migration
This article focuses on human migration from the Neolithic period until AD 1800. See Early human migrations for migration prior to the Neolithic, History of human migration for modern history, and human migration for contemporary migration.
Paleolithic migration prior to the end of the Last Glacial Maximum spread anatomically modern humans throughout Afro-Eurasia and to the Americas. During the Holocene climatic optimum, formerly isolated populations began to move and merge, giving rise to the pre-modern distribution of the world's major language families.
In the wake of the population movements of the Mesolithic came the Neolithic Revolution, followed by the Indo-European expansion in Eurasia and the Bantu expansion in Africa.
Population movements of the proto-historical or early historical period include the Migration period, followed by (or connected to) the Slavic, Magyar, Norse, Turkic and Mongol expansions of the medieval period.
The last world regions to be permanently settled were the Pacific Islands and the Arctic, reached during the 1st millennium AD.
Since the beginning of the Age of Exploration and the start of the Early Modern period, with its emerging colonial empires, an accelerated pace of migration on an intercontinental scale became possible.
Prehistory
Neolithic to Chalcolithic
Agriculture is believed to have first been practised around 10,000 BC in the Fertile Crescent (see Jericho). From there, it propagated as a "wave" across Europe, a view supported by archaeogenetics, reaching northern Europe some five millennia ago. Millet was an early crop, domesticated in northern China around 9000 BC (11 kya).
The earlier population of Europe were the Mesolithic hunter-gatherers. The Neolithic farmers, called the Early European Farmers (EEF), migrated from Anatolia to the Balkans in large numbers during the 7th millennium BC. During the Chalcolithic and early Bronze Age, the EEF-derived cultures of Europe were overwhelmed by successive invasions of Western Steppe Herders (WSHs) from the Pontic–Caspian steppe, who carried about 60% Eastern Hunter-Gatherer (EHG) and 40% Caucasus Hunter-Gatherer (CHG) admixture. These invasions led to EEF paternal DNA lineages in Europe being almost entirely replaced with EHG/WSH paternal DNA (mainly R1b and R1a). EEF maternal DNA (mainly haplogroup N) also declined, being supplanted by steppe lineages, suggesting the migrations involved both males and females from the steppe. EEF mtDNA however remained frequent, suggesting admixture between WSH males and EEF females.
Some evidence (including a 2016 study by Busby et al.) suggests admixture from an ancient migration from Eurasia into parts of Sub-Saharan Africa. Another study (Ramsay et al. 2018) also shows evidence that ancient Eurasians migrated into Africa and that Eurasian admixture in modern Sub-Saharan Africans ranges from 0% to 50%, varying by region and generally highest (after North Africa) in the Horn of Africa and parts of the Sahel zone.
Bronze Age
The proposed Indo-European migration has variously been dated to the end of the Neolithic (Marija Gimbutas: Corded Ware culture, Yamna culture, Kurgan culture), the early Neolithic (Colin Renfrew: Starčevo-Körös, Linearbandkeramik) and the late Palaeolithic (Marcel Otte, Paleolithic continuity theory).
The speakers of the Proto-Indo-European language are usually believed to have originated to the north of the Black Sea (today eastern Ukraine and southern Russia), and from there they gradually migrated into, and spread their language by cultural diffusion to, Anatolia, Europe, Central Asia, Iran and South Asia starting from around the end of the Neolithic period (see Kurgan hypothesis). Other theories, such as that of Colin Renfrew, posit their development much earlier, in Anatolia, and claim that Indo-European languages and culture spread as a result of the agricultural revolution in the early Neolithic.
Relatively little is known about the inhabitants of pre-Indo-European "Old Europe". The Basque language remains from that era, as do the indigenous languages of the Caucasus. The Sami are genetically distinct among the peoples of Europe, but the Sami languages, as part of the Uralic languages, spread into Europe about the same time as the Indo-European languages. However, since that period speakers of other Uralic languages such as the Finns and the Estonians have had more contact with other Europeans, thus today sharing more genes with them than the Sami. Like other Western Uralic and Baltic Finnic peoples, the Finns originate from the Volga region in what is now Russia. Their ancestors migrated to Finland in the 8th century BC.
The earliest migrations we can reconstruct from historical sources are those of the 2nd millennium BC. The Proto-Indo-Iranians began their expansion from c. 2000 BC, with the Rigveda documenting the presence of early Indo-Aryans in the Punjab from the late 2nd millennium BC, and Iranian tribes being attested in Assyrian sources as present on the Iranian plateau from the 9th century BC. In the Late Bronze Age, the Aegean and Anatolia were overrun by moving populations, summarized as the "Sea Peoples", leading to the collapse of the Hittite Empire and ushering in the Iron Age.
A major archaeogenetics study uncovered a migration into southern Britain in the Bronze Age, during the 500-year period 1300–800 BC. The newcomers were genetically most similar to ancient individuals from Gaul.
Austronesian expansion
The islands of the Pacific were populated between c. 1600 BC and AD 1000.
The Lapita people, who got their name from the archaeological site at Lapita, New Caledonia, where their characteristic pottery was first discovered, came from Austronesia, probably New Guinea. They reached the Solomon Islands around 1600 BC, and later Fiji, Samoa and Tonga. By the beginning of the 1st millennium BC, most of Polynesia was a loose web of thriving cultures who settled on the islands' coasts and lived off the sea. By 500 BC Micronesia was completely colonized; the last region of Polynesia to be reached was New Zealand, in around AD 1000.
Bantu expansion
The Bantu expansion is the major prehistoric migratory pattern that shaped the ethno-linguistic composition of Sub-Saharan Africa.
The Bantu, a branch of the Niger-Congo phylum, originated in West Africa around the Benue-Cross rivers area in southeastern Nigeria. Beginning in the 2nd millennium BC, they spread to Central Africa and later, from the 1st millennium BC onward, to southeastern Africa, spreading pastoralism and agriculture. During the 1st millennium AD, they populated Southern Africa. In the process, the Bantu languages displaced the Khoisan languages indigenous to Central and Southern Africa.
Arctic peoples
One of the last areas to be permanently settled by humans was the Arctic, reached by the Dorset culture about 4,500 years ago. The Inuit are the descendants of the Thule culture, which emerged from western Alaska around AD 1000 and gradually displaced the Dorset culture.
Proto-historical and early historical migration
The German term Landnahme ("land-taking") is sometimes used in historiography for a migration event associated with a founding legend, e.g. of the conquest of Canaan in the Hebrew Bible, the Indo-Aryan migration and expansion within India alluded to in the Rigveda, the invasion traditions in the Irish Mythological Cycle accounting for how the Gaels came to Ireland, the arrival of the Franks in Austrasia during the Migration period, the Anglo-Saxon invasion of Britain, the settlement of Iceland in the Viking Age, the Slavic migrations, the Hungarian conquest, etc.
Iron Age
The Dorian invasion of Greece led to the Greek Dark Ages. The Urartians were displaced by Armenians, and the Cimmerians and the Mushki migrated from the Caucasus into Anatolia. A Thraco-Cimmerian connection links these movements to the Proto-Celtic world of central Europe, leading to the introduction of iron to Europe and the Celtic expansion to western Europe and the British Isles around 500 BC.
Many scholars believe that the Ethiopian Semitic languages descended from the South Semitic branch, which was spoken in South Arabia. According to this theory, the ancestral speakers of these languages migrated from South Arabia to Ethiopia approximately 2,800 years ago.
Beginning around 300 BC, the Japonic-speaking Yayoi people from the Korean Peninsula entered the Japanese islands and displaced or intermingled with the original Jōmon inhabitants. The linguistic homeland of Proto-Koreans is thought to have been somewhere in southern Siberia or Manchuria, such as the Liao river area or the Amur region. Proto-Koreans arrived in the southern part of the Korean Peninsula at around 300 BC, replacing and assimilating Japonic speakers and likely causing the Yayoi migration.
Migration period
Western historians refer to the period of migrations that separated Antiquity from the Middle Ages in Europe as the Great Migrations or as the Migrations Period. This period is further divided into two phases.
The first phase, from 300 to 500, saw the movement of Germanic, Sarmatian and Hunnic tribes and ended with the settlement of these peoples in the areas of the former Western Roman Empire. (See also: Ostrogoths, Visigoths, Vandals, Burgundians, Suebi, Alamanni, Marcomanni).
The second phase, between 500 and 900, saw Slavic, Turkic and other tribes on the move, re-settling in Eastern Europe and gradually making it predominantly Slavic. Moreover, more Germanic tribes migrated within Europe during this period, including the Lombards (to Italy), and the Angles, Saxons, and Jutes (to the British Isles). See also: Avars, Bulgars, Huns, Arabs, Vikings, Varangians. The last phase of the migrations saw the coming of the Hungarians to the Pannonian plain.
German historians of the 19th century referred to these Germanic migrations as the Völkerwanderung, the migrations of the peoples.
In the 4th or 5th century, Gaelic culture was brought to Scotland by settlers from Ireland, who founded the Gaelic kingdom of Dál Riata on Scotland's west coast. Brittany was settled by Britons from Britain between the 5th and 7th centuries.
The European migration period is connected with the simultaneous Turkic expansion which at first displaced other peoples towards the west, and by High Medieval times, the Seljuk Turks themselves reached the Mediterranean.
Early medieval period
The main migration of Turkic peoples occurred between the 5th and 10th centuries, when they spread across most of Central Asia. The Turkic peoples slowly replaced and assimilated the previous Iranian-speaking locals, turning the population of Central Asia from largely Iranian into one primarily of East Asian descent.
The medieval period, although often presented as a time of limited human mobility and slow social change in the history of Europe, in fact saw widespread movement of peoples. The Vikings from Scandinavia raided all over Europe from the 8th century and settled in many places, including Normandy, the north of England, Scotland and Ireland (most of whose urban centres were founded by the Vikings). The Normans later conquered the Saxon Kingdom of England, most of Ireland, southern Italy and Sicily.
Iberia was invaded by the Umayyads in the 8th century; they founded new realms such as al-Andalus and brought with them a wave of settlers from North Africa. The invasion of North Africa by the Banu Hilal, a warlike Arab Bedouin tribe, in the 11th century was a major factor in the linguistic and cultural Arabization of the Maghreb.
The Burmese-speaking people first migrated from present-day Yunnan, China to the Irrawaddy valley in the 7th century.
The Tai peoples from Guangxi began moving south- and westwards in the 1st millennium AD, eventually spreading across the whole of mainland Southeast Asia. Tai-speaking tribes migrated southwestward along the rivers and over the lower passes into Southeast Asia, perhaps prompted by the Chinese expansion.
During the 4th–12th centuries, Han Chinese people from the central plains migrated and settled in the South of China. This gave rise to the Cantonese people and other dialect groups of Guangdong during the Tang dynasty. Genetic studies have shown that the Hakka people are largely descended from North Han Chinese. In a series of migrations, the Hakkas moved and settled in their present areas in South China.
Archaeological, historical and linguistic evidence suggests that the Nahuas originally came from the deserts of northern Mexico and migrated into central Mexico in several waves. The Aztecs were of Nahua ancestry, and the Toltecs are often thought to have been as well. After the fall of the Toltecs, a period of large population movements followed, and some Nahua groups, such as the Nicarao, arrived as far south as Nicaragua.
Late Middle Ages
Massive migrations of Germans took place into East Central and Eastern Europe, reaching their peak in the 12th to 14th centuries. These Ostsiedlung settlements in part followed territorial gains of the Holy Roman Empire, but areas beyond were settled, too.
Arvanites in Greece originate from Albanian settlers who moved south from areas in what is today southern Albania during the Middle Ages. They were the dominant population element of some regions in the south of Greece until the 19th century. Romance-speaking Vlachs were shepherds who migrated along the Carpathian Mountains with their herds.
The Navajos and Apaches are believed to have migrated from northwestern Canada and eastern Alaska, where the majority of Athabaskan speakers reside. Archaeological and historical evidence suggests the Athabaskan ancestors of the Navajos and Apaches entered the Southwest around 1400.
During the several centuries before Columbus's arrival in the Caribbean archipelago in 1492, the Caribs had in part displaced the Arawakan-speaking Taínos by warfare, extermination, and assimilation. The Taíno had settled the island chains earlier in history, migrating from the mainland. Arawak-speaking farmers replaced previous foraging populations about 2,500 years ago.
At the end of the Middle Ages, the Romani arrived in Europe from the Middle East. They originate in India, probably an offshoot of the Domba people of Northern India who had left for Sassanid Persia around the 5th century.
Turkic-speaking Yakut tribes migrated north from the Lake Baikal region to their present homeland in Yakutia in central Siberia under pressure from the Mongol tribes during the 13th to 15th centuries.
The Fula people are widely distributed, across the Sahel from the Atlantic coast to the Red Sea, particularly in West Africa. As their herds increased, small groups of Fulani herdsmen found themselves forced to move eastward and further southwards and so initiated a series of migrations throughout West Africa, which endures to the present day. By the 15th century, there was a steady flow of Fulɓe immigrants into Hausaland and, later on, Bornu.
Early Modern period
Asia
Conflict between the Hmong people of southern China and newly arrived Han settlers increased during the 18th century. This led to armed conflict and large-scale migrations well into the late 19th century, the period during which many Hmong people immigrated to Southeast Asia. The most likely homeland of the Hmong–Mien languages is in Southern China between the Yangtze and Mekong rivers, but speakers of these languages might have migrated from Central China as part of the Han Chinese expansion or as a result of exile from an original homeland by Han Chinese.
Africa
The Oromo migrations were a series of expansions in the 16th and 17th centuries by the Oromo people from southern Ethiopia into more northerly regions of Ethiopia.
Expansion of the Zulu Kingdom in South Africa in the early 19th century was a major factor in the Mfecane, a mass migration of tribes fleeing the Zulus. The Ngoni people fled as far north as Tanzania and Malawi. The Mfengu were a variety of ethnic groups that fled from the Mfecane into various Xhosa-speaking areas.
North America
The Shoshone originated in the western Great Basin and spread north and east into present-day Idaho and Wyoming. By 1500, some Eastern Shoshone had crossed the Rocky Mountains into the Great Plains; some of them moved as far south as Texas, emerging as the Comanche by 1700. After 1750, warfare and pressure pushed the Eastern Shoshone south and westward.
Siouan language speakers may have originated in the lower Mississippi River region. They were agriculturalists and may have been part of the Mound Builder civilization during the 9th–12th centuries AD. In the late 16th and early 17th centuries, Dakota-Lakota speakers lived in the upper Mississippi Region. Wars with the Ojibwe and Cree peoples pushed the Lakota west onto the Great Plains in the mid- to late-17th century.
During the 1640s and 1650s, the Beaver Wars initiated by the Iroquois forced a massive demographic shift as their western neighbors fled the violence. They sought refuge west and north of Lake Michigan.
Early Modern Europe
The migration of the Mongolic-speaking Kalmyks to the Volga in the 17th century was the last wave of the westward expansion of Central Asian nomads.
Internal European migration stepped up in the Early Modern Period. In this period, major migration within Europe included the recruiting by monarchs of landless laborers to settle depopulated or uncultivated regions and a series of forced migrations caused by religious persecution. Notable examples of this phenomenon include the expulsion of the Jews from Spain in 1492, the mass migration of Protestants from the Spanish Netherlands to the Dutch Republic after the 1580s, the expulsion of the Moriscos (descendants of former Muslims) from Spain in 1609, and the expulsion of the Huguenots from France in the 1680s. From the 14th century, the Serbs started leaving the areas of their medieval Kingdom and Empire that were overrun by the Ottoman Turks and migrated to the north, to the lands of today's Vojvodina (northern Serbia), which was ruled by the Kingdom of Hungary at that time. The Habsburg monarchs of Austria encouraged them to settle on their frontier with the Turks and provide military service by granting them free land and religious toleration. The two greatest migrations took place in 1690 and 1737. Other instances of labour recruitment include the Plantations of Ireland (the settling of Ireland with Protestant colonists from England, Scotland and Wales in the period 1560–1690) and the recruitment of Germans by Catherine the Great of Russia to settle the Volga region in the 18th century.
Colonial empires
European colonialism from the 16th to the early 20th centuries led to the imposition of European colonies in many regions of the world, particularly in the Americas, South Asia, Sub-Saharan Africa and Australia, where European languages remain either prevalent or in frequent use as administrative languages. Major human migration before the 18th century was largely state-directed. For instance, Spanish emigration to the New World was limited to settlers from Castile who were intended to act as soldiers or administrators. Mass immigration was not encouraged due to a labour shortage in Europe (Spain was the worst affected, owing to the depopulation of its core territories in the 17th century).
Europeans also tended to die of tropical diseases in the New World in this period, and for this reason England, France and Spain preferred using slaves as unpaid labor in their American possessions. Many historians attribute a change in this pattern in the 18th century to population increases in Europe.
However, in the less tropical regions of North America's east coast, large numbers of religious dissidents, mostly English Puritans, settled during the early 17th century. Spanish restrictions on emigration to Latin America were revoked and the English colonies in North America also saw a major influx of settlers attracted by cheap or free land, economic opportunity and the continued lure of religious toleration.
A period in which various early English colonies had a significant amount of self-rule prevailed from the time of the Plymouth colony's founding in 1620 through 1676, as the mother country was wracked by revolution and general instability. However, King William III decisively intervened in colonial affairs after 1688 and the English colonies gradually came more directly under royal governance, with a marked effect on the type of emigration. During the early 18th century, significant numbers of non-English seekers of greater religious and political freedom were allowed to settle within the British colonies, including Protestant German Palatines displaced by French conquest, French Huguenots disenfranchised by an end of religious tolerance, Scotch-Irish Presbyterians, Quakers who were often Welsh, as well as Presbyterian and Catholic Scottish Highlanders seeking a new start after a series of unsuccessful revolts.
The English colonists who came during this period were increasingly moved by economic necessity. Some colonies, including Georgia, were settled heavily by petty criminals and indentured servants who hoped to pay off their debts. By 1800, European emigration had transformed the demographic character of the American continent. This was due to the devastating effect of European diseases and warfare on Native American populations.
The European settlers' influence elsewhere was less pronounced; in South Asia and Africa, European settlement in this period was limited to a thin layer of administrators, traders and soldiers. Dutch-speaking settlers known as Boers arrived in southern Africa in the mid-17th century.
See also
Linguistic homeland
Nomadic pastoralism
Trans-cultural diffusion
Timeline of maritime migration and exploration
Further reading
References
Demographic history
What Is History?
What Is History? is a 1961 non-fiction book by historian E. H. Carr on historiography. It discusses history, facts, the bias of historians, science, morality, individuals and society, and moral judgements in history.
The book originated in a series of lectures given by Carr in 1961 at the University of Cambridge. The lectures were intended as a broad introduction into the subject of the theory of history and their accessibility has resulted in What is History? becoming one of the key texts in the field of historiography.
Some of Carr's ideas are contentious, particularly his relativism and his rejection of contingency as an important factor in historical analysis. His work provoked a number of responses, most notably Geoffrey Elton's The Practice of History.
Carr was in the process of revising What is History? for a second edition at the time of his death.
Structure
The book begins with Chapter 1, The Historian and His Facts; this is followed by chapters on (2) Society and the Individual, (3) History, Science and Morality, (4) Causation in History and (5) History as Progress, before finishing with a chapter (6) on The Widening Horizon. The 2001 edition includes a new introduction by R. J. Evans and material from the second edition, including an introductory note from R. W. Davies, a preface to the second edition by Carr himself, and notes from E. H. Carr's files, also by Davies.
Reception
Carr's views about the nature of historical work in What Is History? were controversial. In his 1967 book The Practice of History, Geoffrey Elton criticized Carr for his "whimsical" distinction between the "historical facts" and the "facts of the past", saying that it reflected "an extraordinarily arrogant attitude both to the past and to the place of the historian studying it". Elton praised Carr for rejecting the role of "accidents" in history, but said Carr's philosophy of history was an attempt to provide a secular version of the medieval view of history as the working of God's master plan with "Progress" playing the part of God.
British historian Hugh Trevor-Roper said Carr's dismissal of the "might-have-beens of history" reflected a fundamental lack of interest in examining historical causation. Trevor-Roper said examining possible alternative outcomes of history is not a "parlour-game", but is an essential part of historians' work. Trevor-Roper said historians could properly understand the period under study only by looking at all possible outcomes and all sides; historians who adopted Carr's perspective of only seeking to understand the winners of history and treating the outcome of a particular set of events as the only possible outcomes, were "bad historians".
In a review in 1963 in Historische Zeitschrift, Andreas Hillgruber wrote favourably of Carr's geistvoll-ironischer (wittily ironic) criticism of conservative, liberal and positivist historians. British philosopher W. H. Walsh said in a 1963 review that it is not a "fact of history" that he had toast for breakfast that day. Walsh said Carr was correct that historians did not stand above history, and were instead products of their own places and times, which in turn decided which "facts of the past" they turned into "facts of history".
British historian Richard J. Evans said What Is History? caused a revolution in British historiography in the 1960s. Australian historian Keith Windschuttle, a critic of Carr, said What Is History? is one of the most influential books written about historiography, and that very few historians working in the English language since the 1960s had not read it.
Editions
The first edition was published in 1961, with reprints in 1961, 1962 (twice), 1969, 1972, 1977 and 1982. In 1986 a posthumous second edition was published with a preface by R. W. Davies. This was reprinted in 2001 with a substantial critical introduction by Richard J. Evans.
References
External links
Reappraisal by Professor Alun Munslow
1961 non-fiction books
Books about historiography
Aryanism
Aryanism is an ideology of racial supremacy which views the supposed Aryan race as a distinct and superior racial group which is entitled to rule the rest of humanity. Initially promoted by racial theorists such as Arthur de Gobineau and Houston Stewart Chamberlain, Aryanism reached its peak of influence in Nazi Germany. In the 1930s and 40s, the regime applied the ideology with full force, sparking World War II with the 1939 invasion of Poland in pursuit of Lebensraum, or living space, for the Aryan people. The racial policies which were implemented by the Nazis during the 1930s came to a head during their conquest of much of Europe and their invasion of the Soviet Union, culminating in the industrial mass murder of six million Jews and eleven million other victims in what is now known as the Holocaust.
Background
By the late 19th century, a number of writers, such as the French anthropologist Georges Vacher de Lapouge in his book L'Aryen, argued that the supposedly superior Aryan branch of humanity could be identified biologically by using the cephalic index (a measure of head shape) and other indicators. He argued that the long-headed "dolichocephalic-blond" Europeans, characteristically found in Northern Europe, were natural leaders, destined to rule over more "brachycephalic" (short-headed) peoples. Similar theories were promoted by Arthur de Gobineau and Houston Stewart Chamberlain.
Nazi Aryanism
The ideology of Nazism was based upon the conception of the ancient Aryan race being a superior race, holding the highest position in the racial hierarchy and that the Germanic peoples were the most racially pure existing peoples of Aryan stock. The Nazi conception of the Aryan race arose from earlier proponents of a supremacist conception of the race as described by racial theorist figures such as Arthur de Gobineau and Houston Stewart Chamberlain.
Nazi racial theorist Hans F. K. Günther identified the European race as having five subtype races: Nordic, Mediterranean, Dinaric, Alpine, and East Baltic. Günther applied a Nordicist conception that Nordics were the highest in the racial hierarchy amongst these five European subtype races. In his book Rassenkunde des deutschen Volkes (1922) (Racial Science of the German People), Günther recognized Germans as being composed of all five European subtypes, but emphasized the strong Nordic heritage amongst Germans. Günther believed Slavic people to be of an Eastern race, one that was separate from Germans and Nordics, and warned against mixing German blood with Slavic blood. He defined each racial subtype according to general physical appearance and psychological qualities, including its racial soul (referring to emotional traits and religious beliefs), and provided detailed information on hair, eye, and skin colours and on facial structure. He provided photographs of Germans identified as Nordic in places like Baden, Stuttgart, Salzburg, and Schwaben, and photographs of Germans he identified as Alpine and Mediterranean types, especially in Vorarlberg, Bavaria, and the Black Forest region of Baden. Adolf Hitler read Rassenkunde des deutschen Volkes, which influenced his racial policy and resulted in Günther's Nazi-backed attainment of a position in the anthropology department at the University of Jena in 1932, where Hitler attended Günther's inaugural lecture.
Günther distinguished Aryans from Jews, and identified Jews as descending from non-European races, particularly from what he classified as the Near Asian race (Vorderasiatische Rasse), more commonly known as the Armenoid race, and said that such origins rendered Jews fundamentally different from and incompatible with Germans and most Europeans. This association of Jews with the Armenoid type had been utilized by Zionist Jews who claimed that Jews were a group within that type. Günther claimed that the Near Eastern race had descended from the Caucasus in the fifth and fourth millennia BC, and that it had expanded into Asia Minor and Mesopotamia and eventually to the west coast of the Eastern Mediterranean Sea. Aside from ascribing Near Eastern characteristics to Armenians and Jews, he ascribed them to several other contemporary peoples, including some Greeks, Turks, Syrians, and Iranians. In his work Racial Characteristics of the Jewish People, he defined the racial soul of the Near Eastern race as emphasizing a commercial spirit (Handelsgeist), and described its members as artful traders, a term that Günther said had been used by the Jewish racial theorist Samuel Weissenberg to describe contemporary Armenians, Greeks, and Jews. Günther added to that description by claiming that the Near Eastern type was composed primarily of commercially spirited and artful traders who held strong psychological manipulation skills that aided them in trade. He claimed that the Near Eastern race had been bred not so much for the conquest and exploitation of nature as it was for the conquest and exploitation of people.
Hitler's conception of the Aryan Herrenvolk (master race) explicitly excluded the vast majority of Slavs, regarding the Slavs as having dangerous Jewish and Asiatic influences. Because of this, the Nazis declared Slavs to be Untermenschen (subhumans). Exceptions were made for a small percentage of Slavs who were seen by the Nazis to be descended from German settlers and therefore fit to be Germanised and considered part of the Aryan folk or nation. Hitler described Slavs as a mass of born slaves who feel the need of a master. He declared that the Geneva Conventions were not applicable to Slavs because they were subhumans, and German soldiers were thus permitted to ignore the Geneva Conventions in World War II with regard to Slavs. Hitler called Slavs a rabbit family, meaning they were intrinsically idle and disorganized. Nazi Germany's propaganda minister Joseph Goebbels had the media speak of Slavs as primitive animals from the Siberian tundra who were like a dark wave of filth. The Nazi notion of Slavs as inferior non-Aryans was part of the agenda for creating Lebensraum (living space) for Germans and other Germanic people in eastern Europe, initiated during World War II under Generalplan Ost: millions of Germans and other Germanic settlers would be moved into conquered territories of Eastern Europe, while the original Slavic inhabitants were to be annihilated, removed, or enslaved. Nazi Germany's ally, the Independent State of Croatia, rejected the common conception that Croats were primarily a Slavic people and claimed that Croats were primarily the descendants of the Germanic Goths; the Nazi regime nevertheless continued to classify Croats as subhumans in spite of the alliance. Nazi Germany's policy towards Slavs changed in response to military manpower shortages: as a pragmatic measure, it accepted Slavs into its armed forces within the occupied territories, in spite of their being considered subhuman.
Shortly after the Nazis came to power in 1933, they passed the Law for the Restoration of the Professional Civil Service, which required all civil servants to provide proof of their Aryan ancestry and defined a non-Aryan as a person with one Jewish grandparent. In 1933, the German Interior Ministry official Albert Gorter drafted an official definition of the Aryan race for the new law which included all non-Jewish Europeans; this definition was unacceptable to the Nazis. Achim Gerke therefore revised Gorter's draft of the Civil Service Law, classifying Aryans as people tribally related to German blood. The Nuremberg race laws of 1935 classified people with German or related blood as racially acceptable.
Hitler often doubted whether the Czechs were Aryan or not; he said in his table talk, "It is enough for a Czech to grow a moustache for anyone to see, from the way the thing droops, that his origin is Mongolian." Whether Italians were Aryan enough was also questioned by Nazi racial theorists. Hitler viewed northern Italians as strongly Aryan, but not southern Italians. The Nazis viewed the downfall of the Roman Empire as being the result of the pollution of blood from racial intermixing, claiming that Italians were a hybrid of races, including black African races. Hitler even mentioned his view of the presence of Negroid blood in the Mediterranean peoples during his first meeting with Mussolini in 1934. The definition of Aryan remained in constant flux, to such an extent that the Nazis questioned whether European ethnic groups such as Finns or Hungarians were to be classified as Aryans. Hungarians were classified as tribally alien but not necessarily blood alien; in 1934 the Nazis published a pamphlet which declared Magyars (a term it did not define) to be Aryans. The following year, an article published by the Nazis admitted that there were disputes over the racial status of Hungarians. As late as 1943, there were disputes over whether Hungarians were to be classified as Aryan.
In 1942, Hitler declared that the Finns were racially related Germanic neighboring peoples, although there is no evidence to suggest that this was based on anything racial.
The idea of the Northern origins of the Aryans was particularly influential in Germany. It was widely believed that the Vedic Aryans were ethnically identical to the Goths, Vandals and other ancient Germanic peoples of the Völkerwanderung. This idea was often intertwined with antisemitic ideas. The distinctions between the Aryan and Semitic peoples were based on the aforementioned linguistic and ethnic history. A complete, highly speculative theory of Aryan history, thoroughly antisemitic in character, can be found in Alfred Rosenberg's major work, The Myth of the Twentieth Century.
Semitic peoples came to be seen as a foreign presence within Aryan societies, and proto-Nazi theorists such as Houston Stewart Chamberlain often pointed to the Semitic peoples as the cause of the conversion and destruction of social order and values, leading to the downfall of culture and civilization.
These and other ideas evolved into the Nazi use of the term Aryan race to refer to what they saw as a superior race, narrowly defined by the Nazis as identical with the Nordic race, followed by other sub-races of the Aryan race, and excluding Slavs as non-Aryan. They worked to maintain the purity of this race through eugenics programs (including anti-miscegenation legislation, compulsory sterilization of the mentally ill and the mentally deficient, and the execution of the institutionalized mentally ill as part of a euthanasia program).
Heinrich Himmler (the Reichsführer of the SS), the person ordered by Adolf Hitler to implement the Final Solution, or the Holocaust, told his personal masseur Felix Kersten that he always carried with him a copy of the ancient Aryan scripture the Bhagavad Gita, because it relieved him of guilt about what he was doing: he felt that, like the warrior Arjuna, he was simply doing his duty without attachment to his actions.
Italian Fascism and Aryanism
In a 1921 speech in Bologna, Mussolini stated that fascism was born out of a profound, perennial need of this our Aryan and Mediterranean race. In this speech Mussolini was referring to Italians as being the Mediterranean branch of the Aryan race, Aryan in the sense of people of an Indo-European language and culture. Italian Fascism emphasized that race was bound by spiritual and cultural foundations, and identified a racial hierarchy based on spiritual and cultural factors. While Italian Fascism based its conception of race on spiritual and cultural factors, Mussolini explicitly rejected the notion that biologically pure races existed, though biology was still considered a relevant factor in race.
Italian Fascism strongly rejected the common Nordicist conception of the Aryan race that idealized pure Aryans as having certain physical traits that were defined as Nordic, such as blond hair and blue eyes. The antipathy of Mussolini and other Italian Fascists to Nordicism was over the existence of what they viewed as the Mediterranean inferiority complex that they claimed had been instilled into Mediterraneans by the propagation of such theories by German and Anglo-Saxon Nordicists, who viewed Mediterranean peoples as racially degenerate and thus, in their view, inferior. Mussolini refused to allow Italy to return again to this inferiority complex, initially rejecting Nordicism. However, traditional Nordicist claims that Mediterraneans were degenerate because they had a darker colour of skin than Nordics had long been rebuked in anthropology through the depigmentation theory, which claimed that lighter-skinned peoples had been depigmented from a darker skin; this theory has since become a widely accepted view in anthropology. Anthropologist Carleton S. Coon in his work The Races of Europe (1939) subscribed to the depigmentation theory, claiming that the Nordic race's light-coloured skin was the result of depigmentation from their ancestors of the Mediterranean race.
In the early 1930s, with the rise to power of the Nazi Party in Germany and with dictator Adolf Hitler's emphasis on a Nordicist conception of the Aryan race, strong tensions arose between the Fascists and the Nazis over racial issues. In 1934, in the aftermath of Austrian Nazis killing Austrian Chancellor Engelbert Dollfuss, an ally of Italy, Mussolini became enraged and responded by angrily denouncing Nazism. Mussolini rebuked Nazism's Nordicism, claiming that the Nazis' emphasis on a common Nordic Germanic race was absurd, saying, "A Germanic race does not exist. We repeat. Does not exist. Scientists say so. Hitler says so." The fact that Germans were not purely Nordic was indeed acknowledged by the prominent Nazi racial theorist Hans F. K. Günther in his book Rassenkunde des deutschen Volkes (1922) (Racial Science of the German People), where Günther recognized Germans as being composed of five Aryan subtype races: Nordic, Mediterranean, Dinaric, Alpine, and East Baltic, while asserting that the Nordics were the highest in a racial hierarchy of the five subtypes.
By 1936, the tensions between Fascist Italy and Nazi Germany had reduced and relations became more amicable. In 1936, Mussolini decided to launch a racial programme in Italy, and was interested in the racial studies being conducted by Giulio Cogni. Cogni was a Nordicist but did not equate Nordic identity with Germanic identity, as was commonly done by German Nordicists. Cogni had travelled to Germany, where he had become impressed by Nazi racial theory and sought to create his own version of racial theory. On 11 September 1936, Cogni sent Mussolini a copy of his newly published book Il Razzismo (1936). Cogni declared the racial affinity of the Mediterranean and Nordic racial subtypes of the Aryan race and claimed that the intermixing of Nordic Aryans and Mediterranean Aryans in Italy produced a superior synthesis of Aryan Italians. Cogni addressed the issue of racial differences between northern and southern Italians, declaring that southern Italians were mixed between Aryan and non-Aryan races, which he claimed was most likely due to infiltration by Asiatic peoples in Roman times and later Arab invasions. As such, Cogni viewed Southern Italian Mediterraneans as being polluted with orientalizing tendencies. He would later change his idea and claim that Nordics and Southern Italians were closely related groups, both racially and spiritually, and that they were generally responsible for what is best in European civilization. Initially Mussolini was not impressed with Cogni's work; however, Cogni's ideas entered into official Fascist racial policy several years later.
In 1938 Mussolini was concerned that if Italian Fascism did not recognize Nordic heritage within Italians, the Mediterranean inferiority complex would return to Italian society. Therefore, in summer 1938, the Fascist government officially recognized Italians as having Nordic heritage and being of Nordic-Mediterranean descent. In a meeting with PNF members in June 1938, Mussolini identified himself as Nordic and declared that the previous policy of focusing on Mediterraneanism was to be replaced by a focus on Aryanism.
The Fascist regime began publication of the racialist magazine La Difesa della Razza in 1938. The Nordicist racial theorist Guido Landra took a major role in the early work of La Difesa, and published the Manifesto of Racial Scientists in the magazine in 1938. The Manifesto received substantial criticism, including of its assertion that Italians were a pure race, which was viewed as absurd. La Difesa published other theories that described a long-term Nordic Aryan presence amongst Italians, such as the theory that Nordic Aryans arrived in Italy in the Eneolithic age. Many of the writers took up the traditional Nordicist claim that the decline and fall of the Roman Empire was due to the arrival of Semitic immigrants. La Difesa's writers were divided in their claims describing how Italians extricated themselves from Semitic influence.
The Nordicist direction of Fascist racial policy was challenged in 1938 by a resurgence of the Mediterraneanist faction in the PNF. By 1939, the Mediterraneanists were advocating a nativist racial theory that rejected ascribing the achievements of the Italian people to Nordic peoples. This nativist racial policy was prominently promoted by Ugo Rellini. Rellini rejected the notion of large-scale invasions of Italy by Nordic Aryans in the Eneolithic age, and claimed that Italians were an indigenous people descended from the Cro-Magnons. Rellini claimed that Mediterranean and Nordic peoples arrived later and peacefully intermixed in small numbers with the indigenous Italian population.
In 1941 the PNF's Mediterraneanists, through the influence of Giacomo Acerbo, put forward a comprehensive definition of the Italian race. However, these efforts were challenged by Mussolini's endorsement of Nordicist figures, with the appointment of the staunch spiritual Nordicist Alberto Luchini as head of Italy's Racial Office in May 1941, as well as by Mussolini becoming interested in Julius Evola's spiritual Nordicism in late 1941. Acerbo and the Mediterraneanists in his High Council on Demography and Race sought to bring the regime back to supporting Mediterraneanism by thoroughly denouncing the pro-Nordicist Manifesto of the Racial Scientists. The Council recognized Aryans as a linguistically based group, and condemned the Manifesto for denying the influence of pre-Aryan civilization on modern Italy, saying that the Manifesto constitutes an unjustifiable and undemonstrable negation of the anthropological, ethnological, and archaeological discoveries that have occurred and are occurring in our country. Furthermore, the Council denounced the Manifesto for implicitly crediting Germanic invaders of Italy, in the guise of the Lombards, with having a formative influence on the Italian race to a degree disproportionate to the number of invaders and to their biological predominance. The Council claimed that the obvious superiority of the ancient Greeks and Romans in comparison with the ancient Germanic tribes made it inconceivable that Italian culture owed a debt to ancient Aryan Germans. The Council also denounced the Manifesto's Nordicist attitude towards Mediterraneans, which it claimed considered them as slaves and was a repudiation of the entire Italian civilization.
Neo-Nazism and Aryanism
Since the military defeat of Nazi Germany by the Allies in 1945, some neo-Nazis have developed a more inclusive definition of "Aryan", claiming that the peoples of Western Europe are the closest descendants of the ancient Aryans, with Nordic and Germanic peoples being the most "racially pure."
According to Nicholas Goodrick-Clarke, many neo-Nazis want to establish an autocratic state modeled after Nazi Germany to be called the Western Imperium. It is believed that this proposed state would be able to attain world domination by combining the nuclear arsenals of the four major Aryan world powers, the United States, the United Kingdom, France, and Russia under a single military command.
This proposed state would be led by a Führer-like figure called the Vindex, and would include all areas inhabited by the "Aryan race", as conceived by neo-Nazis. Only those of the Aryan race would be full citizens of the state. The "Western Imperium" would embark on a vigorous and dynamic program of space exploration, followed by the creation by genetic engineering of a super race called Homo Galactica. This concept of the "Western Imperium" is based on the original concept of the Imperium as outlined in the 1947 book Imperium: The Philosophy of History and Politics by Francis Parker Yockey, as further updated, extended and refined in the early 1990s in pamphlets published by David Myatt.
See also
Ahnenpass
Aryan Games
Aryan paragraph
Aryanization
Esotericism in Germany and Austria
Invented tradition
Ariosophy
British Israelism
Christian Identity
French Israelism
Master race
Honorary Aryan
Nordicism
Root race
White supremacy
References
Informational notes
Citations
Bibliography
Scientific racism
White supremacy
Femonationalism
Femonationalism, sometimes known as feminationalism, is the association between a nationalist ideology and some feminist ideas, especially when driven by xenophobic motivations.
The term was originally proposed by the researcher Sara R. Farris to refer to the processes by which certain powers align themselves with the claims of the feminist movement in order to justify aporophobic, racist, and xenophobic positions, arguing that immigrants are sexist and that Western society is entirely egalitarian.
The main critiques of this phenomenon focus on the partial and sectarian use of the feminist movement to further ends based on social intolerance, ignoring the sexism and lack of real social equality in Western society as a whole.
See also
References
Feminism
Nationalism and gender
Political movements
Racism
Women's studies
Xenophobia
Ancient technology
Ancient technology encompasses the advances in engineering that arose during the growth of the ancient civilizations. These advances in the history of technology stimulated societies to adopt new ways of living and governance.
This article covers the advances in technology and the development of several engineering sciences in historic times before the Middle Ages, which began after the fall of the Western Roman Empire in AD 476, the death of Justinian I in the 6th century, the coming of Islam in the 7th century, or the rise of Charlemagne in the 8th century. For technologies developed in medieval societies, see Medieval technology and Inventions in medieval Islam.
Ancient civilizations
Africa
Technology in Africa has a history stretching back to the beginning of the human species, to the first evidence of tool use by hominid ancestors in the areas of Africa where humans are believed to have evolved. Africa saw the advent of some of the earliest ironworking technology in the Aïr Mountains region of what is today Niger and the erection of some of the world's oldest monuments, pyramids, and towers in Egypt, Nubia, and North Africa. In Nubia and ancient Kush, glazed quartzite and building in brick were developed to a greater extent than in Egypt. Parts of the East African Swahili Coast saw the production of some of the world's oldest carbon steel, made in high-temperature blast furnaces by the Haya people of Tanzania.
Mesopotamia
The Mesopotamians were among the first people in the world to enter the Bronze Age. Early on they used copper, bronze and gold, and later they used iron. Palaces were decorated with hundreds of kilograms of these very expensive metals. Copper, bronze, and iron were also used for armor as well as for different weapons such as swords, daggers, spears, and maces.
Perhaps the most important advance made by the Mesopotamians was the invention of writing by the Sumerians. With the invention of writing came some of the earliest recorded law codes, such as the Code of Hammurabi, as well as one of the first major works of literature, the Epic of Gilgamesh.
Several of the six classic simple machines were invented in Mesopotamia. Mesopotamians have been credited with the invention of the wheel. The wheel and axle mechanism first appeared with the potter's wheel, invented in Mesopotamia (modern Iraq) during the 5th millennium BC. This led to the invention of the wheeled vehicle in Mesopotamia during the early 4th millennium BC. Depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk are dated between 3700 and 3500 BC. The lever was used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
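To illustrate why the lever mattered for a device like the shadoof, the sketch below applies the standard law of the lever; the helper function and the numbers are illustrative assumptions, not figures from this article.

```python
# Law of the lever: effort x effort_arm = load x load_arm (balanced moments).
# Illustrative sketch for a shadoof-like water sweep; all values are assumed.

def effort_needed(load_kg, load_arm_m, effort_arm_m):
    """Return the force (in kg-equivalent) needed at the long arm to balance the load."""
    return load_kg * load_arm_m / effort_arm_m

# A 20 kg bucket of water hung 1 m from the pivot, worked from a 3 m arm:
print(f"{effort_needed(20, 1.0, 3.0):.1f} kg-equivalent")  # ~6.7, before any counterweight helps
```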
The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911-609 BC). According to the Assyriologist Stephanie Dalley, the earliest pump was the screw pump, first used by Sennacherib, King of Assyria, for the water systems at the Hanging Gardens of Babylon and Nineveh in the 7th century BC. This attribution, however, is disputed by the historian John Peter Oleson.
The Mesopotamians used a sexagesimal number system with base 60 (just as we use base 10). They divided time by 60s, including the 60-second minute and the 60-minute hour, which we still use today. They also divided the circle into 360 degrees. They had a wide knowledge of mathematics, including addition, subtraction, multiplication, division, quadratic and cubic equations, and fractions. This was important in keeping records as well as in some of their large building projects. The Mesopotamians had formulas for calculating the circumference and area of different geometric shapes such as rectangles, circles, and triangles. Some evidence suggests that they knew the Pythagorean theorem long before Pythagoras, and they may even have found an approximation of pi for calculating the circumference of a circle.
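As a minimal illustration of the base-60 place-value idea described above (the function and example values are mine, not from the article), the same positional logic we still use for hours, minutes and seconds can be written out as follows.

```python
def sexagesimal_to_decimal(digits):
    """Convert a list of base-60 digits, most significant first, into an integer.
    Example: [1, 30] -> 1*60 + 30 = 90."""
    value = 0
    for d in digits:
        value = value * 60 + d
    return value

# 2 hours, 5 minutes, 30 seconds expressed in seconds, using the same base-60 logic:
print(sexagesimal_to_decimal([2, 5, 30]))  # 7530
```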
Babylonian astronomy was able to follow the movements of the stars, planets, and the Moon, and the application of advanced mathematics allowed the movements of several planets to be predicted. By studying the phases of the Moon, the Mesopotamians created one of the first calendars. It had 12 lunar months and was the predecessor of both the Jewish and Greek calendars.
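A 12-lunar-month year falls short of the solar year, which is why lunisolar calendars descended from this tradition periodically insert an extra month; the arithmetic below is a rough modern illustration of that drift, using present-day astronomical constants rather than figures from this article.

```python
# Rough arithmetic showing why a 12-lunar-month calendar drifts against the seasons.
SYNODIC_MONTH = 29.53059   # mean days between new moons (modern value)
SOLAR_YEAR = 365.2422      # mean tropical year in days (modern value)

lunar_year = 12 * SYNODIC_MONTH           # ~354.37 days
annual_drift = SOLAR_YEAR - lunar_year    # ~10.88 days lost per year
years_per_extra_month = SYNODIC_MONTH / annual_drift  # ~2.7 years

print(f"lunar year ≈ {lunar_year:.2f} days, drift ≈ {annual_drift:.2f} days/year")
print(f"one intercalary month is needed roughly every {years_per_extra_month:.1f} years")
```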
Babylonian medicine used logic and recorded medical history to be able to diagnose and treat illnesses with various creams and pills. Mesopotamians had two kinds of medical practices, magical and physical, and would often use both practices on the same patient.
The Mesopotamians made many technological discoveries. They were the first to use the potter's wheel to make better pottery, they used irrigation to get water to their crops, they used bronze metal (and later iron metal) to make strong tools and weapons, and used looms to weave cloth from wool.
The Jerwan Aqueduct (c. 688 BC) is made with stone arches and lined with waterproof concrete.
For later technologies developed in the Mesopotamian region, now known as Iraq, see Persia below for developments under the ancient Persian Empire, and the Inventions in medieval Islam and Arab Agricultural Revolution articles for developments under the medieval Islamic Caliphates.
Egypt
The Egyptians invented and used many simple machines, such as the ramp, to aid construction processes. They were among the first to extract gold by large-scale mining using fire-setting, and the first recognisable map, the Turin papyrus, shows the plan of one such mine in Nubia.
The Egyptians are known for building pyramids centuries before the creation of modern tools. Historians and archaeologists have found evidence that the Egyptian pyramids were built using three of the six classic simple machines on which all machines are based: the inclined plane, the wedge, and the lever. These allowed the ancient Egyptians to move millions of limestone blocks, each weighing approximately 3.5 tons (7,000 lbs), into place to create structures like the Great Pyramid of Giza, which originally stood about 146 metres (481 ft) high.
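A back-of-the-envelope statics sketch (the ramp angle, friction coefficient and block mass below are assumptions, not data from this article) shows why the inclined plane helps so much with blocks of this weight.

```python
import math

# Illustrative estimate: force to drag a pyramid block up a ramp vs. lifting it straight up.
m = 3200.0                 # block mass in kg (~3.5 short tons, as mentioned above)
g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(10)   # assumed ramp angle
mu = 0.2                   # assumed sliding friction on lubricated timbers

straight_lift = m * g
ramp_pull = m * g * (math.sin(theta) + mu * math.cos(theta))

print(f"straight lift ≈ {straight_lift/1000:.1f} kN")
print(f"pull up ramp  ≈ {ramp_pull/1000:.1f} kN ({100*ramp_pull/straight_lift:.0f}% of the lift force)")
```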
Egyptian paper, made from papyrus, and pottery were mass-produced and exported throughout the Mediterranean basin. The wheel, however, did not arrive until foreign invaders introduced the chariot. They developed Mediterranean maritime technology including ships and lighthouses. Early construction techniques utilized by the Ancient Egyptians made use of bricks composed mainly of clay, sand, silt, and other minerals. These constructs would have been vital in flood control and irrigation, especially along the Nile delta.
The screw pump is the oldest positive displacement pump. The first records of a screw pump, also known as a water screw or Archimedes' screw, date back to Ancient Egypt before the 3rd century BC. The Egyptian screw, used to lift water from the Nile, was composed of tubes wound around a cylinder; as the entire unit rotates, water is lifted within the spiral tube to the higher elevation. A later screw pump design from Egypt had a spiral groove cut on the outside of a solid wooden cylinder, which was then covered by boards or sheets of metal closely fitted over the surfaces between the grooves. The screw pump was later introduced from Egypt to Greece.
For later technologies in Ptolemaic Egypt and Roman Egypt, see Ancient Greek technology and Roman technology, respectively. For later technology in medieval Arabic Egypt, see Inventions in medieval Islam and Arab Agricultural Revolution.
India
The history of science and technology in the Indian subcontinent dates back to the earliest civilizations of the world. The Indus Valley civilization yields evidence of mathematics, hydrography, metrology, metallurgy, astronomy, medicine, surgery, civil engineering and sewage collection and disposal being practiced by its inhabitants.
The Indus Valley civilization, situated in a resource-rich area (in modern Pakistan and northwestern India), is notable for its early application of city planning, sanitation technologies, and plumbing. Cities in the Indus Valley offer some of the first examples of closed gutters, public baths, and communal granaries.
The Takshashila University was an important seat of learning in the ancient world. It was the center of education for scholars from all over Asia. Many Greek, Persian and Chinese students studied here under great scholars including Kautilya, Panini, Jivaka, and Vishnu Sharma.
The ancient system of medicine in India, Ayurveda, was a significant milestone in Indian history. It mainly uses herbs as medicines, and its origins can be traced back to the Atharvaveda. The Sushruta Samhita (400 BC), attributed to Sushruta, gives details of procedures such as cataract surgery and plastic surgery.
Ancient India was also at the forefront of seafaring technology: a panel found at Mohenjo-daro depicts a sailing craft. Ship construction is vividly described in the Yukti Kalpa Taru, an ancient Indian text on shipbuilding. (The Yukti Kalpa Taru was translated and published by Prof. Aufrecht in his 'Catalogue of Sanskrit Manuscripts'.)
Indian construction and architecture, called 'Vaastu Shastra', suggests a thorough understanding of materials engineering, hydrology, and sanitation. Ancient Indian culture was also pioneering in its use of dyes, cultivating plants such as indigo for vegetable dyes and employing mineral pigments such as cinnabar. Many of the dyes were used in art and sculpture. The use of perfumes demonstrates some knowledge of chemistry, particularly distillation and purification processes.
China
The history of science and technology in China shows significant advances in science, technology, mathematics, and astronomy. The first recorded observations of comets, solar eclipses, and supernovae were made in China. Traditional Chinese medicine, acupuncture and herbal medicine were also practiced. The Four Great Inventions of China: the compass, gunpowder, papermaking, and printing were among the most important technological advances, only known in Europe by the end of the Middle Ages.
According to the British researcher Joseph Needham, the Chinese made many first-known discoveries and developments. Major technological contributions from China include early seismological detectors, matches, paper, the double-action piston pump, cast iron, the iron plough, the multi-tube seed drill, the suspension bridge, natural gas as fuel, the magnetic compass, the raised-relief map, the propeller, the crossbow, the south-pointing chariot, and gunpowder. Other Chinese discoveries and inventions from the medieval period, according to Joseph Needham's research, include block printing and movable type, phosphorescent paint, and the spinning wheel.
The solid-fuel rocket was invented in China about 1150 AD, nearly 200 years after the invention of black powder (which acted as the rocket's fuel). At the same time that the Age of Exploration was occurring in the West, the Chinese emperors of the Ming Dynasty also sent ships, some reaching Africa. But the enterprises were not further funded, halting further exploration and development. When Ferdinand Magellan's ships reached Brunei in 1521, they found a wealthy city that had been fortified by Chinese engineers, and protected by a breakwater. Antonio Pigafetta noted that much of the technology of Brunei was equal to Western technology of the time. Also, there were more cannons in Brunei than on Magellan's ships, and the Chinese merchants to the Brunei court had sold them spectacles and porcelain, which were rarities in Europe.
Persian Empire
The qanat, a water management system used for irrigation, originated in Iran before the Achaemenid period of Persia. The oldest and largest known qanat is in the Iranian city of Gonabad, which, after 2,700 years, still provides drinking and agricultural water to nearly 40,000 people. Qanats were designed to carry water from underground sources to desert areas to support agriculture and growing populations. The places where the water is gathered are protected so that their value is recognized and they continue to function, and the farmlands that depend on this water are likewise conserved so that the qanat system can keep running for a very long time.
The Yakhchāl is an ancient Persian refrigeration structure that was used to store ice, and occasionally food, during the hot summer months in what is now Iran. The structure consists of an extensive below-ground storage space beneath a large above-ground dome. It was kept cool throughout the summer by a system of windcatchers and qanats, and its structure was specially designed for effective insulation, the thickness and distinctive composition of the walls making them impermeable to water and resistant to heat.
Windcatchers also played a significant role in ancient Persian history. These structures harness high-speed winds to cool buildings naturally, which was essential in the hot, dry climate of Yazd and helped make the city livable.
According to Chris Soelberg and Julie Rich, researchers at a university in Utah, windcatchers have been found as far back as 3,300 years ago in Egypt, but the technology originated in Iran.
The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BC, in the regions of Mesopotamia (Iraq) and Persia (Iran). This pioneering use of water power constituted the first human-devised motive force not to rely on muscle power (apart from the sail).
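To give a sense of scale, the sketch below estimates the power of a small water wheel from the usual hydropower relation P = ρgQhη; the flow, head and efficiency are assumed illustrative figures, not values from this article.

```python
# Back-of-the-envelope power of a small overshot water wheel: P = rho * g * Q * h * eta.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 0.05       # assumed flow, m^3/s (50 litres per second)
h = 3.0        # assumed head (roughly the wheel diameter), metres
eta = 0.6      # assumed overall efficiency

power_watts = rho * g * Q * h * eta
print(f"≈ {power_watts:.0f} W")  # roughly 0.9 kW, far more than a labourer can sustain
```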
In the 7th century AD, Persians in Afghanistan developed the first practical windmills. For later medieval technologies developed in Islamic Persia, see Inventions in medieval Islam and Arab Agricultural Revolution.
The ancient Persians also developed advanced mining techniques, particularly for military purposes. During sieges, they used tunneling to undermine city walls, weakening fortifications and gaining access to cities. In one documented case, the Persians employed bitumen and sulfur during these operations to create toxic fumes. By igniting these materials, they generated poisonous gases that incapacitated defenders, marking one of the earliest examples of chemical warfare. Evidence for this tactic includes the discovery of around 20 Roman soldiers' remains near a city wall, believed to have been exposed to the gases.
The Baghdad Battery is a 2,000-year-old artifact believed to have originated in ancient Persia. It consists of a clay jar containing a copper cylinder and an iron rod. When filled with an acidic liquid, the device could generate a small electrical charge. While its exact function remains debated, many scholars suggest it may have been used for electroplating, a technique for coating objects with metals such as gold. This artifact suggests that the Persians had some understanding of electrochemical processes long before the modern discovery of electricity.
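For context, a copper cylinder and an iron rod in an acidic electrolyte form a simple galvanic cell; the sketch below estimates its theoretical open-circuit voltage from standard electrode potentials (textbook values, not figures from this article, and real replicas typically yield somewhat less).

```python
# Theoretical open-circuit voltage of a copper-iron galvanic cell,
# using approximate standard electrode potentials (volts vs. SHE).
E_CU = +0.34   # Cu2+ + 2e- -> Cu
E_FE = -0.44   # Fe2+ + 2e- -> Fe

cell_voltage = E_CU - E_FE   # cathode potential minus anode potential
print(f"≈ {cell_voltage:.2f} V")  # about 0.78 V under standard conditions
```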
Mesoamerica and Andean Region
Lacking suitable beasts of burden and inhabiting domains often too mountainous or boggy for wheeled transport, the ancient civilizations of the Americas did not develop wheeled transport or the mechanics associated with animal power. Nevertheless, they produced advanced engineering including above ground and underground aqueducts, quake-proof masonry, artificial lakes, dykes, 'fountains,' pressurized water, road ways and complex terracing. Equally, gold-working commenced early in Peru (2000 BC), and eventually copper, tin, lead and bronze were used. Although metallurgy did not spread to Mesoamerica until the Middle Ages, it was employed here and in the Andes for sophisticated alloys and gilding. The Native Americans developed a complex understanding of the chemical properties or utility of natural substances, with the result that a majority of the world's early medicinal drugs and edible crops, many important adhesives, paints, fibres, plasters, and other useful items were the products of these civilizations. Perhaps the best-known Mesoamerican invention was rubber, which was used to create rubber bands, rubber bindings, balls, syringes, 'raincoats,' boots, and waterproof insulation on containers and flasks.
Hellenistic Mediterranean
The Hellenistic period of Mediterranean history began in the 4th century BC with Alexander's conquests, which led to the emergence of a Hellenistic civilization representing a synthesis of Greek and Near-Eastern cultures in the Eastern Mediterranean region, including the Balkans, Levant and Egypt. With Ptolemaic Egypt as its intellectual center and Greek as the lingua franca, the Hellenistic civilization included Greek, Egyptian, Jewish, Persian and Phoenician scholars and engineers who wrote in Greek.
Hellenistic technology made significant progress from the 4th century BC, continuing up to and including the Roman period. Inventions credited to the ancient Greeks include bronze casting techniques, the water organ (hydraulis), and the torsion siege engine. Many of these inventions occurred late in the Hellenistic period, often inspired by the need to improve weapons and tactics in war.
Hellenistic engineers of the Eastern Mediterranean were responsible for a number of inventions and improvements to existing technology. Archimedes invented several machines. Hellenistic engineers often combined scientific research with the development of new technologies. Technologies invented by Hellenistic engineers include the ballistae, the piston pump, and primitive analog computers like the Antikythera mechanism. Hellenistic architects built domes, and were the first to explore the Golden ratio and its relationship with geometry and architecture.
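As a numerical aside on why geared calendrical devices such as the Antikythera mechanism could work at all, the sketch below checks the Metonic relation (235 synodic months ≈ 19 solar years) that such devices are generally described as exploiting; the astronomical constants are modern values, not taken from this article.

```python
# The Metonic relation behind many ancient calendrical gear trains:
# 235 synodic months is almost exactly 19 solar years.
SYNODIC_MONTH = 29.53059   # mean days between new moons (modern value)
SOLAR_YEAR = 365.2422      # mean tropical year in days (modern value)

months_days = 235 * SYNODIC_MONTH   # ≈ 6939.69 days
years_days = 19 * SOLAR_YEAR        # ≈ 6939.60 days

print(f"235 months = {months_days:.2f} d, 19 years = {years_days:.2f} d")
print(f"mismatch ≈ {abs(months_days - years_days):.2f} days over 19 years")
```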
Other Hellenistic innovations include torsion catapults, pneumatic catapults, crossbows, rutways, organs, the keyboard mechanism, differential gears, showers, dry docks, diving bells, odometer and astrolabes. In architecture, Hellenistic engineers constructed monumental lighthouses such as the Pharos and devised central heating systems. The Tunnel of Eupalinos is the earliest tunnel which has been excavated with a scientific approach from both ends.
Automata such as automatic doors and other ingenious devices were built by Hellenistic engineers such as Ctesibius and Philo of Byzantium. Greek technological treatises were scrupulously studied and advanced by later Byzantine, Arabic and Latin scholars, and provided some of the foundations for further technological advances in these civilizations.
Roman Empire
The Roman Empire expanded from Italia across the entire Mediterranean region between the 1st century BC and 1st century AD. Its most advanced and economically productive provinces outside of Italia were the Eastern Roman provinces in the Balkans, Asia Minor, Egypt, and the Levant, with Roman Egypt in particular being the wealthiest Roman province outside of Italia.
Roman technology supported Roman civilization and made the expansion of Roman commerce and Roman military possible over nearly a thousand years. The Roman Empire had an advanced set of technology for their time. Some of the Roman technology in Europe may have been lost during the turbulent eras of Late Antiquity and the Early Middle Ages. Roman technological feats in many different areas such as civil engineering, construction materials, transport technology, and some inventions such as the mechanical reaper went unmatched until the 19th century. Romans developed an intensive and sophisticated agriculture, expanded upon existing iron working technology, created laws providing for individual ownership, advanced stonemasonry technology, advanced road-building (exceeded only in the 19th century), military engineering, civil engineering, spinning and weaving and several different machines like the Gallic reaper that helped to increase productivity in many sectors of the Roman economy. They also developed water power through building aqueducts on a grand scale, using water not just for drinking supplies but also for irrigation, powering water mills and in mining. They used drainage wheels extensively in deep underground mines, one device being the reverse overshot water-wheel. They were the first to apply hydraulic mining methods for prospecting for metal ores, and for extracting those ores from the ground when found using a method known as hushing.
Roman engineers built triumphal arches, amphitheatres, aqueducts, public baths, true arch bridges, harbours, dams, vaults and domes on a very large scale across their Empire. Notable Roman inventions include the book (codex), glass blowing and concrete. Because Rome was located on a volcanic peninsula, with sand containing suitable crystalline grains, the concrete which the Romans formulated was especially durable; some of their buildings have lasted 2,000 years, to the present day. Roman society had also carried over from Greece the design of a door lock with tumblers and springs; as with many other innovations and aspects of culture carried from Greece to Rome, the question of exactly where the design originated has become blurred over time. These mechanisms were highly sophisticated and intricate for the era.
Roman civilization was highly urbanized by pre-modern standards. Many cities of the Roman Empire had over 100,000 inhabitants with the capital Rome being the largest metropolis of antiquity. Features of Roman urban life included multistory apartment buildings called insulae, street paving, public flush toilets, glass windows and floor and wall heating. The Romans understood hydraulics and constructed fountains and waterworks, particularly aqueducts, which were the hallmark of their civilization. They exploited water power by building water mills, sometimes in series, such as the sequence found at Barbegal in southern France and suspected on the Janiculum in Rome. Some Roman baths have lasted to this day. The Romans developed many technologies which were apparently lost in the Middle Ages, and were only fully reinvented in the 19th and 20th centuries. They also left texts describing their achievements, especially Pliny the Elder, Frontinus and Vitruvius.
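To give a sense of the surveying precision that gravity-fed aqueducts demanded, the arithmetic below uses assumed figures (an illustrative channel length and gradient, not data from this article).

```python
# Illustrative aqueduct gradient arithmetic: a gravity-fed channel needs only a
# very shallow but steady fall over its whole length.
length_km = 50.0          # assumed channel length
gradient_m_per_km = 0.3   # assumed average fall, roughly 1 in 3300

total_drop_m = length_km * gradient_m_per_km
print(f"total fall over {length_km:.0f} km ≈ {total_drop_m:.1f} m")  # ≈ 15 m end to end
```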
Other less known Roman innovations include cement, boat mills, arch dams and possibly tide mills.
In Roman Egypt, Heron of Alexandria invented the aeolipile, a basic steam-powered device, and demonstrated knowledge of mechanical and pneumatic systems. He was also the first to experiment with a wind-powered mechanical device, a windwheel, and he described a vending machine. However, his inventions were primarily toys rather than practical machines.
See also
History of technology
Prehistoric technology
Medieval technology
Muslim world
Arab Agricultural Revolution
Islamic Golden Age
List of inventions in the medieval Islamic world
References
Further reading
Humphrey, J. W. (2006). Ancient technology. Greenwood guides to historic events of the ancient world. Westport, Conn: Greenwood Press.
Rojcewicz, R. (2006). The gods and technology: a reading of Heidegger. SUNY series in theology and continental thought. Albany: State University of New York Press.
Krebs, R. E., & Krebs, C. A. (2004). Groundbreaking scientific experiments, inventions, and discoveries of the ancient world. Groundbreaking scientific experiments, inventions, and discoveries through the ages. Westport, Conn: Greenwood Press.
Childress, D. H. (2000). Technology of the gods: the incredible sciences of the ancients. Kempton, Ill: Adventures Unlimited Press.
Landels, J. G. (2000). Engineering in the ancient world. Berkeley: University of California Press.
James, P., & Thorpe, N. (1995). Ancient inventions. New York: Ballantine Books.
Hodges, H. (1992). Technology in the ancient world. New York: Barnes & Noble.
National Geographic Society (U.S.). (1986). Builders of the ancient world: marvels of engineering. Washington, D.C.: The Society.
American Ceramic Society, Kingery, W. D., & Lense, E. (1985). Ancient technology to modern science. Ceramics and civilization, v. 1. Columbus, Ohio: American Ceramic Society.
Brown, M. (1966). On the theory and measurement of technological change. Cambridge: Cambridge U.P.
Forbes, R. J. (1964). Studies in ancient technology. Leiden: E.J. Brill.
External links
Social phenomenon
Social phenomena (singular: social phenomenon) are any behaviours, actions, or events that take place because of social influence, including contemporary as well as historical societal influences. They are often the result of multifaceted processes that gain ever more dimensions as they operate through individual people. Because of this, social phenomena are inherently dynamic and operate within a specific time and historical context.
Social phenomena are observable, measurable data. Psychological notions may drive them, but those notions are not directly observable; only the phenomena that express them can be observed.
See also
Phenomenological sociology
Sociological imagination
Further reading
References
Sociological terminology
Social philosophy
Phenomena
History of human thought
The history of human thought covers the history of philosophy, history of science and history of political thought and spans across the history of humanity. The academic discipline studying it is called intellectual history.
Merlin Donald has claimed that human thought has progressed through three historic stages: the episodic, the mimetic, and the mythic stages, before reaching the current stage of theoretic thinking or culture. According to him the final transition occurred with the invention of science in Ancient Greece.
Prehistoric human thought
Prehistory covers human intellectual history before the invention of writing.
The first identified cultures are from the Upper Paleolithic era, evidenced by regional patterns in artefacts such as cave art, Venus figurines, and stone tools. The Aterian culture produced symbolically constituted material culture, including some of the earliest African examples of personal ornamentation.
Origins of religion
The Natufian culture of the ancient Middle East produced zoomorphic art. The Khiamian culture which followed moved on to depicting human beings in increasingly realistic form, a shift Jacques Cauvin called a "revolution in symbols". According to him, this led to the development of religion, with the Woman and the Bull as the first sacred figures. He claims that this brought about a revolution in human thinking, with humans for the first time moving from animal or spirit worship to the worship of a supreme being, with humans clearly in a hierarchical relation to it. Another early form of religion has been identified by Marija Gimbutas as the worship of the Great Goddess, the Bird or Snake Goddess, the Vegetation Goddess, and the Male God in Old Europe.
An important innovation in religious thought was the belief in the sky god. The Aryans had a common god of the sky called Dyeus, and the Indian Dyaus, the Greek Zeus, and the Roman Jupiter were all further developments, with the Latin word for God being Deus. Any masculine sky god is often also king of the gods, taking the position of patriarch within a pantheon. Such king gods are collectively categorized as "sky father" deities, with a polarity between sky and earth often being expressed by pairing a "sky father" god with an "earth mother" goddess (pairings of a sky mother with an earth father are less frequent). A main sky goddess is often the queen of the gods. In antiquity, several sky goddesses in ancient Egypt, Mesopotamia, and the Near East were called Queen of Heaven.
Ancient thought
Axial age
The Axial Age was a period between 750 and 350 BCE during which major intellectual development happened around the world. This included the development of Chinese philosophy by Confucius, Mozi, and others; the Upanishads and Gautama Buddha in Indian philosophy; Zoroaster in Ancient Persia; the Jewish prophets Elijah, Isaiah, Jeremiah, and Deutero-Isaiah in Palestine; Ancient Greek philosophy and literature, all independently of each other.
Ancient Chinese thought
The Hundred Schools of Thought were philosophers and schools of thought that flourished in Ancient China from the 6th century to 221 BCE, an era of great cultural and intellectual expansion in China. Even though this period, known in its earlier part as the Spring and Autumn period and in its latter part as the Warring States period, was fraught with chaos and bloody battles, it is also known as the Golden Age of Chinese philosophy because a broad range of thoughts and ideas were developed and discussed freely. The thoughts and ideas discussed and refined during this period have profoundly influenced lifestyles and social consciousness up to the present day in East Asian countries. The intellectual society of this era was characterized by itinerant scholars, who were often employed by various state rulers as advisers on the methods of government, war, and diplomacy. The period ended with the rise of the Qin dynasty and the subsequent purge of dissent. The Book of Han lists ten major schools; they are:
Confucianism, which teaches that human beings are teachable, improvable and perfectible through personal and communal endeavour especially including self-cultivation and self-creation. A main idea of Confucianism is the cultivation of virtue and the development of moral perfection. Confucianism holds that one should give up one's life, if necessary, either passively or actively, for the sake of upholding the cardinal moral values of ren and yi.
Legalism. Often compared with Machiavelli, and foundational for the traditional Chinese bureaucratic empire, the Legalists examined administrative methods, emphasizing a realistic consolidation of the wealth and power of autocrat and state.
Taoism, a philosophy which emphasizes the Three Jewels of the Tao: compassion, moderation, and humility, while Taoist thought generally focuses on nature, the relationship between humanity and the cosmos; health and longevity; and wu wei (action through inaction). Harmony with the Universe, or the source thereof (Tao), is the intended result of many Taoist rules and practices.
Mohism, which advocated the idea of universal love: Mozi believed that "everyone is equal before heaven", and that people should seek to imitate heaven by engaging in the practice of collective love. His epistemology can be regarded as primitive materialist empiricism; he believed that human cognition ought to be based on one's perceptions – one's sensory experiences, such as sight and hearing – instead of imagination or internal logic, elements founded on the human capacity for abstraction. Mozi advocated frugality, condemning the Confucian emphasis on ritual and music, which he denounced as extravagant.
Naturalism, the School of Naturalists or the Yin-yang school, which synthesized the concepts of yin and yang and the Five Elements; Zou Yan is considered the founder of this school.
Agrarianism, or the School of Agrarianism, which advocated peasant utopian communalism and egalitarianism. The Agrarians believed that Chinese society should be modeled on that of the early sage king Shen Nong, a folk hero who was portrayed in Chinese literature as "working in the fields, along with everyone else, and consulting with everyone else when any decision had to be reached."
The Logicians or the School of Names, which focused on definition and logic. It is said to have parallels with that of the Ancient Greek sophists or dialecticians. The most notable Logician was Gongsun Longzi.
The School of Diplomacy or School of Vertical and Horizontal [Alliances], which focused on practical matters instead of any moral principle, so it stressed political and diplomatic tactics, and debate and lobbying skill. Scholars from this school were good orators, debaters and tacticians.
The Miscellaneous School, which integrated teachings from different schools; for instance, Lü Buwei found scholars from different schools to write a book called Lüshi Chunqiu cooperatively. This school tried to integrate the merits of various schools and avoid their perceived flaws.
The School of "Minor-talks", which was not a unique school of thought, but a philosophy constructed of all the thoughts which were discussed by and originated from normal people on the street.
Other groups included:
The School of the Military that studied strategy and the philosophy of war; Sunzi and Sun Bin were influential leaders. However, this school was not one of the "Ten Schools" defined by Hanshu.
Yangism was a form of ethical egoism founded by Yang Zhu. It was once widespread but fell to obscurity before the Han dynasty. Due to its stress on individualism, it influenced later generations of Taoists.
School of the Medical Skills, which studied medicine and health. Bian Que and Qibo were well-known scholars. Two of the earliest surviving Chinese medical works are the Huangdi Neijing and the Shanghan Lun.
Ancient Greek thought
Pre-Socratics
The earliest Greek philosophers, known as the pre-Socratics, were primarily concerned with cosmology, ontology, and mathematics. They were distinguished from "non-philosophers" insofar as they rejected mythological explanations in favor of reasoned discourse. They included various schools of thought:
The Milesian school of philosophy was founded by Thales of Miletus, regarded by Aristotle as the first philosopher, who held that all things arise from a single material substance, water. He was called the "first man of science," because he gave a naturalistic explanation of the cosmos and supported it with reasons. He was followed by Anaximander, who argued that the substratum or arche could not be water or any of the classical elements but was instead something "unlimited" or "indefinite" (in Greek, the apeiron). Anaximenes in turn held that the arche was air, although John Burnet argues that by this he meant that it was a transparent mist, the aether. Despite their varied answers, the Milesian school was united in looking for the Physis of the world.
Pythagoreanism was founded by Pythagoras and sought to reconcile religious belief and reason. He is said to have been a disciple of Anaximander and to have imbibed the cosmological concerns of the Ionians, including the idea that the cosmos is constructed of spheres, the importance of the infinite, and that air or aether is the arche of everything. Pythagoreanism also incorporated ascetic ideals, emphasizing purgation, metempsychosis, and consequently a respect for all animal life; much was made of the correspondence between mathematics and the cosmos in a musical harmony. Pythagoras believed that behind the appearance of things, there was the permanent principle of mathematics, and that the forms were based on a transcendental mathematical relation.
The Ephesian school was based on the thought of Heraclitus. Contrary to the Milesian school, which posits one stable element as the arche, Heraclitus taught that panta rhei ("everything flows"), the closest element to this eternal flux being fire. All things come to pass in accordance with Logos, which must be considered as "plan" or "formula", and "the Logos is common". He also posited a unity of opposites, expressed through dialectic, which structured this flux, such as that seeming opposites in fact are manifestations of a common substrate to good and evil itself. Heraclitus called the oppositional processes ἔρις (eris), "strife", and hypothesized that the apparently stable state of δίκη (dikê), or "justice", is the harmonic unity of these opposites.
The Eleatics' founder Parmenides of Elea cast his philosophy against those who held "it is and is not the same, and all things travel in opposite directions,"—presumably referring to Heraclitus and those who followed him. Whereas the doctrines of the Milesian school, in suggesting that the substratum could appear in a variety of different guises, implied that everything that exists is corpuscular, Parmenides argued that the first principle of being was One, indivisible, and unchanging. Being, he argued, by definition implies eternality, while only that which is can be thought; a thing which is, moreover, cannot be more or less, and so the rarefaction and condensation of the Milesians is impossible regarding Being; lastly, as movement requires that something exist apart from the thing moving (viz. the space into which it moves), the One or Being cannot move, since this would require that "space" both exist and not exist. While this doctrine is at odds with ordinary sensory experience, where things do indeed change and move, the Eleatic school followed Parmenides in denying that sense phenomena revealed the world as it actually was; instead, the only thing with Being was thought, or the question of whether something exists or not is one of whether it can be thought. In support of this, Parmenides' pupil Zeno of Elea attempted to prove that the concept of motion was absurd and as such motion did not exist. He also attacked the subsequent development of pluralism, arguing that it was incompatible with Being. His arguments are known as Zeno's paradoxes.
The Pluralist school came as the power of Parmenides' logic was such that some subsequent philosophers abandoned the monism of the Milesians, Xenophanes, Heraclitus, and Parmenides, where one thing was the arche, and adopted pluralism, such as Empedocles and Anaxagoras. There were, they said, multiple elements which were not reducible to one another and these were set in motion by love and strife (as in Empedocles) or by Mind (as in Anaxagoras). Agreeing with Parmenides that there is no coming into being or passing away, genesis or decay, they said that things appear to come into being and pass away because the elements out of which they are composed assemble or disassemble while themselves being unchanging.
This pluralist thought was taken further by Leucippus, who proposed an ontological pluralism with a cosmogony based on two main elements: the vacuum and atoms. These, by means of their inherent movement, cross the void and create the real material bodies. His theories were not well known by the time of Plato, however, and they were ultimately incorporated into the work of his student, Democritus, who founded atomic theory.
The Sophists tended to teach rhetoric as their primary vocation. Prodicus, Gorgias, Hippias, and Thrasymachus appear in various dialogues, sometimes explicitly teaching that while nature provides no ethical guidance, the guidance that the laws provide is worthless, or that nature favors those who act against the laws.
Classic period
The classic period included:
The Cynics were an ascetic sect of philosophers beginning with Antisthenes in the 4th century BC and continuing until the 5th century AD. They believed that one should live a life of Virtue in agreement with Nature. This meant rejecting all conventional desires for wealth, power, health, or celebrity, and living a life free from possessions.
The Cyrenaics were a hedonist school of philosophy founded in the fourth century BC by Aristippus, who was a student of Socrates. They held that pleasure was the supreme good, especially immediate gratifications; and that people could only know their own experiences, beyond that truth was unknowable.
Platonism is the name given to the philosophy of Plato, which was maintained and developed by his followers. The central concept was the theory of forms: the transcendent, perfect archetypes, of which objects in the everyday world are imperfect copies. The highest form was the Form of the Good, the source of being, which could be known by reason.
The Peripatetic school was the name given to the philosophers who maintained and developed the philosophy of Aristotle. They advocated examination of the world to understand the ultimate foundation of things. The goal of life was the happiness which originated from virtuous actions, which consisted in keeping the mean between the two extremes of the too much and the too little.
The Megarian school, founded by Euclides of Megara, one of the pupils of Socrates. Its ethical teachings were derived from Socrates, recognizing a single good, which was apparently combined with the Eleatic doctrine of Unity. Some of Euclides' successors developed logic to such an extent that they became a separate school, known as the Dialectical school. Their work on modal logic, logical conditionals, and propositional logic played an important role in the development of logic in antiquity.
The Eretrian school, founded by Phaedo of Elis. Like the Megarians they seem to have believed in the individuality of "the Good," the denial of the plurality of virtue, and of any real difference existing between the Good and the True. Cicero tells us that they placed all good in the mind, and in that acuteness of mind by which the truth is discerned. They denied that truth could be inferred by negative categorical propositions, and would only allow positive ones, and of these only simple ones.
Hellenistic schools of thought
The Hellenistic schools of thought included:
Academic skepticism, which maintained that knowledge of things is impossible. Ideas or notions are never true; nevertheless, there are degrees of truth-likeness, and hence degrees of belief, which allow one to act. The school was characterized by its attacks on the Stoics and on the Stoic dogma that convincing impressions led to true knowledge.
Eclecticism, a system of philosophy which adopted no single set of doctrines but selected from existing philosophical beliefs those doctrines that seemed most reasonable. Its most notable advocate was Cicero.
Epicureanism, founded by Epicurus in the 3rd century BC. It viewed the universe as being ruled by chance, with no interference from gods. It regarded absence of pain as the greatest pleasure, and advocated a simple life. It was the main rival to Stoicism until both philosophies died out in the 3rd century AD.
Hellenistic Christianity was the attempt to reconcile Christianity with Greek philosophy, beginning in the late 2nd century. Drawing particularly on Platonism and the newly emerging Neoplatonism, figures such as Clement of Alexandria sought to provide Christianity with a philosophical framework.
Hellenistic Judaism was an attempt to establish the Jewish religious tradition within the culture and language of Hellenism. Its principal representative was Philo of Alexandria.
Neoplatonism, or Plotinism, a school of religious and mystical philosophy founded by Plotinus in the 3rd century AD and based on the teachings of Plato and the other Platonists. The summit of existence was the One or the Good, the source of all things. In virtue and meditation the soul had the power to elevate itself to attain union with the One, the true function of human beings.
Neopythagoreanism, a school of philosophy reviving Pythagorean doctrines, which was prominent in the 1st and 2nd centuries AD. It was an attempt to introduce a religious element into Greek philosophy, worshipping God by living an ascetic life, ignoring bodily pleasures and all sensuous impulses, to purify the soul.
Pyrrhonism, a school of philosophical skepticism that originated with Pyrrho in the 3rd century BC, and was further advanced by Aenesidemus in the 1st century BC. Its objective is ataraxia (being mentally unperturbed), which is achieved through epoché (i.e. suspension of judgment) about non-evident matters (i.e., matters of belief).
Stoicism, founded by Zeno of Citium in the 3rd century BC. Based on the ethical ideas of the Cynics, it taught that the goal of life was to live in accordance with Nature. It advocated the development of self-control and fortitude as a means of overcoming destructive emotions.
Indian philosophy
Orthodox schools
Many Hindu intellectual traditions were classified during the medieval period of Brahmanic-Sanskritic scholasticism into a standard list of six orthodox (Astika) schools (darshanas), the "Six Philosophies", all of which accept the testimony of the Vedas.
Samkhya, the rationalism school with dualism and atheistic themes
Yoga, a school similar to Samkhya but accepts personally defined theistic themes
Nyaya, the realism school emphasizing analytics and logic
Vaisheshika, the naturalism school with atomistic themes and related to the Nyaya school
Purva Mimamsa (or simply Mimamsa), the ritualism school with Vedic exegesis and philology emphasis, and
Vedanta (also called Uttara Mimamsa), the Upanishadic tradition, with many sub-schools ranging from dualism to nondualism.
These are often coupled into three groups for both historical and conceptual reasons: Nyaya-Vaishesika, Samkhya-Yoga, and Mimamsa-Vedanta. The Vedanta school is further divided into six sub-schools: Advaita (monism/nondualism), which also includes the concept of Ajativada; Visishtadvaita (monism of the qualified whole); Dvaita (dualism); Dvaitadvaita (dualism-nondualism); Suddhadvaita; and Achintya Bheda Abheda.
Heterodox schools
Several Śramaṇic movements existed before the 6th century BCE, and these influenced both the āstika and nāstika traditions of Indian philosophy. The Śramaṇa movement gave rise to a diverse range of heterodox beliefs, ranging from accepting or denying the concept of the soul, atomism, antinomian ethics, materialism, atheism, agnosticism, and fatalism to free will, from the idealization of extreme asceticism to that of family life, and from strict ahimsa (non-violence) and vegetarianism to the permissibility of violence and meat-eating. Notable philosophies that arose from the Śramaṇic movement were Jainism, early Buddhism, Charvaka, Ajñana and Ājīvika.
Ajñana was one of the nāstika or "heterodox" schools of ancient Indian philosophy, and the ancient school of radical Indian skepticism. It was a Śramaṇa movement and a major rival of early Buddhism and Jainism. They have been recorded in Buddhist and Jain texts. They held that it was impossible to obtain knowledge of metaphysical nature or ascertain the truth value of philosophical propositions; and even if knowledge was possible, it was useless and disadvantageous for final salvation. They were sophists who specialised in refutation without propagating any positive doctrine of their own.
Jain philosophy is the oldest Indian philosophy that separates body (matter) from the soul (consciousness) completely. Jainism was established by Mahavira, the last and 24th Tirthankara. Historians date Mahavira as roughly contemporaneous with the Buddha in the 5th century BC, and accordingly the historical Parshvanatha, based on the gap of about 250 years, is placed in the 8th or 7th century BC. Jainism is a Śramaṇic religion and rejected the authority of the Vedas. However, like all Indian religions, it shares core concepts such as karma, ethical living, rebirth, samsara and moksha. Jainism places strong emphasis on asceticism, ahimsa (non-violence) and anekantavada (relativity of viewpoints) as means of spiritual liberation, ideas that influenced other Indian traditions. Jainism strongly upholds the individualistic nature of the soul and personal responsibility for one's decisions, holding that self-reliance and individual effort alone are responsible for one's liberation. According to Jain philosophy, the world (Saṃsāra) is full of hiṃsā (violence). Therefore, one should direct all one's efforts towards the attainment of the Ratnatraya: Samyak Darshan, Samyak Gnana, and Samyak Charitra, the key requisites for attaining liberation.
Buddhist philosophy is a system of thought which started with the teachings of Siddhartha Gautama, the Buddha, or "awakened one". Buddhism is founded on elements of the Śramaṇa movement, which flowered in the first half of the 1st millennium BCE, but its foundations contain novel ideas not found or accepted by other Sramana movements. Buddhism shares many philosophical views with other Indian systems, such as belief in karma – a cause-and-effect relationship, samsara – ideas about cyclic afterlife and rebirth, dharma – ideas about ethics, duties and values, impermanence of all material things and of body, and possibility of spiritual liberation (nirvana or moksha). A major departure from Hindu and Jain philosophy is the Buddhist rejection of an eternal soul (atman) in favour of anatta (non-Self).
The philosophy of Ājīvika was founded by Makkhali Gosala; it was a Śramaṇa movement and a major rival of early Buddhism and Jainism. Ājīvikas were organised renunciates who formed discrete monastic communities prone to an ascetic and simple lifestyle. Original scriptures of the Ājīvika school of philosophy may once have existed, but these are currently unavailable and probably lost. The Ājīvika school is known for its Niyati doctrine of absolute determinism (fate), the premise that there is no free will and that everything that has happened, is happening and will happen is entirely preordained and a function of cosmic principles. Ājīvika considered the karma doctrine a fallacy. Ājīvikas were atheists and rejected the authority of the Vedas, but they believed that in every living being there is an ātman, a central premise of Hinduism and Jainism.
Charvaka or Lokāyata was a philosophy of scepticism and materialism, founded in the Mauryan period. Its adherents were extremely critical of the other schools of philosophy of the time. Charvaka deemed the Vedas to be tainted by the three faults of untruth, self-contradiction, and tautology. Likewise they faulted Buddhists and Jains, mocking the concepts of liberation, reincarnation and the accumulation of merit or demerit through karma. They believed that the viewpoint of relinquishing pleasure to avoid pain was the "reasoning of fools".
Similarities between Greek and Indian thought
Several scholars have recognised parallels between the philosophy of Pythagoras and Plato and that of the Upanishads, including their ideas on sources of knowledge, concept of justice and path to salvation, and Plato's allegory of the cave. Platonic psychology with its divisions of reason, spirit and appetite, also bears resemblance to the three gunas in the Indian philosophy of Samkhya.
Various mechanisms for such a transmission of knowledge have been conjectured including Pythagoras traveling as far as India; Indian philosophers visiting Athens and meeting Socrates; Plato encountering the ideas when in exile in Syracuse; or, intermediated through Persia.
However, other scholars, such as Arthur Berriedale Keith, J. Burnet and A. R. Wadia, believe that the two systems developed independently. They note that there is no historical evidence of the philosophers of the two schools meeting, and point out significant differences in the stage of development, orientation and goals of the two philosophical systems. Wadia writes that Plato's metaphysics were rooted in this life and his primary aim was to develop an ideal state. In contrast, Upanishadic focus was the individual, the self (atman, soul), self-knowledge, and the means of an individual's moksha (freedom, liberation in this life or after-life).
Persian philosophy
Zoroastrianism, based on the teachings of Zarathustra (Zoroaster), appeared in Persia at some point between 1800 and 1700 BCE. His wisdom became the basis of the religion Zoroastrianism and generally influenced the development of the Iranian branch of Indo-Iranian philosophy. Zarathustra was the first to treat the problem of evil in philosophical terms. He is also believed to be one of the oldest monotheists in the history of religion. He espoused an ethical philosophy based on the primacy of good thoughts (andiše-e-nik), good words (goftâr-e-nik), and good deeds (kerdâr-e-nik). The works of Zoroaster and Zoroastrianism had a significant influence on Greek philosophy and Roman philosophy. Several ancient Greek writers such as Eudoxus of Cnidus and Latin writers such as Pliny the Elder praised Zoroastrian philosophy as "the most famous and most useful". Plato learnt of Zoroastrian philosophy through Eudoxus and incorporated much of it into his own Platonic realism. In the 3rd century BC, however, Colotes accused Plato's Republic of plagiarizing parts of Zoroaster's On Nature, such as the Myth of Er.
Manichaeism, founded by Mani, was influential from North Africa in the West, to China in the East. Its influence subtly continues in Western Christian thought via Saint Augustine of Hippo, who converted to Christianity from Manichaeism, which he passionately denounced in his writings, and whose writings continue to be influential among Catholic, Protestant, and Orthodox theologians. An important principle of Manichaeism was its dualistic cosmology/theology, which it shared with Mazdakism, a philosophy founded by Mazdak. Under this dualism, there were two original principles of the universe: Light, the good one; and Darkness, the evil one. These two had been mixed by a cosmic accident, and man's role in this life was through good conduct to release the parts of himself that belonged to Light. Mani saw the mixture of good and bad as a cosmic tragedy, while Mazdak viewed this in a more neutral, even optimistic way. Mazdak (d. 524/528 CE) was a proto-socialist Persian reformer who gained influence under the reign of the Sassanian king Kavadh I. He claimed to be a prophet of God, and instituted communal possessions and social welfare programs. In many ways Mazdak's teaching can be understood as a call for social revolution, and has been referred to as early "communism" or proto-socialism.
Zurvanism is characterized by its treatment of Time ("Zurvan") as the First Principle and primordial creator. According to Zaehner, Zurvanism had three schools of thought, all of which have classical Zurvanism as their foundation. Aesthetic Zurvanism, which was apparently not as popular as the materialistic kind, viewed Zurvan as undifferentiated Time which, under the influence of desire, divided into reason (a male principle) and concupiscence (a female principle). Materialist Zurvanism challenged the concept that anything could be made out of nothing, in contrast to Zoroaster's Ormuzd, who created the universe with his thought. Fatalistic Zurvanism resulted from the doctrine of limited time, with the implication that nothing could change this preordained course of the material universe and that the path of the astral bodies of the 'heavenly sphere' was representative of this preordained course. According to the Middle Persian work Menog-i Khrad: "Ohrmazd allotted happiness to man, but if man did not receive it, it was owing to the extortion of these planets."
Post-classical thought
Christianity
Early Christianity is often divided into three different branches that differ in theology and traditions, which all appeared in the 1st century AD/CE. They include Jewish Christianity, Pauline Christianity and Gnostic Christianity. All modern Christian denominations are said to have descended from the Jewish and Pauline Christianities, with Gnostic Christianity dying, or being hunted, out of existence after the early Christian era and being largely forgotten until discoveries made in the late 19th and early twentieth centuries. There are also other theories on the origin of Christianity.
The following Christian groups appeared between the beginning of the Christian religion and the First Council of Nicaea in 325.
Adamites
Arianism
Ebionites
Elcesaites
Marcionism
Nazarenes
Unlike the previously mentioned groups, a number of other early movements are considered to be related to Christian Gnosticism. Christianity has gone through many schisms (splits):
The first significant, lasting split in historic Christianity came from the Church of the East, who left following the Christological controversy over Nestorianism in 431.
Following the Council of Chalcedon in 451 over monophysitism, the next large split came with the Syriac and Coptic churches dividing themselves, with the dissenting churches becoming today's Oriental Orthodox. The Armenian Apostolic Church, whose representatives were not able to attend the council did not accept new dogmas and now is also seen as an Oriental Orthodox church.
The East–West Schism (also the Great Schism or Schism of 1054) is the break of communion since the 11th century between the Catholic Church and Eastern Orthodox Churches. The schism was the culmination of theological and political differences which had developed during the preceding centuries between Eastern and Western Christianity.
Later medieval splinter movements included:
The Cathars were a very strong movement in medieval southwestern France, but did not survive into modern times.
In northern Italy and southeastern France, Peter Waldo founded the Waldensians in the 12th century. This movement has largely been absorbed by modern-day Protestant groups.
In Bohemia, the Hussite movement, inspired by Jan Hus in the early 15th century, defied Catholic dogma and still exists to this day (known today as the Moravian Church).
Agonoclita
Apostolic Brethren
Arnoldists
Beguines and Beghards
Bogomilism
Bosnian Church
Patarines
Brethren of the Free Spirit
Donatism
Dulcinians
Friends of God
Henricans
Heresy of the Judaizers
Lollardy
Neo-Adamites
Paulicianism
Petrobrusians
Strigolniki
Tondrakians
European Middle Ages
The spread of Christianity caused major change in European thought. As "Christianity actively rejected scientific inquiry", it meant that thinkers of the time were much more interested in studying revelation than the physical world. Ambrose argued that astronomy could be forsaken, "for wherein does it assist our salvation?". Philosophy and critical thinking were also discounted, since according to Gregory of Nyssa, "The human voice was fashioned for one reason alone – to be the threshold through which the sentiments of the heart, inspired by the Holy Spirit, might be translated into the Word itself".
Carolingian Renaissance
The Carolingian Renaissance was a period of intellectual and cultural revival in the Carolingian Empire from the late eighth century to the ninth century, the first of three medieval renaissances. It occurred mostly during the reigns of the Carolingian rulers Charlemagne and Louis the Pious, and was supported by the scholars of the Carolingian court, notably Alcuin of York. For moral betterment the Carolingian Renaissance reached for models drawn from the example of the Christian Roman Empire of the 4th century. During this period there was an increase in literature, writing, the arts, architecture, jurisprudence, liturgical reforms and scriptural studies. Charlemagne's Admonitio generalis (789) and his Epistola de litteris colendis served as manifestos. The effects of this cultural revival, however, were largely limited to a small group of court literati: "it had a spectacular effect on education and culture in Francia, a debatable effect on artistic endeavors, and an immeasurable effect on what mattered most to the Carolingians, the moral regeneration of society," John Contreni observes. Carolingian scholars strove to write better Latin, to copy and preserve patristic and classical texts, and to develop a more legible, classicizing script: the Carolingian minuscule that Renaissance humanists would later take to be Roman and employ as humanist minuscule, from which early modern Italic script developed. Beyond this, the secular and ecclesiastical leaders of the Carolingian Renaissance applied rational ideas to social issues for the first time in centuries, providing a common language and writing style that allowed for communication across most of Europe. One of the primary efforts was the creation of a standardized curriculum for use at the recently created schools. Alcuin led this effort and was responsible for writing textbooks, creating word lists, and establishing the trivium and quadrivium as the basis for education. Art historian Kenneth Clark was of the view that by means of the Carolingian Renaissance, Western civilization survived by the skin of its teeth.
Ottonian Renaissance
The Ottonian Renaissance was a limited renaissance of logic, science, economy and art in central and southern Europe that accompanied the reigns of the first three emperors of the Saxon Dynasty, all named Otto: Otto I (936–973), Otto II (973–983), and Otto III (983–1002), and which in large part depended upon their patronage. Pope Sylvester II and Abbo of Fleury were leading figures in this movement. The Ottonian Renaissance began after Otto's marriage to Adelaide (951) united the kingdoms of Italy and Germany and thus brought the West closer to Byzantium. The period is sometimes extended to cover the reign of Henry II as well, and, rarely, the Salian dynasts. The term is generally confined to Imperial court culture conducted in Latin in Germany. It was shorter than the preceding Carolingian Renaissance and to a large extent a continuation of it; this has led historians such as Pierre Riché to prefer describing it as a 'third Carolingian renaissance', covering the 10th century and running over into the 11th century, with the 'first Carolingian renaissance' occurring during Charlemagne's own reign and the 'second Carolingian renaissance' happening under his successors. The Ottonian Renaissance is recognized especially in the arts and architecture, invigorated by renewed contact with Constantinople, in some revived cathedral schools, such as that of Bruno of Cologne, in the production of illuminated manuscripts from a handful of elite scriptoria, such as Quedlinburg, founded by Otto in 936, and in political ideology. The Imperial court became the center of religious and spiritual life, led by the example of women of the royal family: Matilda, the literate mother of Otto I; his sister Gerberga of Saxony; his consort Adelaide; and Empress Theophanu.
Renaissance of the 12th century
The Renaissance of the 12th century was a period of many changes at the outset of the High Middle Ages. It included social, political and economic transformations, and an intellectual revitalization of Western Europe with strong philosophical and scientific roots. For some historians these changes paved the way to later achievements such as the literary and artistic movement of the Italian Renaissance in the 15th century and the scientific developments of the 17th century.
The increased contact with the Islamic world in Spain and Sicily, the Crusades, the Reconquista, as well as increased contact with Byzantium, allowed Europeans to seek and translate the works of Hellenic and Islamic philosophers and scientists, especially the works of Aristotle. The development of medieval universities aided materially in the translation and propagation of these texts and provided a new infrastructure needed for scientific communities; indeed, the European university put many of these texts at the centre of its curriculum. The translation of texts from other cultures, especially ancient Greek works, was an important aspect of both this Twelfth-Century Renaissance and the later Renaissance (of the 15th century), the relevant difference being that Latin scholars of this earlier period focused almost entirely on translating and studying Greek and Arabic works of natural science, philosophy and mathematics, while the later Renaissance focused on literary and historical texts.
A new method of learning called scholasticism developed in the late 12th century from the rediscovery of the works of Aristotle; the works of medieval Jewish and Islamic thinkers influenced by him, notably Maimonides, Avicenna (see Avicennism) and Averroes (see Averroism); and the Christian philosophers influenced by them, most notably Albertus Magnus, Bonaventure and Abélard. Those who practiced the scholastic method believed in empiricism and in supporting Roman Catholic doctrines through secular study, reason, and logic. Other notable scholastics ("schoolmen") included Roscelin and Peter Lombard. One of the main questions during this time was the problem of universals. Prominent non-scholastics of the time included Anselm of Canterbury, Peter Damian, Bernard of Clairvaux, and the Victorines. The most famous of the scholastic practitioners was Thomas Aquinas (later declared a Doctor of the Church), who led the move away from Platonism and Augustinianism and towards Aristotelianism.
During the High Middle Ages in Europe, there was increased innovation in means of production, leading to economic growth. These innovations included the windmill, manufacturing of paper, the spinning wheel, the magnetic compass, eyeglasses, the astrolabe, and Hindu–Arabic numerals.
Islamic contributions to Medieval Europe
During the high medieval period, the Islamic world was at its cultural peak, supplying information and ideas to Europe, via Al-Andalus, Sicily and the Crusader kingdoms in the Levant. These included Latin translations of the Greek Classics and of Arabic texts in astronomy, mathematics, science, and medicine. Translation of Arabic philosophical texts into Latin "led to the transformation of almost all philosophical disciplines in the medieval Latin world", with a particularly strong influence of Muslim philosophers being felt in natural philosophy, psychology and metaphysics. Other contributions included technological and scientific innovations via the Silk Road, including Chinese inventions such as paper and gunpowder. The Islamic world also influenced other aspects of medieval European culture, partly by original innovations made during the Islamic Golden Age, including various fields such as the arts, agriculture, alchemy, music, pottery, etc.
Islamic thought
The religion of Islam, founded by Muhammad in 7th-century Arabia, incorporated ideas from Zoroastrianism, Judaism, and Christianity, specifically monotheism, the Last Judgment, and Heaven and Hell. However, it is closer to Judaism than to Christianity, since it believes in the Unity of God, and God is seen as powerful rather than loving. The doctrine of Islam is based on five pillars: Shahada, faith; Salat, prayer; Zakat, alms-giving; Sawm, fasting; and Hajj, pilgrimage. Its beliefs are collected in the Quran, composed in its final form by 933.
Starting from soon after its foundation, Islam has broken into several strands, including:
Sunni Islam, also known as Ahl as-Sunnah wa'l-Jamā'h or simply Ahl as-Sunnah, is the largest denomination of Islam. The Sunnis believe that Muhammad did not specifically appoint a successor to lead the Muslim ummah (community) before his death; however, they approve of the private election of his close companion Abu Bakr. Sunni Muslims regard the first four caliphs (Abu Bakr, Umar ibn al-Khattab, Uthman ibn Affan and Ali ibn Abi Talib) as "al-Khulafā'ur-Rāshidūn" or "The Rightly Guided Caliphs."
Shia Islam is the second-largest denomination of Islam, comprising 10–20% of the total Muslim population. In addition to believing in the authority of the Quran and the teachings of Muhammad, Shia believe that Muhammad's family, the Ahl al-Bayt (the "People of the House"), including his descendants known as Imams, have special spiritual and political authority over the community. They hold that Ali ibn Abi Talib, Muhammad's cousin and son-in-law, was the first of these Imams and the rightful successor to Muhammad, and thus reject the legitimacy of the first three Rashidun caliphs. The Shia Islamic faith is broad and includes many different groups. There are various Shia theological beliefs, schools of jurisprudence, philosophical beliefs, and spiritual movements:
The Twelvers believe in twelve Imams and are the only school to comply with Hadith of the Twelve Successors, where Muhammad stated that he would have twelve successors.
Ismailism, including the Nizārī, Sevener, Mustaali, Dawoodi Bohra, Hebtiahs Bohra, Sulaimani Bohra and Alavi Bohra sub-denominations.
The Zaidiyyah historically come from the followers of Zayd ibn Ali.
The Alawites are a distinct religious group that developed in the 9th/10th century. Historically, Twelver Shia scholars (such as Shaykh Tusi) did not consider Alawites to be Shia Muslims and condemned their beliefs as heretical. Ibn Taymiyyah also held that Alawites were not Shi'ites.
The Druze are a distinct traditional religion that developed in the 11th century as an offshoot of Ismailism. Druze are not generally considered Muslims.
Kharijite (literally, "those who seceded") is a general term embracing a variety of Muslim sects which, while originally supporting the Caliphate of Ali, later turned against him and were eventually responsible for his martyrdom while he was praying in the mosque of Kufa. While there are few remaining Kharijite or Kharijite-related groups, the term is sometimes used to denote Muslims who refuse to compromise with those with whom they disagree. The major Kharijite sub-sect today is the Ibadi, which developed out of the 7th-century Kharijite movement. While Ibadi Muslims maintain most of the beliefs of the original Kharijites, they have rejected the more aggressive methods. A number of Kharijite groups went extinct in the past:
Sufris were a sect of Islam in the 7th and 8th centuries, and a part of the Kharijites. Their most important branches were:
Qurrīyya
Nukkari
Harūrīs were an early Muslim sect from the period of the Four Rightly-Guided Caliphs (632–661 CE), named for their first leader, Habīb ibn-Yazīd al-Harūrī.
Azariqa
Najdat
Adjarites
Sufism is Islam's mystical-ascetic dimension and is represented by schools or orders known as Tasawwufī-Ṭarīqah. It is seen as that aspect of Islamic teaching that deals with the purification of inner self. By focusing on the more spiritual aspects of religion, Sufis strive to obtain direct experience of God by making use of "intuitive and emotional faculties" that one must be trained to use. It is composed of different orders:
The Azeemiyya order was founded in 1960 by Qalandar Baba Auliya, also known as Syed Muhammad Azeem Barkhia.
The Bektashi order was founded in the 13th century by the Islamic saint Haji Bektash Veli, greatly influenced during its formative period by the Hurufi Ali al-'Ala in the 15th century, and reorganized by Balım Sultan in the 16th century. Because of its adherence to the Twelve Imams it is classified under Twelver Shia Islam.
The Chishti order was founded by (Khawaja) Abu Ishaq Shami ("the Syrian"; died 941) who brought Sufism to the town of Chisht, some 95 miles east of Herat in present-day Afghanistan. Before returning to the Levant, Shami initiated, trained and deputized the son of the local Emir (Khwaja) Abu Ahmad Abdal (died 966). Under the leadership of Abu Ahmad's descendants, the Chishtiyya as they are also known, flourished as a regional mystical order. The founder of the Chishti Order in South Asia was Moinuddin Chishti.
The Kubrawiya order was founded in the 13th century by Najmuddin Kubra in Bukhara in modern-day Uzbekistan.
The Mevlevi order is better known in the West as the "whirling dervishes".
Mouride is most prominent in Senegal and The Gambia, with headquarters in the holy city of Touba, Senegal.
The Naqshbandi order was founded in 1380 by Baha-ud-Din Naqshband Bukhari. It is considered by some to be a "sober" order known for its silent dhikr (remembrance of God) rather than the vocalized forms of dhikr common in other orders. The Süleymani and Khalidiyya orders are offshoots of the Naqshbandi order.
The Ni'matullahi order is the most widespread Sufi order of Persia today. It was founded by Shah Ni'matullah Wali (d. 1367), established and transformed from his inheritance of the Ma'rufiyyah circle. There are several suborders in existence today, the best known and most influential in the West following the lineage of Javad Nurbakhsh, who brought the order to the West following the 1979 Iranian Revolution.
The Noorbakshia order, also called Nurbakshia, claims to trace its direct spiritual lineage and chain (silsilah) to the Islamic prophet Muhammad, through Ali, by way of Ali Al-Ridha. This order became known as Nurbakshi after Shah Syed Muhammad Nurbakhsh Qahistani, who was aligned to the Kubrawiya order.
The Oveysi (or Uwaiysi) order claims to have been founded 1,400 years ago by Uwais al-Qarni from Yemen.
The Qadiri order is one of the oldest Sufi orders. It derives its name from Abdul-Qadir Gilani (1077–1166), a native of the Iranian province of Gīlān. The order is one of the most widespread of the Sufi orders in the Islamic world, and can be found in Central Asia, Turkey, the Balkans, and much of East and West Africa. The Qadiriyyah have not developed any distinctive doctrines or teachings outside of mainstream Islam. They believe in the fundamental principles of Islam, but interpreted through mystical experience. The Ba'Alawi order is an offshoot of the Qadiriyyah.
Senussi is a religious-political Sufi order established by Muhammad ibn Ali as-Senussi. As-Senussi founded this movement due to his criticism of the Egyptian ulema.
The Shadhili order was founded by Abu-l-Hassan ash-Shadhili. Followers (murids, Arabic for "seekers") of the Shadhiliyya are often known as Shadhilis.
The Suhrawardiyya order is a Sufi order founded by Abu al-Najib al-Suhrawardi (1097–1168).
The Tijaniyyah order attaches great importance to culture and education, and emphasizes the individual adhesion of the disciple (murid).
Islamic Golden Age
The Islamic Golden Age was a period of cultural, economic, and scientific flourishing in the history of Islam, traditionally dated from the 8th century to the 14th century. This period is traditionally understood to have begun during the reign of the Abbasid caliph Harun al-Rashid (786 to 809) with the inauguration of the House of Wisdom in Baghdad, then the world's largest city, where Islamic scholars and polymaths from various parts of the world with different cultural backgrounds were mandated to gather and translate all of the world's classical knowledge into Arabic and Persian. Throughout the Islamic Middle Ages, several historic inventions and significant contributions were made in numerous fields that revolutionized human history. The period is traditionally said to have ended with the collapse of the Abbasid caliphate due to the Mongol invasions and the Siege of Baghdad in 1258.
Judeo-Islamic philosophies
Timurid Renaissance
The Timurid Renaissance was a historical period in Asian and Islamic history spanning the late 14th, the 15th, and the early 16th centuries. Following the gradual downturn of the Islamic Golden Age, the Timurid Empire, based in Central Asia and ruled by the Timurid dynasty, witnessed a revival of the arts and sciences. The movement spread across the Muslim world and left profound impacts on late medieval Asia. The Timurid Renaissance occurred roughly simultaneously with the Renaissance movement in Europe and has been described as equal in glory to the Italian Quattrocento. It reached its peak in the 15th century, after the end of the period of Mongol invasions and conquests.
Renaissance
The Renaissance was a period in European history marking the transition from the Middle Ages to Modernity and covering the 15th and 16th centuries. It occurred after the Crisis of the Late Middle Ages and was associated with great social change. In addition to the standard periodization, proponents of a long Renaissance put its beginning in the 14th century and its end in the 17th century. The traditional view focuses more on the early modern aspects of the Renaissance and argues that it was a break from the past, but many historians today focus more on its medieval aspects and argue that it was an extension of the Middle Ages.
The intellectual basis of the Renaissance was its version of humanism, derived from the concept of Roman Humanitas and the rediscovery of classical Greek philosophy, such as that of Protagoras, who said that "Man is the measure of all things." This new thinking became manifest in art, architecture, politics, science and literature. Early examples were the development of perspective in oil painting and the revived knowledge of how to make concrete. Although the invention of metal movable type sped the dissemination of ideas from the later 15th century, the changes of the Renaissance were not uniformly experienced across Europe: the first traces appear in Italy as early as the late 13th century, in particular with the writings of Dante and the paintings of Giotto.
As a cultural movement, the Renaissance encompassed innovative flowering of Latin and vernacular literatures, beginning with the 14th-century resurgence of learning based on classical sources, which contemporaries credited to Petrarch; the development of linear perspective and other techniques of rendering a more natural reality in painting; and gradual but widespread educational reform. In politics, the Renaissance contributed to the development of the customs and conventions of diplomacy, and in science to an increased reliance on observation and inductive reasoning. Although the Renaissance saw revolutions in many intellectual pursuits, as well as social and political upheaval, it is perhaps best known for its artistic developments and the contributions of such polymaths as Leonardo da Vinci and Michelangelo, who inspired the term "Renaissance man".
Modern period
Scientific Revolution
The Scientific Revolution was a series of events that marked the emergence of modern science during the early modern period, when developments in mathematics, physics, astronomy, biology (including human anatomy) and chemistry transformed the views of society about nature. The Scientific Revolution took place in Europe towards the end of the Renaissance period and continued through the late 18th century, influencing the intellectual social movement known as the Enlightenment. While its dates are debated, the publication in 1543 of Nicolaus Copernicus' De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) is often cited as marking the beginning of the Scientific Revolution.
Rationalism
Rationalism became a systematic school of philosophy in its own right for the first time in history and exerted an immense and profound influence on modern Western thought in general, with the birth of two influential rationalistic philosophical systems, those of Descartes (who spent most of his adult life and wrote all his major work in the United Provinces of the Netherlands) and Spinoza, namely Cartesianism and Spinozism. It was the 17th-century arch-rationalists like Descartes, Spinoza and Leibniz who gave the "Age of Reason" its name and place in history.
Dualism is closely associated with the thought of René Descartes (1641), which holds that the mind is a nonphysical—and therefore, non-spatial—substance. Descartes clearly identified the mind with consciousness and self-awareness and distinguished this from the brain as the seat of intelligence. Hence, he was the first to formulate the mind–body problem in the form in which it exists today. Dualism is contrasted with various kinds of monism. Spinozism (also spelled Spinozaism) is the monist philosophical system of Baruch Spinoza which defines "God" as a singular self-subsistent Substance, with both matter and thought being attributes of such.
Cult of Reason
The Cult of Reason was France's first established state-sponsored atheistic religion, intended as a replacement for Catholicism during the French Revolution. After holding sway for barely a year, in 1794 it was officially replaced by the rival Cult of the Supreme Being, promoted by Robespierre. Both cults were officially banned in 1802 by Napoleon Bonaparte with his Law on Cults of 18 Germinal, Year X.
Age of Enlightenment
The Age of Enlightenment (also known as the Age of Reason or simply the Enlightenment) was an intellectual and philosophical movement that dominated the world of ideas in Europe during the 17th to 19th centuries.
The Enlightenment emerged from a European intellectual and scholarly movement known as Renaissance humanism and was also preceded by the Scientific Revolution and the work of Francis Bacon, among others. Some date the beginning of the Enlightenment to René Descartes' 1637 philosophy of Cogito, ergo sum ("I think, therefore I am"), while others cite the publication of Isaac Newton's Principia Mathematica (1687) as the culmination of the Scientific Revolution and the beginning of the Enlightenment. French historians traditionally date the period from the death of Louis XIV of France in 1715 to the outbreak of the French Revolution in 1789. Most end it with the beginning of the 19th century. A variety of 19th-century movements, including liberalism and neoclassicism, trace their intellectual heritage to the Enlightenment.
The Enlightenment included a range of ideas centered on the sovereignty of reason and the evidence of the senses as the primary sources of knowledge and advanced ideals such as liberty, progress, toleration, fraternity, constitutional government and separation of church and state. In France, the central doctrines of the Enlightenment philosophers were individual liberty and religious tolerance, in opposition to an absolute monarchy and the fixed dogmas of the Church. The Enlightenment was marked by an emphasis on the scientific method and reductionism, along with increased questioning of religious orthodoxy—an attitude captured by Immanuel Kant's essay Answering the Question: What is Enlightenment, where the phrase Sapere aude (Dare to know) can be found.
Romanticism
Romanticism was an artistic and intellectual movement that originated in Europe towards the end of the 18th century. The purpose of the movement was to advocate for the importance of subjectivity, imagination, and appreciation of nature in society and culture during the Age of Enlightenment and the Industrial Revolution. Romanticism was a complex movement, with a variety of viewpoints that permeated Western civilization across the globe. Romanticists rejected the social conventions of the time in favor of a moral outlook known as individualism. They argued that passion and intuition were crucial to understanding the world, and that beauty is more than merely an affair of form, but rather something that evokes a strong emotional response. With this philosophical foundation, the Romanticists elevated a number of key themes to which they were deeply committed: a reverence for nature and the supernatural, an idealization of the past as a nobler era, a fascination with the exotic and the mysterious, and a celebration of the heroic and the sublime.
Modernism
Modernism is an early 20th-century movement in literature, the visual arts and music, emphasizing experimentation, abstraction and subjective experience. Philosophy, politics and social issues are also aspects of the movement, which sought to change how 'human beings in a society interact and live together'. The modernist movement emerged during the late 19th century in response to significant changes in Western culture, including secularization and the growing influence of science. It is characterized by a rejection of tradition and the search for newer and more original means of cultural expression. Modernism was influenced by widespread technological innovation, industrialization and urbanization, as well as cultural and geopolitical shifts that occurred after World War I. The movement rejected both 19th-century realism and Romanticism's concept of absolute originality - the idea of "creation from nothingness" - replacing it with techniques of collage, reprise, incorporation, rewriting, recapitulation, revision, and parody. Modernism also took a critical stance towards Enlightenment rationalism.
Modernity in the Middle East
Islam and modernity encompass the relation and compatibility between the phenomenon of modernity, its related concepts and ideas, and the religion of Islam. In order to understand this relation, one point should be made at the beginning: just as Islam is not a single, monolithic tradition, modernity is a complex and multidimensional phenomenon rather than a unified and coherent one. It has historically had different schools of thought moving in many directions.
Intellectual movements in Iran involve the Iranian experience of modernism, through which Iranian modernity and its associated art, science, literature, poetry, and political structures have been evolving since the 19th century. Religious intellectualism in Iran developed gradually and subtly. It reached its apogee during the Persian Constitutional Revolution (1906–11). The process involved numerous philosophers, sociologists, political scientists and cultural theorists. However, the associated art, cinema and poetry remained to be developed.
Modern African thought
With the rise of Afrocentrism, the push away from Eurocentrism has led to the focus on the contributions of African people and their model of world civilization and history. Afrocentrism aims to shift the focus from a perceived European-centered history to an African-centered history. More broadly, Afrocentrism is concerned with distinguishing the influence of European and Oriental peoples from African achievements.
Pan-Africanism is a worldwide movement that aims to encourage and strengthen bonds of solidarity between all indigenous and diaspora ethnic groups of African descent. Based on a common goal dating back to the Atlantic slave trade, the movement extends beyond continental Africans with a substantial support base among the African diaspora in the Americas and Europe.
Postmodernism
Emerging in the mid-twentieth century as a reaction against modernism, postmodernism is an intellectual stance or mode of discourse characterized by skepticism towards scientific rationalism and the concept of objective reality (as opposed to subjective reality). It questions the "grand narratives" of modernity, rejects the certainty of knowledge and stable meaning, and acknowledges the influence of ideology in maintaining political power. Postmodernism embraces self-referentiality, epistemological relativism, moral relativism, pluralism, irony, irreverence, and eclecticism. It opposes the "universal validity" of binary oppositions, stable identity, hierarchy, and categorization.
Footnotes
References
Sources
Further reading
Armitage, D., 2007. The declaration of independence: A global history. Harvard University Press.
Bayly, C.A., 2004. The birth of the modern world, 1780-1914: global connections and comparisons. Oxford: Blackwell.
Hourani, A., 1983. Arabic thought in the liberal age 1798-1939. Cambridge University Press.
Moyn, S. and Sartori, A. eds., 2013. Global intellectual history. Columbia University Press.
Watson, P., 2005. Ideas: a history of thought and invention, from fire to Freud (p. 36). New York: HarperCollins.
Paideia
Paideia (/paɪˈdeɪə/; also spelled paedeia) referred to the rearing and education of the ideal member of the ancient Greek polis or state. These educational ideals later spread to the Greco-Roman world at large, and were called humanitas in Latin.
Paideia was meant to instill aristocratic virtues in the young citizen men who were trained in this way. An ideal man within the polis would be well-rounded, refined in intellect, morals, and physicality, so training of the body, mind, and soul was important. Both practical, subject-based schooling as well as a focus upon the socialization of individuals within the aristocratic order of the polis were a part of this training.
The practical aspects of paideia included subjects within the modern designation of the liberal arts (e.g. rhetoric, grammar, and philosophy), as well as scientific disciplines like arithmetic and medicine. Gymnastics and wrestling were valued for their effect on the body alongside the moral education which was imparted by the study of music, poetry, and philosophy.
This approach to the rearing of a well-rounded Greek male was common to the Greek-speaking world, with the exception of Sparta, where agoge was practiced.
The idea of paideia in ancient and modern cultures
The Greeks considered paideia to be carried out by the aristocratic class, who tended to intellectualize their culture and their ideas. The culture and the youth were formed to the ideal of kalos kagathos ("beautiful and good").
Aristotle gives his paideia proposal in Book VIII of the Politics. In this, he says that, "education ought to be adapted to the particular form of constitution, since the particular character belonging to each constitution both guards the constitution generally and originally establishes it..." As a result, Aristotle argues that education should be a public system, not left up to individuals. He goes on to deliberate about what a proper education should entail, weighing different subjects, such as music and drawing, against their benefit towards cultivating virtue. He lists the ways he believes that gymnastic training should be carried out, bringing up some Spartan practices in order to see the benefits and drawbacks of their system. He talks extensively about music and its place in education, ultimately concluding that it should be included, but that there should be specific instruction, "in what times and what rhythms they should take part, and also what kinds of instruments should be used in their studies, as this naturally makes a difference."
The German-American classicist Werner Jaeger used the concept of paideia to trace the development of Greek thought and education from Homer to Demosthenes in Paideia: The Ideals of Greek Culture. The Aristotelian philosopher Mortimer Adler made his own paideia proposal in his criticism of contemporary Western educational systems.
Isocrates' influence
Isocrates' paideia was quite influential, particularly in Athens. Its goal was to construct a practice of education and politics that gave validity to democratic deliberative practice while remaining intellectually respectable. Isocrates sought to encourage a love of wisdom in his audience by making them apply principles of intellectual consistency to their lives. The fundamental aspect of his paideia was consistency on the individual, civic, and panhellenic levels.
Sayings and proverbs that defined paideia
"Know thyself" and "Nothing in excess"
"Hard is the Good."
See also
Arete
Classical education
The Paideia School
Notes
References
Werner Jaeger, Paideia: The Ideals of Greek Culture, vols. I–III, trans. Gilbert Highet, Oxford University Press, 1945.
Oxford English Dictionary, "Paedeia." 2005.
Further reading
Takis Fotopoulos, "From (mis)-education to Paideia", The International Journal of Inclusive Democracy, vol. 2, no. 1 (2005).
Copper Age state societies
The Chalcolithic or Copper Age is the transitional period between the Neolithic and the Bronze Age.
It is taken to begin around the mid-5th millennium BC, and ends with the beginning of the Bronze Age proper, in the late 4th to 3rd millennium BC, depending on the region.
The Chalcolithic is part of prehistory, but based on archaeological evidence, the emergence of the first state societies can be inferred, notably in the Fertile Crescent (Sumer), Predynastic Egypt, and Proto-Minoan Crete, with late Neolithic societies of comparable complexity emerging in the Indus Valley (Mehrgarh), China, and along the north-western shores of the Black Sea.
The development of states—large-scale, populous, politically centralized, and socially stratified polities/societies governed by powerful rulers—marks one of the major milestones in the evolution of human societies. Archaeologists often distinguish between primary (or pristine) states and secondary states. Primary states evolved independently through largely internal developmental processes rather than through the influence of any other pre-existing state.
The earliest known primary states appeared in Anatolia c. 5200 BC, in Mesopotamia c. 3700 BC, in Greece c. 3500 BC, in Egypt c. 3300 BC, in the Indus Valley c. 3300 BC, and in China c. 1600 BC.
In Africa, discoveries in the Agadez Region of Niger show evidence of copper metallurgy as early as 2000 BC, pre-dating the use of iron by a thousand years. Copper metallurgy seems to have been an indigenous invention in this area, because there is no clear evidence of influences from Northern Africa, and the Saharan wet phase was coming to an end, hindering human interactions across the Saharan region. The metallurgy does not appear to have been fully developed, which further suggests that it was not of external origin.
List of known polities
See also
4th millennium BC
Cradle of civilization
List of Bronze Age states
List of Iron Age states
References
Stereotypes of Africa
Stereotypes about Africa, Africans, and African culture are common, especially in the Western World. European imperialism was often justified on paternalistic grounds, portraying Africa as less civilized, and Africans as less capable of civilizing themselves. As of the 2010s, these stereotypes persisted in European media.
History
Europe
Several countries, such as France and Portugal, tried to 'civilize' Africa by colonizing it.
Belgian cartoonist Hergé depicted Africans as childlike in Tintin in the Congo.
The Germans falsely credited African accomplishments to a 'Hamitic' race descended from European settlers. Some Italians stereotype Africans as illegal immigrants and beggars. Poles' understanding of Africa is influenced by the Polish press, which often dwells on bad or alarming news.
Northern America
In the 19th century, scientific racists such as Josiah C. Nott and George Gliddon likened Africans to non-human apes. This comparison was used to justify claims of Africans' supposedly inferior status.
Present
Australia
Australians often view Africa as primitive and homogeneous. This view is influenced by stereotypes of African Americans.
East Asia
Japan sees Africa as a continent in need of help, as does China. In Chinese internet culture, unlucky or incompetent video game players are called 'Africans', a reference to the association of black faces with bad luck.
United Kingdom
Research by the British Council showed that from the perspective of young Britons, the African continent as a whole is either idealized or demonized. Decades of images and stories in the news media and by charities highlighting themes including famine, drought, disease, inequality and instability have contributed to a perception of African countries as impoverished, dangerous, and lagging behind the rest of the world socio-economically and in terms of human rights. Factors commonly used to explain these issues included endemic local corruption, the historical and contemporary exploitation of Africa by foreign countries and private interests (including the UK and British companies), and the perceived remoteness and isolation of Africa relative to the rest of the world.
United States
In the United States, Africa is seen as primitive and full of disease. Africans are seen as peculiarly vulnerable to disease. Also, Africa is seen as a sparsely-peopled jungle full of wild animals. American cinema is blamed for disparaging stereotypes of Africa.
Themes
Environment
A common stereotype is that much or all of Africa is an inhospitable jungle or desert, inhabited only by wild animals like elephants and lions. Alternatively, many believe that wild animals are ubiquitous and familiar, like domestic animals. Although Africa has many wild animals, among them big game animals, most Africans see them only in zoos and on safaris.
Homogeneity
Africa is often mistaken for a single state, whereas it is a continent with 54 UN member states and two de facto states. This mistake can lead people to think that all Africans belong to one ethnic group, or to apply disparaging stereotypes about one group to another.
Outsiders may have the misconception that there is only one language, known simply as "African". In reality, there are more than 1,000 African languages. Swahili is the single most widely spoken Indigenous African language.
Poverty
Africa is often considered primitive and impoverished. Though poverty exists in Africa, many countries have fast-growing economies.
Many people believe most Africans live "in a mud house in the middle of nowhere". In fact, forty-three percent of Africans live in urban areas, below the global average of 55%.
Technology
In rich countries, Africans are often seen as having no access to modern technology. As of 2013, 80% of Africans had a mobile phone. Internet use in Africa grew by 20% in 2018, reaching 59% of North Africa, 51% of Southern Africa, 39% of West Africa, and 45% of East Africa.
Another common stereotype is that Africans, particularly Nigerians, commit online fraud. The most well-known African scam is the advance-fee scam, nicknamed the "Nigerian prince scam".
References
Cultural ecology
Cultural ecology is the study of human adaptations to social and physical environments. Human adaptation refers to both biological and cultural processes that enable a population to survive and reproduce within a given or changing environment. This may be carried out diachronically (examining entities that existed in different epochs), or synchronically (examining a present system and its components). The central argument is that the natural environment, in small-scale or subsistence societies dependent in part upon it, is a major contributor to social organization and other human institutions. In the academic realm, when combined with the study of political economy (the study of economies as polities), it becomes political ecology, another academic subfield. It also helps interrogate historical events like the Easter Island Syndrome.
History
Anthropologist Julian Steward (1902-1972) coined the term, envisioning cultural ecology as a methodology for understanding how humans adapt to such a wide variety of environments. In his Theory of Culture Change: The Methodology of Multilinear Evolution (1955), he defined cultural ecology as the study of the "ways in which culture change is induced by adaptation to the environment". A key point is that any particular human adaptation is in part historically inherited and involves the technologies, practices, and knowledge that allow people to live in an environment. This means that while the environment influences the character of human adaptation, it does not determine it. In this way, Steward wisely separated the vagaries of the environment from the inner workings of a culture that occupied a given environment. Viewed over the long term, this means that environment and culture are on more or less separate evolutionary tracks and that the ability of one to influence the other is dependent on how each is structured. It is this assertion - that the physical and biological environment affects culture - that has proved controversial, because it implies an element of environmental determinism over human actions, which some social scientists find problematic, particularly those writing from a Marxist perspective. Cultural ecology recognizes that ecological locale plays a significant role in shaping the cultures of a region.
Steward's method was to:
Document the technologies and methods used to exploit the environment to get a living from it.
Look at patterns of human behavior/culture associated with using the environment.
Assess how much these patterns of behavior influenced other aspects of culture (e.g., how, in a drought-prone region, great concern over rainfall patterns meant this became central to everyday life, and led to the development of a religious belief system in which rainfall and water figured very strongly. This belief system may not appear in a society where good rainfall for crops can be taken for granted, or where irrigation was practiced).
Steward's concept of cultural ecology became widespread among anthropologists and archaeologists of the mid-20th century, though they would later be critiqued for their environmental determinism. Cultural ecology was one of the central tenets and driving factors in the development of processual archaeology in the 1960s, as archaeologists understood cultural change through the framework of technology and its effects on environmental adaptation.
In anthropology
Cultural ecology as developed by Steward is a major subdiscipline of anthropology. It derives from the work of Franz Boas and has branched out to cover a number of aspects of human society, in particular the distribution of wealth and power in a society, and how that affects such behaviour as hoarding or gifting (e.g. the tradition of the potlatch on the Northwest North American coast).
As transdisciplinary project
One 2000s-era conception of cultural ecology is as a general theory that regards ecology as a paradigm not only for the natural and human sciences, but for cultural studies as well. In his Die Ökologie des Wissens (The Ecology of Knowledge), Peter Finke explains that this theory brings together the various cultures of knowledge that have evolved in history, and that have been separated into more and more specialized disciplines and subdisciplines in the evolution of modern science (Finke 2005). In this view, cultural ecology considers the sphere of human culture not as separate from but as interdependent with and transfused by ecological processes and natural energy cycles. At the same time, it recognizes the relative independence and self-reflexive dynamics of cultural processes. As the dependency of culture on nature, and the ineradicable presence of nature in culture, are gaining interdisciplinary attention, the difference between cultural evolution and natural evolution is increasingly acknowledged by cultural ecologists. Rather than genetic laws, information and communication have become major driving forces of cultural evolution (see Finke 2006, 2007). Thus, causal deterministic laws do not apply to culture in a strict sense, but there are nevertheless productive analogies that can be drawn between ecological and cultural processes.
Gregory Bateson was the first to draw such analogies in his project of an Ecology of Mind (Bateson 1973), which was based on general principles of complex dynamic life processes, e.g. the concept of feedback loops, which he saw as operating both between the mind and the world and within the mind itself. Bateson thinks of the mind neither as an autonomous metaphysical force nor as a mere neurological function of the brain, but as a "dehierarchized concept of a mutual dependency between the (human) organism and its (natural) environment, subject and object, culture and nature", and thus as "a synonym for a cybernetic system of information circuits that are relevant for the survival of the species." (Gersdorf/ Mayer 2005: 9).
Finke fuses these ideas with concepts from systems theory. He describes the various sections and subsystems of society as 'cultural ecosystems' with their own processes of production, consumption, and reduction of energy (physical as well as psychic energy). This also applies to the cultural ecosystems of art and of literature, which follow their own internal forces of selection and self-renewal, but also have an important function within the cultural system as a whole (see next section).
In literary studies
The interrelatedness between culture and nature has been a special focus of literary culture from its archaic beginnings in myth, ritual, and oral story-telling, in legends and fairy tales, and in the genres of pastoral literature and nature poetry. Important texts in this tradition include the stories of mutual transformations between human and nonhuman life, most famously collected in Ovid's Metamorphoses, which became a highly influential text throughout literary history and across different cultures. This attention to culture-nature interaction became especially prominent in the era of Romanticism, but continues to be characteristic of literary stagings of human experience up to the present.
The mutual opening and symbolic reconnection of culture and nature, mind and body, human and nonhuman life in a holistic and yet radically pluralistic way seems to be one significant mode in which literature functions and in which literary knowledge is produced. From this perspective, literature can itself be described as the symbolic medium of a particularly powerful form of "cultural ecology" (Zapf 2002). Literary texts have staged and explored, in ever new scenarios, the complex feedback relationship of prevailing cultural systems with the needs and manifestations of human and nonhuman "nature." From this paradoxical act of creative regression they have derived their specific power of innovation and cultural self-renewal.
German ecocritic Hubert Zapf argues that literature draws its cognitive and creative potential from a threefold dynamics in its relationship to the larger cultural system: as a "cultural-critical metadiscourse," an "imaginative counterdiscourse," and a "reintegrative interdiscourse" (Zapf 2001, 2002). It is a textual form which breaks up ossified social structures and ideologies, symbolically empowers the marginalized, and reconnects what is culturally separated. In that way, literature counteracts economic, political or pragmatic forms of interpreting and instrumentalizing human life, and breaks up one-dimensional views of the world and the self, opening them up towards their repressed or excluded other. Literature is thus, on the one hand, a sensorium for what goes wrong in a society, for the biophobic, life-paralyzing implications of one-sided forms of consciousness and civilizational uniformity, and it is, on the other hand, a medium of constant cultural self-renewal, in which the neglected biophilic energies can find a symbolic space of expression and of (re-)integration into the larger ecology of cultural discourses. This approach has been applied and widened in volumes of essays by scholars from over the world (ed. Zapf 2008, 2016), as well as in a recent monograph (Zapf 2016). Similar approaches have also been developed in adjacent fields, such as film studies (Paalman 2011).
In geography
In geography, cultural ecology developed in response to the "landscape morphology" approach of Carl O. Sauer. Sauer's school was criticized for being unscientific and later for holding a "reified" or "superorganic" conception of culture. Cultural ecology applied ideas from ecology and systems theory to understand the adaptation of humans to their environment. These cultural ecologists focused on flows of energy and materials, examining how beliefs and institutions in a culture regulated its interchanges with the natural ecology that surrounded it. In this perspective humans were as much a part of the ecology as any other organism. Important practitioners of this form of cultural ecology include Karl Butzer and David Stoddart.
The second form of cultural ecology introduced decision theory from agricultural economics, particularly inspired by the works of Alexander Chayanov and Ester Boserup. These cultural ecologists were concerned with how human groups made decisions about how they use their natural environment. They were particularly concerned with the question of agricultural intensification, refining the competing models of Thomas Malthus and Boserup. Notable cultural ecologists in this second tradition include Harold Brookfield and Billie Lee Turner II. Starting in the 1980s, cultural ecology came under criticism from political ecology. Political ecologists charged that cultural ecology ignored the connections between the local-scale systems they studied and the global political economy. Today few geographers self-identify as cultural ecologists, but ideas from cultural ecology have been adopted and built on by political ecology, land change science, and sustainability science.
Conceptual views
Human species
Books about culture and ecology began to emerge in the 1950s and 1960s. One of the first to be published in the United Kingdom was The Human Species by a zoologist, Anthony Barnett. It came out in 1950, subtitled The Biology of Man, but was about a much narrower subset of topics. It dealt with the cultural bearing of some outstanding areas of environmental knowledge about health and disease, food, the sizes and quality of human populations, and the diversity of human types and their abilities. Barnett's view was that his selected areas of information "....are all topics on which knowledge is not only desirable, but for a twentieth-century adult, necessary". He went on to point out how some of the concepts underpinning human ecology bore on the social problems facing his readers in the 1950s, as well as to examine the assertion that human nature cannot change, what this statement could mean, and whether it is true. The third chapter deals in more detail with some aspects of human genetics.
Then come five chapters on the evolution of man, and the differences between groups of men (or races) and between individual men and women today in relation to population growth (the topic of 'human diversity'). Finally, there is a series of chapters on various aspects of human populations (the topic of "life and death"). Like other animals man must, in order to survive, overcome the dangers of starvation and infection; at the same time he must be fertile. Four chapters therefore deal with food, disease and the growth and decline of human populations.
Barnett anticipated that his personal scheme might be criticized on the grounds that it omits an account of those human characteristics, which distinguish humankind most clearly, and sharply from other animals. That is to say, the point might be expressed by saying that human behaviour is ignored; or some might say that human psychology is left out, or that no account is taken of the human mind. He justified his limited view, not because little importance was attached to what was left out, but because the omitted topics were so important that each needed a book of similar size even for a summary account. In other words, the author was embedded in a world of academic specialists and therefore somewhat worried about taking a partial conceptual, and idiosyncratic view of the zoology of Homo sapiens.
Ecology of man
Moves to produce prescriptions for adjusting human culture to ecological realities were also afoot in North America. In his 1957 Condon Lecture at the University of Oregon, entitled "The Ecology of Man", American ecologist Paul Sears called for "serious attention to the ecology of man" and demanded "its skillful application to human affairs". Sears was one of the few prominent ecologists to successfully write for popular audiences. His writing documented the mistakes American farmers made in creating conditions that led to the disastrous Dust Bowl, and gave momentum to the soil conservation movement in the United States.
The "ecology of man" as a limiting factor which "should be respected", placing boundaries around the extent to which the human species can be manipulated, is reflected in the views of Popes Benedict XVI, and Francis.
Impact on nature
From the same period came J.A. Lauwerys' Man's Impact on Nature, which was part of a series on 'Interdependence in Nature' published in 1969. Both Russel's and Lauwerys' books were about cultural ecology, although not titled as such. People still had difficulty in escaping from their labels. Even Beginnings and Blunders, produced in 1970 by the polymath zoologist Lancelot Hogben, with the subtitle Before Science Began, clung to anthropology as a traditional reference point. However, its slant makes it clear that 'cultural ecology' would be a more apt title to cover his wide-ranging description of how early societies adapted to environment with tools, technologies and social groupings. In 1973 the physicist Jacob Bronowski produced The Ascent of Man, which summarised a magnificent thirteen-part BBC television series about all the ways in which humans have moulded the Earth and its future.
Changing the Earth
By the 1980s the human ecological-functional view had prevailed. It had become a conventional way to present scientific concepts in the ecological perspective of human animals dominating an overpopulated world, with the practical aim of producing a greener culture. This is exemplified by I. G. Simmons' book Changing the Face of the Earth, with its telling subtitle "Culture, Environment, History", which was published in 1989. Simmons was a geographer, and his book was a tribute to the influence of W.L. Thomas' edited collection, Man's Role in Changing the Face of the Earth, which came out in 1956.
Simmons' book was one of many interdisciplinary culture/environment publications of the 1970s and 1980s, which triggered a crisis in geography with regard to its subject matter, academic sub-divisions, and boundaries. This was resolved by officially adopting conceptual frameworks as an approach to facilitate the organisation of research and teaching that cuts across old subject divisions. Cultural ecology is in fact a conceptual arena that has, over the past six decades, allowed sociologists, physicists, zoologists and geographers to enter common intellectual ground from the sidelines of their specialist subjects.
21st Century
In the first decade of the 21st century, there are publications dealing with the ways in which humans can develop a more acceptable cultural relationship with the environment. An example is sacred ecology, a sub-topic of cultural ecology explored by Fikret Berkes in his 1999 book Sacred Ecology. It seeks lessons from traditional ways of life in Northern Canada to shape a new environmental perception for urban dwellers. This particular conceptualisation of people and environment comes from various cultural levels of local knowledge about species and place, resource management systems using local experience, social institutions with their rules and codes of behaviour, and a world view through religion, ethics and broadly defined belief systems.
Despite their differences, all of these publications carry the message that culture is a balancing act between the mindset devoted to the exploitation of natural resources and the mindset that conserves them. Perhaps the best model of cultural ecology in this context is, paradoxically, the mismatch of culture and ecology that occurred when Europeans suppressed the age-old native methods of land use and tried to settle European farming cultures on soils manifestly incapable of supporting them. There is a sacred ecology associated with environmental awareness, and the task of cultural ecology is to inspire urban dwellers to develop a more acceptable, sustainable cultural relationship with the environment that supports them.
Educational framework
Cultural Core
To further develop the field of Cultural Ecology, Julian Steward developed a framework which he referred to as the cultural core. This framework, a “constellation” as Steward describes it, organizes the fundamental features of a culture that are most closely related to subsistence and economic arrangements.
At the core of this framework is the fundamental human-environment relationship as it pertains to subsistence. In the second layer, outside the core, lie the innumerable direct features of this relationship - tools, knowledge, economics, labor, etc. Outside that second, directly correlated layer is a less direct but still influential layer, typically associated with larger historical, institutional, political or social factors.
According to Steward, the secondary features are determined largely by “cultural-historical factors” and contribute to the distinctive outward appearance of cultures when compared with others that have similar cores. The field of cultural ecology can use the cultural core framework as a tool for identifying and understanding the features most closely involved in how humans and cultural groups utilize the environment.
See also
Cultural materialism
Dual inheritance theory
Ecological anthropology
Environmental history
Environmental racism
Human behavioral ecology
Political ecology
Sexecology
References
Sources
Barnett, A. 1950 The Human Species: MacGibbon and Kee, London.
Bateson, G. 1973 Steps to an Ecology of Mind: Paladin, London
Berkes, F. 1999 Sacred ecology: traditional ecological knowledge and resource management. Taylor and Francis.
Bronowski, J. 1973 The Ascent of Man, BBC Publications, London
Finke, P. 2005 Die Ökologie des Wissens. Exkursionen in eine gefährdete Landschaft: Alber, Freiburg and Munich
Finke, P. 2006 "Die Evolutionäre Kulturökologie: Hintergründe, Prinzipien und Perspektiven einer neuen Theorie der Kultur", in: Anglia 124.1, 2006, p. 175-217
Finke, P. 2013 "A Brief Outline of Evolutionary Cultural Ecology," in Traditions of Systems Theory: Major Figures and Contemporary Developments, ed. Darrell P. Arnold, New York: Routledge.
Frake, Charles O. (1962) "Cultural Ecology and Ethnography". American Anthropologist 64 (1): 53–59. ISSN 0002-7294.
Gersdorf, C. and S. Mayer, eds. Natur – Kultur – Text: Beiträge zu Ökologie und Literaturwissenschaft: Winter, Heidelberg
Hamilton, G. 1947 History of the Homeland: George Allen and Unwin, London.
Hogben, L. 1970 Beginnings and Blunders: Heinemann, London
Hornborg, Alf; Cultural Ecology
Lauwerys, J.A. 1969 Man's Impact on Nature: Aldus Books, London
Maass, Petra (2008): The Cultural Context of Biodiversity Conservation. Seen and Unseen Dimensions of Indigenous Knowledge among Q'eqchi' Communities in Guatemala. Göttinger Beiträge zur Ethnologie - Band 2, Göttingen: Göttinger Universitätsverlag online-version
Paalman, F. 2011 Cinematic Rotterdam: The Times and Tides of a Modern City: 010 Publishers, Rotterdam.
Russel, W.M.S. 1967 Man Nature and History: Aldus Books, London
Simmons, I.G. 1989 Changing the Face of the Earth: Blackwell, Oxford
Steward, Julian H. 1972 Theory of Culture Change: The Methodology of Multilinear Evolution: University of Illinois Press
Technical Report PNW-GTR-369. 1996. Defining social responsibility in ecosystem management. A workshop proceedings. United States Department of Agriculture Forest Service.
Turner, B. L., II 2002. "Contested identities: human-environment geography and disciplinary implications in a restructuring academy." Annals of the Association of American Geographers 92(1): 52–74.
Worster, D. 1977 Nature’s Economy. Cambridge University Press
Zapf, H. 2001 "Literature as Cultural Ecology: Notes Towards a Functional Theory of Imaginative Texts, with Examples from American Literature", in: REAL: Yearbook of Research in English and American Literature 17, 2001, p. 85-100.
Zapf, H. 2002 Literatur als kulturelle Ökologie. Zur kulturellen Funktion imaginativer Texte an Beispielen des amerikanischen Romans: Niemeyer, Tübingen
Zapf, H. 2008 Kulturökologie und Literatur: Beiträge zu einem transdisziplinären Paradigma der Literaturwissenschaft (Cultural Ecology and Literature: Contributions on a Transdisciplinary Paradigm of Literary Studies): Winter, Heidelberg
Zapf, H. 2016 Literature as Cultural Ecology: Sustainable Texts: Bloomsbury Academic, London
Zapf, H. 2016 ed. Handbook of Ecocriticism and Cultural Ecology: De Gruyter, Berlin
External links
Cultural and Political Ecology Specialty Group of the Association of American Geographers. Archive of newsletters, officers, award and honor recipients, as well as other resources associated with this community of scholars.
Notes on the development of cultural ecology with an excellent reference list: Catherine Marquette
Cultural ecology: an ideational scaffold for environmental education: an outcome of the EC LIFE ENVIRONMENT programme
Cultural anthropology
Ecology terminology
Environmental humanities
Human geography
Interdisciplinary historical research
Trivium
The trivium is the lower division of the seven liberal arts and comprises grammar, logic, and rhetoric.
The trivium is implicit in De nuptiis Philologiae et Mercurii ("On the Marriage of Philology and Mercury") by Martianus Capella, but the term was not used until the Carolingian Renaissance, when it was coined in imitation of the earlier quadrivium. Grammar, logic, and rhetoric were essential to a classical education, as explained in Plato's dialogues. The three subjects together were denoted by the word trivium during the Middle Ages, but the tradition of first learning those three subjects was established in ancient Greece, by rhetoricians such as Isocrates. Contemporary iterations have taken various forms, including those found in certain British and American universities (some being part of the Classical education movement) and at the independent Oundle School in the United Kingdom.
Etymology
Etymologically, the Latin word trivium means "the place where three roads meet" (tri + via). The subjects of the trivium are the foundation for the quadrivium, the upper division of the medieval education in the liberal arts, which consists of arithmetic (numbers as abstract concepts), geometry (numbers in space), music (numbers in time), and astronomy (numbers in space and time). Educationally, the trivium and the quadrivium imparted to the student the seven liberal arts of classical antiquity.
Description
Grammar teaches the mechanics of language to the student. This is the step where the student "comes to terms," defining the objects and information perceived by the five senses. Hence, the Law of Identity: a tree is a tree, and not a cat.
Logic (also dialectic) is the "mechanics" of thought and of analysis, the process of composing sound arguments and identifying fallacious arguments and statements and so systematically removing contradictions, thereby producing factual knowledge that can be trusted.
Rhetoric is the application of language in order to instruct and to persuade the listener and the reader. It is the knowledge (grammar) now understood (logic) and being transmitted outwards as wisdom (rhetoric).
Aristotle defined rhetoric as "the power of perceiving in every thing that which is capable of producing persuasion."
Sister Miriam Joseph, in The Trivium: The Liberal Arts of Logic, Grammar, and Rhetoric (2002), described the trivium as follows:
Grammar is the art of inventing symbols and combining them to express thought; logic is the art of thinking; and rhetoric is the art of communicating thought from one mind to another, the adaptation of language to circumstance.
. . .
Grammar is concerned with the thing as-it-is-symbolized. Logic is concerned with the thing as-it-is-known. Rhetoric is concerned with the thing as-it-is-communicated.
John Ayto wrote in the Dictionary of Word Origins (1990) that study of the trivium (grammar, logic, and rhetoric) was requisite preparation for study of the quadrivium (arithmetic, geometry, music, and astronomy). For the medieval student, the trivium was the curricular beginning of the acquisition of the seven liberal arts; as such, it was the principal undergraduate course of study. The word trivial arose from the contrast between the simpler trivium and the more difficult quadrivium.
See also
Classical education movement
Quadrivium
The three Rs
Vyākaraṇa
References
Further reading
McLuhan, Marshall (2006). The Classical Trivium: The Place of Thomas Nashe in the Learning of His Time (McLuhan's 1942 doctoral dissertation). Gingko Press.
Michell, John, Rachel Holley, Earl Fontainelle, Adina Arvatu, Andrew Aberdein, Octavia Wynne, and Gregory Beabout (2016). Trivium: The Classical Liberal Arts of Grammar, Logic, & Rhetoric. New York: Bloomsbury. Wooden Books.
Robinson, Martin (2013). Trivium 21c: Preparing Young People for the Future with Lessons from the Past. London: Independent Thinking Press.
Sayers, Dorothy L. (1947). "The Lost Tools of Learning". Essay presented at Oxford University.
Winterer, Caroline (2002). The Culture of Classicism: Ancient Greece and Rome in American Intellectual Life, 1780–1910. Baltimore: Johns Hopkins University Press.
Cultural lists
Philosophy of education
History of education
Alternative education
Medieval European education
Liberal arts education
Euhemerism
Euhemerism is an approach to the interpretation of mythology in which mythological accounts are presumed to have originated from real historical events or personages. Euhemerism supposes that historical accounts become myths as they are exaggerated in the retelling, accumulating elaborations and alterations that reflect cultural mores. It was named after the Greek mythographer Euhemerus, who lived in the late 4th century BC. In the more recent literature of myth, such as Bulfinch's Mythology, euhemerism is termed the "historical theory" of mythology.
Euhemerus was not the first to attempt to rationalize mythology in historical terms: euhemeristic views are found in earlier writings including those of Sanchuniathon, Xenophanes, Herodotus, Hecataeus of Abdera and Ephorus. However, the enduring influence of Euhemerus upon later thinkers such as the classical poet Ennius (b. 239 BC) and modern author Antoine Banier (b. 1673 AD) identified him as the traditional founder of this school of thought.
Early history
In a scene described in Plato's Phaedrus, Socrates offers a euhemeristic interpretation of the myth of Boreas abducting Orithyia. He shows how the story of Boreas, the north wind, can be rationalised: Orithyia is pushed off the rock cliffs by a natural gust of wind equated with Boreas, an account which accepts Orithyia as a historical personage. But Socrates also implies that such a rationalisation is equivalent to rejecting the myth. Despite holding some euhemeristic views, he mocked the notion that all myths could be rationalized, noting that mythical creatures of "absurd forms" such as Centaurs and the Chimera could not easily be explained.
In the ancient skeptic philosophical tradition of Theodorus of Cyrene and the Cyrenaics, Euhemerus forged a new method of interpretation for the contemporary religious beliefs. Though his work is lost, the reputation of Euhemerus was that he believed that much of Greek mythology could be interpreted as natural or historical events subsequently given supernatural characteristics through retelling. Subsequently, Euhemerus was considered to be an atheist by his opponents, most notably Callimachus.
Deification
Euhemerus' views were rooted in the deification of men, usually kings, into gods through apotheosis. In numerous cultures, kings were exalted or venerated into the status of divine beings and worshipped after their death, or sometimes even while they ruled. Dion, the tyrant ruler of Syracuse, was deified while he was alive and modern scholars consider his apotheosis to have influenced Euhemerus' views on the origin of all gods. Euhemerus was also living during the contemporaneous deification of the Seleucids and "pharaoization" of the Ptolemies in a fusion of Hellenic and Egyptian traditions.
Tomb of Zeus
Euhemerus argued that Zeus was a mortal king who died on Crete, and that his tomb could still be found there with an inscription bearing his name. This claim did not originate with Euhemerus, however, as the general sentiment of Crete during the time of Epimenides of Knossos (c. 600 BC) was that Zeus was buried somewhere in Crete. For this reason, the Cretans were often considered atheists, and Epimenides called them all liars (see Epimenides paradox). Callimachus, an opponent of Euhemerus' views on mythology, argued that Zeus' Cretan tomb was fabricated and that Zeus was eternal.
A later Latin scholium on the Hymns of Callimachus attempted to account for the tomb of Zeus. According to the scholium, the original tomb inscription read: "The tomb of Minos, the son of Jupiter" but over time the words "Minos, the son" wore away leaving only "the tomb of Jupiter". This had misled the Cretans into thinking that Zeus had died and was buried there.
Influenced by Euhemerus, Porphyry in the 3rd century AD claimed that Pythagoras had discovered the tomb of Zeus on Crete and written on the tomb's surface an inscription reading: "Here died and was buried Zan, whom they call Zeus". Varro also wrote about the tomb of Zeus.
Christians
Hostile to paganism, the early Christians, such as the Church Fathers, embraced euhemerism in an attempt to undermine the validity of pagan gods. The usefulness of euhemerist views to early Christian apologists may be summed up in Clement of Alexandria's triumphant cry in Cohortatio ad gentes: "Those to whom you bow were once men like yourselves."
The Book of Wisdom
The Wisdom of Solomon, a deuterocanonical book, has a passage giving a euhemerist explanation of the origin of idols.
Early Christian apologists
The early Christian apologists deployed the euhemerist argument to support their position that pagan mythology was merely an aggregate of fables of human invention. Cyprian, a North African convert to Christianity, wrote a short essay ("On the Vanity of Idols") in 247 AD that assumes the euhemeristic rationale as though it needed no demonstration.
Cyprian proceeds directly to examples: the apotheosis of Melicertes and Leucothea; "The Castors [i.e. Castor and Pollux] die by turns, that they may live", a reference to the daily sharing back and forth of their immortality by the Heavenly Twins. "The cave of Jupiter is to be seen in Crete, and his sepulchre is shown", Cyprian says, confounding Zeus and Dionysus but showing that the Minoan cave cult was still alive in Crete in the third century AD. In his exposition, it serves Cyprian's argument to marginalize the syncretism of pagan belief in order to emphasize the individual variety of local deities.
Eusebius in his Chronicle employed euhemerism to argue that the Babylonian god Baʿal was a deified ruler and that the god Belus was the first Assyrian king.
Euhemeristic views are also found expressed in Tertullian, the Octavius of Marcus Minucius Felix and in Origen. Arnobius' dismissal of paganism in the fifth century, on rationalizing grounds, may have depended on a reading of Cyprian, with the details enormously expanded. Isidore of Seville, compiler of the most influential early medieval encyclopedia, devoted a chapter to elucidating, with numerous examples and elaborated genealogies of gods, the principle drawn from Lactantius that those whom pagans claim to be gods were once mere men. Elaborating logically, he attempted to place these deified men within the six great periods of history as he divided it, and created mythological dynasties. Isidore's euhemeristic bent was codified into a rigid parallel with sacred history in Petrus Comestor's appendix to his much-translated work (written c. 1160), which further condensed Isidore to provide strict parallels between the figures of pagan legend, now viewed as historicised narrative, and the mighty human spirits of the patriarchs of the Old Testament. Martin of Braga wrote that idolatry stemmed from post-deluge survivors of Noah's family, who began to worship the sun and stars instead of God. In his view, the Greek gods were deified descendants of Noah who were once real personages.
Middle Ages
Christian writers during the Middle Ages continued to embrace euhemerism, such as Vincent of Beauvais, Petrus Comestor, Roger Bacon and Godfrey of Viterbo.
"After all, it was during this time that Christian apologists had adopted the views of the rationalist Greek philosophers. And had captured the purpose for Euhemerism, which was to explain the mundane origins of the Hellenistic divinities. Euhemerism explained simply in two ways: first in the strictest sense as a movement which reflected the known views of Euhemerus' Hiera Anagraphe regarding Panchaia and the historicity of the family of Saturn and Uranus. The principal sources of these views are the handed-down accounts of Lactantius and Diodorus; or second, in the widest sense, as a rationalist movement which sought to explain the mundane origins of all the Hellenistic gods and heroes as mortals." Other modern theorists labeled Euhemerism as a "subject of classical paganism that was fostered in the minds of the people of the Middle Ages through the realization that while in most respects the ancient Greeks and Roman had been superior to themselves, they had been in error regarding their religious beliefs. An examination of the principal writings in Middle English with considerable reading of literature other than English, discloses the fact that the people of the Middle Ages rarely regarded the so-called gods as mere figments of the imagination but rather believed that they were or had been real beings, sometimes possessing actual power" (John Daniel Cook).
Snorri Sturluson's "euhemerism"
In the Prose Edda, composed around 1220, the Christian Icelandic bard and historian Snorri Sturluson proposes that the Norse gods were originally historical leaders and kings. Odin, the father of the gods, is introduced as a historical person originally from Asia Minor, tracing his ancestry back to Priam, the king of Troy during the Trojan War.
As Odin travels north to settle in the Nordic countries, he establishes the royal families ruling in Denmark, Sweden and Norway at the time.
Snorri's euhemerism follows the early Christian tradition.
In the modern world
Euhemeristic interpretations of mythology continued throughout the early modern period, from the 16th century to modern times. In 1711, the French historian Antoine Banier, in his work translated as "The Mythology and Fables of the Ancients, Explained", presented strong arguments for a euhemerist interpretation of Greek mythology. Jacob Bryant's A New System or Analysis of Ancient Mythology (1744) was another key work on euhemerism of the period, though it argued from a Biblical basis. In the early 19th century, George Stanley Faber was another Biblical euhemerist. His work The Origin of Pagan Idolatry (1816) proposed that all the pagan nations worshipped the same gods, who were all deified men. Outside of Biblically influenced literature, some archaeologists embraced euhemerist views because they found that myths could corroborate archaeological findings. Heinrich Schliemann was a prominent archaeologist of the 19th century who argued that myths had historical truths embedded in them. Schliemann was an advocate of the historical reality of places and characters mentioned in the works of Homer. He excavated Troy and claimed to have discovered artifacts associated with various figures from Greek mythology, including the Mask of Agamemnon and Priam's Treasure.
Herbert Spencer embraced some euhemeristic arguments in an attempt to explain the anthropocentric origin of religion through ancestor worship. Rationalizing methods of interpretation that treat some myths as traditional accounts based upon historical events are a continuous feature of some modern readings of mythology.
The twentieth century poet and mythographer Robert Graves offered many such "euhemerist" interpretations in his telling of The White Goddess (1948) and The Greek Myths (1955). His suggestions that such myths record and justify the political and religious overthrow of earlier cult systems have been widely criticized and are rejected by most scholars.
Euhemerization
Author Richard Carrier defines "euhemerization" as "the taking of a cosmic god and placing him at a definite point in history as an actual person who was later deified".
In this framing the direction is reversed: rather than a mythological account being presumed to have originated from real historical events or personages, a historical setting is invented for an originally mythological figure – such that, counter to the usual sense of "Euhemerism", in "euhemerization" a mythological figure is transformed into a (pseudo)historical one.
See also
Demythologization
Geomythology
References
Greek mythology studies
Mythology
Philosophy of history
Religious studies
Historicity of religious figures
Theories in ancient Greek philosophy
Cyclical theory (United States history)
The cyclical theory refers to a model used by historians Arthur M. Schlesinger Sr. and Arthur M. Schlesinger Jr. to explain the fluctuations in politics throughout American history. In this theory, the United States's national mood alternates between liberalism and conservatism. Each phase has characteristic features, and each phase is self-limiting, generating the other phase. This alternation has repeated itself several times over the history of the United States.
A similar theory for American foreign policy was proposed by historian Frank J. Klingberg. He proposed that the United States has repeatedly alternated between foreign-policy extroversion and introversion, willingness to go on international adventures and unwillingness to do so.
Several other cycles of American history have been proposed, with varying degrees of support.
Schlesinger's liberal-conservative cycle
The Schlesingers' periodization closely parallels other periodizations of United States history, such as that found in the History of the United States.
The features of each phase in the cycle can be summarized with a table.
The Schlesingers proposed that their cycles are "self-generating", meaning that each kind of phase generates the other kind of phase; this process then repeats, causing cycles. Arthur Schlesinger Jr. speculated on possible reasons for these transitions. Since liberal phases involve bursts of reform effort, he suggested, such bursts can be exhausting, and the body politic thus needs the rest of a conservative phase. Conversely, conservative phases accumulate unsolved social problems, problems that require the efforts of a liberal phase. He also speculated about generational effects, since most of the liberal-conservative phase pairs are roughly 30 years long, roughly the length of a human generation.
The phases the Schlesingers identified end in a conservative period, and in a foreword written in 1999, Schlesinger Jr. speculated about why that period had lasted unusually long instead of ending in the early 1990s. One of his speculations was the continuing Computer Revolution, as disruptive as the earlier Industrial Revolution had been. Another was a desire for a long rest after major national traumas: the 1860s Civil War and Reconstruction preceded the unusually long Gilded Age, and the strife of the 1960s likewise preceded the recent unusually long conservative period.
An alternative identification is due to Andrew S. McFarland. He identifies the liberal phases as reform phases and the conservative phases as business phases, and he additionally identifies transitions from the reform phases to the business phases. The periodization given in his Figure 1 roughly agrees with Schlesinger's identifications.
Huntington's periods of creedal passion
Political scientist Samuel P. Huntington proposed that American history has had several bursts of "creedal passion". Huntington described the "American Creed" of government in these terms: "In terms of American beliefs, government is supposed to be egalitarian, participatory, open, noncoercive, and responsive to the demands of individuals and groups. Yet no government can be all these things and still remain a government." This contradiction produces an unavoidable gap between ideals and institutions, an "IvI" gap. This gap is normally tolerable, but it sometimes leads to bursts of "creedal passion" against existing systems and institutions, bursts that typically last around 15 years. He identified four of them:
1770s: Revolutionary era
1830s: Jacksonian era
1900s: Progressive era
1960s: "S&S" (Huntington's name for the Sixties and Seventies)
Huntington described 14 features of creedal-passion eras. Nine of them describe the general mood:
"Discontent was widespread; authority, hierarchy, specialization, and expertise were widely questioned or rejected."
"Political ideas were taken seriously and played an important role in the controversies of the time."
"Traditional American values of liberty, individualism, equality, popular control of government, and the openness of government were stressed in public discussion."
"Moral indignation over the IvI gap was widespread."
"Politics was characterized by agitation, excitement, commotion, even upheaval — far beyond the usual routine of interest-group conflict."
"Hostility toward power (the antipower ethic) was intense, with the central issue of politics often being defined as 'liberty versus power.
"The exposure or muckraking of the IvI gap was a central feature of politics."
"Movements flourished devoted to specific reforms or 'causes' (women, minorities, criminal justice, temperance, peace)."
"New media forms appeared, significantly increasing the influence of the media in politics."
The remaining five describe the resulting changes:
"Political participation expanded, often assuming new forms and often expressed through hitherto unusual channels."
"The principal political cleavages of the period tended to cut across economic class lines, with some combination of middle- and working-class groups promoting change."
"Major reforms were attempted in political institutions in order to limit power and reshape institutions in terms of American ideals (some of which were successful and some of which were lasting)."
"A basic realignment occurred in the relations between social forces and political institutions, often including but not limited to the political party system."
"The prevailing ethos promoting reform in the name of traditional ideals was, in a sense, both forward-looking and backward-looking, progressive and conservative."
Party systems and realignment elections
The United States has gone through several party systems, where in each system, the two main parties have characteristic platforms and constituencies. Likewise, the United States has had several realigning elections, elections that bring fast and large-scale changes. These events are mentioned here because their repeated occurrence may be interpreted as a kind of cycle.
Opinions differ on the timing of the transition from the fifth to the sixth systems, opinions ranging from the 1960s to the 1990s. Some political scientists argue that it was a gradual transition, one without any well-defined date.
Other dates sometimes cited are 1874, 1964 (Lyndon B. Johnson), 1968 (Richard Nixon), 1980 (Ronald Reagan), 1992 (Bill Clinton), 1994, 2008 (Barack Obama), and 2016 (Donald Trump).
Skowronek's presidency types
Political scientist Stephen Skowronek has proposed four main types of presidencies, and these types of presidencies also fit into a cycle. He proposes that the United States has had several political regimes over its history, regimes with a characteristic cycle of presidency types. Each political regime has had a dominant party and an opposition party, and presidents can be in either the dominant party or the opposition party.
The cycle begins with a reconstructive president, one who typically serves more than one term. He establishes a new regime, and his party becomes the dominant one for that regime. He is usually succeeded by his vice president; this successor is usually an articulating president and usually serves only one term. That president is usually followed by a preemptive president, and articulating and preemptive presidents may continue to alternate. The cycle ends with one or more disjunctive presidents. Such presidents are typically loners, detached from their parties, considered ineffective, and serving only one term.
Some of the articulating and preemptive presidents' types have been inferred from their party affiliations, and George Washington is here classified as a reconstructive president because he was the first one.
Some of the sources propose that Presidents William McKinley or Theodore Roosevelt were reconstructive presidents rather than articulating ones.
The Klingberg foreign-policy cycle
Historian Frank J. Klingberg described what he called "the historical alternation of moods in American foreign policy," an alternation between "extroversion", willingness to confront other nations and to expand American influence and territory, and "introversion", unwillingness to do so. He examined presidents' speeches, party platforms, naval expenditures, wars, and annexations, identifying in 1952 seven alternations since 1776. He and others have extended this work into more recent years, finding more alternations.
Arthur Schlesinger, Jr. concluded that this cycle is not synchronized with the liberal-conservative cycle, and for that reason, he concluded that these two cycles have separate causes.
Criticism
Sean Trende, senior elections analyst at RealClearPolitics, argues in his 2012 book The Lost Majority against realignment theory and against the "emerging Democratic majority" thesis proposed by journalist John Judis and political scientist Ruy Teixeira, stating: "Almost none of the theories propounded by realignment theorists has endured the test of time... It turns out that finding a 'realigning' election is a lot like finding an image of Jesus in a grilled-cheese sandwich – if you stare long enough and hard enough, you will eventually find what you are looking for." In August 2013, Trende observed that U.S. presidential election results from 1880 through 2012 form a 0.96 correlation with the expected sets of outcomes (i.e. events) in the binomial distribution of a fair coin flip experiment. In May 2015, statistician and FiveThirtyEight editor-in-chief Nate Silver argued against a blue wall Electoral College advantage for the Democratic Party in the 2016 U.S. presidential election. In post-election analysis, Silver cited Trende in noting that "there are few if any permanent majorities", and both Silver and Trende argued that the "emerging Democratic majority" thesis led most news coverage and commentary preceding the election to overstate Hillary Clinton's chances of being elected.
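The coin-flip comparison Trende describes can be illustrated with a short simulation. The sketch below is a hypothetical reconstruction rather than Trende's actual tabulation: it assumes the comparison is between the observed frequency of winning streaks of each length in the 1880–2012 party-win sequence and the frequency expected if each election were an independent fair coin flip, and the resulting correlation need not match the reported 0.96.

```python
# Hypothetical sketch of the kind of comparison described above, NOT Trende's
# actual method: treat each presidential election 1880-2012 as one trial,
# count how often each winning-streak length occurs, and correlate the observed
# counts with the counts expected if party wins were independent fair coin flips.
import numpy as np

# Winning party in each U.S. presidential election, 1880-2012 (R/D), 34 elections.
wins = "RDRDRRRRDDRRRDDDDDRRDDRRDRRRDDRRDD"

def streak_counts(seq, max_len=10):
    """Count completed runs of consecutive same-party wins, grouped by run length."""
    counts = np.zeros(max_len)
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            counts[min(run, max_len) - 1] += 1
            run = 1
    counts[min(run, max_len) - 1] += 1  # close out the final run
    return counts

observed = streak_counts(wins)

# Expected streak counts under a fair-coin model, estimated by simulation.
rng = np.random.default_rng(0)
expected = np.mean(
    [streak_counts(rng.choice(list("RD"), size=len(wins))) for _ in range(20000)],
    axis=0,
)

r = np.corrcoef(observed, expected)[0, 1]
print(f"Correlation between observed and coin-flip streak counts: {r:.2f}")
```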
See also
Cycle of violence
Determinism
Deterministic system
Social cycle theory
Strauss–Howe generational theory
References
Bundled references
Further reading
Cyclical theories
Historiography of the United States
Jacksonian democracy
Human condition
The human condition can be defined as the characteristics and key events of human life, including birth, learning, emotion, aspiration, reason, morality, conflict, and death. This is a very broad topic that has been and continues to be pondered and analyzed from many perspectives, including those of art, biology, literature, philosophy, psychology, and religion.
As a literary term, "human condition" is typically used in the context of ambiguous subjects, such as the meaning of life or moral concerns.
Some perspectives
Each major religion has definitive beliefs regarding the human condition. For example, Buddhism teaches that existence is a perpetual cycle of suffering, death, and rebirth from which humans can be liberated via the Noble Eightfold Path. Meanwhile, many Christians believe that humans are born in a sinful condition and are doomed in the afterlife unless they receive salvation through Jesus Christ.
Philosophers have provided many perspectives. An influential ancient view was that of Plato's Republic, in which Plato explored the question "What is justice?" and postulated that justice is not primarily a matter among individuals but of society as a whole, prompting him to devise a utopia. Two thousand years later, René Descartes declared "I think, therefore I am", because he believed the human mind, particularly its faculty of reason, to be the primary determiner of truth; for this he is often credited as the father of modern philosophy. One such modern school, existentialism, attempts to reconcile an individual's sense of disorientation and confusion in a universe believed to be absurd.
Many works of literature provide a perspective on the human condition. One famous example is Shakespeare's monologue "All the world's a stage" which pensively summarizes seven phases of human life.
Psychology has many theories, including Maslow's hierarchy of needs and the notions of identity crisis and terror management. It also has various methods, e.g. the logotherapy developed by Holocaust survivor Viktor Frankl to discover and affirm a sense of meaning. Another method, cognitive behavioral therapy, has become a widespread treatment for clinical depression.
Charles Darwin established the biological theory of evolution, which posits that the human species is related to all others, living and extinct, and that natural selection is the primary survival factor. This led to subsequent beliefs, such as social Darwinism, which eventually lost its connection to natural selection, and theistic evolution, the belief in a creator deity acting through laws of nature, including evolution.
See also
Human nature
Know thyself
References
Concepts in philosophical anthropology
Concepts in social philosophy
Concepts in the philosophy of mind
Existentialist concepts
Humans
Personal life
Philosophy of life
Psychological concepts
Structural inequality
Structural inequality occurs when the fabric of organizations, institutions, governments or social networks contains an embedded cultural, linguistic, economic, religious or belief-based, physical or identity-based bias which provides advantages for some members and marginalizes or produces disadvantages for other members. This can involve personal agency, freedom of expression, property rights, freedom of association, religious freedom, and social status, as well as unequal access to health care, housing, education, financial resources and other social opportunities, or disadvantage based on physical, cultural, social, religious or political identity. Structural inequality is believed to be an embedded part of all known cultural groups. The global history of slavery, serfdom, indentured servitude and other forms of coerced, culturally or governmentally mandated labour or economic exploitation that marginalize individuals, and the subsequent suppression of human rights (see the Universal Declaration of Human Rights), are key factors defining structural inequality.
Structural inequality can be encouraged and maintained in society through structured institutions such as state governments and other cultural institutions, like government-run school systems, with the goal of maintaining the existing governance and tax structure regardless of the wealth, employment opportunities, and social standing of different identity groups, for example by keeping minority students from high academic achievement in high school and college as well as in the workforce of the country. In the attempt to equalize the allocation of state funding, policymakers evaluate the elements of disparity to determine an equalization of funding throughout school districts.(14)
Formal equality of opportunity disregards the collective dimensions of inequality, which are addressed by substantive equality, with equality of outcomes for each group. Combating structural inequality therefore often requires broad, policy-based structural change on the part of government organizations, and is often a critical component of poverty reduction. In many ways, a well-organized democratic government that can effectively combine moderate growth with redistributive policies stands the best chance of combating structural inequality.
Education
Education is the base for equality. Specifically in the structuring of schools, the concept of tracking is believed by some scholars to create a social disparity in providing students an equal education. Schools have been found to have a unique acculturative process that helps to pattern self-perceptions and world views. Schools not only provide education but also a setting for students to develop into adults, form future social status and roles, and maintain social and organizational structures of society. Tracking is an educational term that indicates where students will be placed during their secondary school years.[3] "Depending on how early students are separated into these tracks, determines the difficulty in changing from one track to another" (Grob, 2003, p. 202).
Tracking or sorting categorizes students into different groups based on standardized test scores. These groups or tracks are vocational, general, and academic. Students are sorted into groups that will determine educational and vocational outcomes for the future. The sorting that occurs in the educational system parallels the hierarchical social and economic structures in society. Thus, students are viewed and treated differently according to their individual track. Each track has a designed curriculum that is meant to fit the unique educational and social needs of each sorted group. Consequently, the information taught as well as the expectations of the teachers differ based on the track resulting in the creation of dissimilar classroom cultures.
Spatial/regional
Globally, the issue of spatial inequality is largely a result of disparities between urban and rural areas. A study commissioned by the United Nations University WIDER project has shown that for the twenty-six countries included in the study, spatial inequalities have been high and on the increase, especially for developing nations. Many of these inequalities were traced back to “second nature” geographic forces that describe the infrastructure a society has in place for facilitating the trade of goods and employment between economic agents. Another dominant and related factor is the ease of access to bodies of water and forms of long-distance trade like ports. The discrepancies between the growth of communities close to these bodies of water and those further away have been noted in cases between and within countries. In the United States and many other developed countries, spatial inequality has developed into more specific forms described by residential segregation and housing discrimination. This has especially come into focus as education and employment are often tied into where a household is located relative to urban centers, and a variety of metrics, from education levels to welfare benefits have been correlated to spatial data.
Consequences
Specifically, studies have identified a number of economic consequences of housing segregation. Perhaps the most obvious is the isolation of minorities, which creates a deficit in the potential for developing human capital. Second, many of the public schools that areas of low socioeconomic status have access to are underperforming, in part due to the limited budgeting the district receives from the limited tax base in the same area. Finally, another large factor is simply the wealth and security homeownership represents. Property values rarely increase in areas where poverty is high in the first place.
Causes
The causes of spatial inequality, however, are more complex. The mid-20th-century phenomenon of large-scale migration of white middle-class families from urban centers gave rise to the term white flight. While the current state of housing discrimination can be partly attributed to this phenomenon, a larger set of institutionalized forms of discrimination has helped to perpetuate the division created since then. These include bias in the banking and real estate industries as well as discriminatory public policies that promote racial segregation. In addition, rising income inequality between blacks and whites since the 1970s has created affluent neighborhoods that tend to be composed of families of a homogeneous racial background within the same income bracket. A similar sorting along racial lines helps to explain how more than 32% of blacks now live in suburbs. However, these new suburbs are often divided along racial lines, and a 1992 survey showed that 82% of blacks preferred to live in a suburb where their race is in the majority. This is further aggravated by practices like racial steering, in which realtors guide home buyers towards neighborhoods based on race.
Transportation
Government policies that have tended to promote spatial inequalities include actions by the Federal Housing Administration (FHA) in the United States in promoting redlining, a practice in which mortgages could be selectively administered while excluding certain urban neighborhoods deemed risky, oftentimes because of race. Practices like this continued to prevent home buyers from getting mortgages in redlined areas until the 1960s, when the FHA discontinued the determination of restrictions based on racial composition.
The advent of freeways also added a complex layer of incentives and barriers which helped to increase spatial inequalities. First, these new networks allowed middle-class families to move out to the suburbs while retaining connections, like employment, to the urban center. Second, and perhaps more importantly, freeways were routed through minority neighborhoods, oftentimes creating barriers between these neighborhoods and central business districts and middle-class areas. Highway plans often avoided a more direct route through upper- or middle-class neighborhoods because minorities did not have sufficient power to prevent such actions from happening.
Solutions
Douglas Steven Massey identifies three goals for the United States specifically to end residential segregation: reorganize the structure of metropolitan government, make greater investments in education, and open the housing market to full participation. More specifically, he advocates broader, metropolitan-wide units of taxation and governance in which the tax base and decisions are shared equally by the urban and suburban population. Education is the key to closing employment inequalities in a post-manufacturing era. And finally, the federal government must take large strides toward enforcing the anti-segregation measures related to housing it has already put into place, like the Fair Housing Act, the Home Mortgage Disclosure Act, and the Community Reinvestment Act.
Another set of divisions that may be useful in framing policy solutions comprises three categories: place-based policies, people-based policies, and indirect approaches. Place-based policies include improving community facilities and services like schools and public safety in inner-city areas in an effort to appeal to middle-class families. These programs must be balanced against concerns of gentrification. People-based policies help increase access to credit for low-income families looking to move, and this sort of policy has been typified by the Community Reinvestment Act and its many revisions throughout its legislative history. Finally, indirect approaches often involve providing better transportation options to low-income areas, like public transit routes or subsidized car ownership. These approaches target the consequences rather than the causes of segregation, and rely on the assumption that one of the most harmful effects of spatial inequality is the lack of access to employment opportunities. A common feature of all of these is investment in the capital and infrastructure of inner-city areas and neighborhoods.
Healthcare
The quality of healthcare that a patient receives strongly depends upon its accessibility. Kelley et al. define access to healthcare as “the timely use of personal health services to achieve the best health outcomes”. Health disparities, which are largely caused by unequal access to healthcare, can be defined as “a difference in which disadvantaged social groups such as the poor, racial/ethnic minorities, women and other groups who have persistently experienced social disadvantage or discrimination systematically experience worse health or greater health risks than most advantaged social groups.” Manifestations of inequality in healthcare appear throughout the world and are a topic of urgency in the United States. In fact, studies have shown that income-related inequality in healthcare expenditures favors the wealthy to a greater degree in the United States than most other Western nations. The enormous costs of healthcare, coupled with the vast number of Americans lacking health insurance, indicate the severe inequality and serious problems that exist. The healthcare system in the United States perpetuates inequality by “rationing health care according to a person’s ability to pay, by providing inadequate and inferior health care to poor people and persons of color, and by failing to establish structures that can meet the health needs of Americans”.
Racial
Racial disparity in access to and quality of healthcare is a serious problem in the United States and is reflected by evidence such as the fact that African American life expectancies lag behind those of whites by over 5 years, and African Americans tend to experience more chronic conditions. African Americans have a 30% higher death rate from cardiovascular disease and experience 50% more diabetic complications than their white counterparts. The Agency for Healthcare Research and Quality (AHRQ), directed by Congress, led an effort for the development of two annual reports by the Department of Health and Human Services (DHHS), the National Healthcare Quality Report and the National Healthcare Disparities Report, which tracked disparities in healthcare in relation to racial and socioeconomic factors. These reports developed about 140 measures of quality of care and about 100 measures of access to care, which were used to measure the healthcare disparities. The first reports, released in December 2003, found that blacks and Hispanics experienced poorer healthcare quality for about half of the quality measures reported in the NHQR and NHDR. Also, Hispanics and Asians experienced poorer access to care for about two thirds of the healthcare access measures. Recent studies on Medicare patients show that black patients receive poorer medical care than their white counterparts. Compared with white patients, blacks receive far fewer operations, tests, medications and other treatments, suffering greater illnesses and more deaths as a result. Measures done by the AHRQ show that "fewer than 20% of disparities faced by Blacks, AI/ANs and Hispanics showed evidence of narrowing."
One specific study showed that African Americans are less likely than whites to be referred for cardiac catheterization and bypass grafting, prescription of analgesia for pain control, and surgical treatment of lung cancer. Both African Americans and Latinos also receive less pain medication than whites for long bone fractures and cancer. Other studies showed that African Americans are reported to receive fewer pediatric prescriptions, poorer quality of hospital care, fewer hospital admissions for chest pain, lower quality of prenatal care, and less appropriate management of congestive heart failure and pneumonia.
Language barriers have become a large factor in the process of seeking healthcare due to the growth of minority populations across the United States. In 2007, a Census Bureau estimate stated that 33.6% of the United States population belonged to racial or ethnic groups other than non-Hispanic whites. At that time, 20% of people in the United States spoke a language other than English at home. A language barrier creates many hurdles in healthcare: difficulty communicating with health professionals, problems sourcing and funding language assistance, and little to no access to translators. A projection for 2050 showed that over 50% of the United States population would belong to a racial category other than non-Hispanic white, demonstrating the rapid increase of minority populations within the United States and the growing importance of addressing language access.
Gender
In addition to race, healthcare inequality also manifests across gender lines. Though women tend to live longer than men, they tend to report poorer health status and more disabilities as they age, and tend to be higher utilizers of the healthcare system. Healthcare disparities often put women at a disadvantage: time spent seeking care must be scheduled around work (whether formal or informal) and child care needs, and geography increases the travel time necessary for those who do not live near healthcare facilities. Furthermore, "poor women and their children tend to have inadequate housing, poor nutrition, poor sanitation, and high rates of physical, emotional, and sexual abuse." Since women and children constitute 80% of the poor in the United States, they are particularly susceptible to experiencing the negative impact of healthcare inequality.
Spatial
Spatial inequalities in distribution and geographic location also affect access to and quality of healthcare. A study done by Rowland, Lyons, and Edwards (1988) found that rural patients were more likely to be poor and uninsured. Because fewer healthcare resources are available in rural areas, these patients received fewer medical services than urban patients. Other studies showed that African Americans and Hispanics are more likely than whites to live in areas that are underserved by healthcare providers, forcing them to wait longer for care in crowded and/or understaffed facilities or to travel longer distances to receive care in other areas. This travel time often poses an obstacle to receiving medical care and often leads patients to delay care. In fact, African Americans and Hispanics are more likely than whites to delay seeking medical care until their condition becomes serious, rather than seeking regular medical care, because travel and wait times are both costly and interfere with other daily activities.
An individual's environment greatly impacts his or her health status. For example, three of the five largest landfills in the United States are situated in communities which are predominantly African American and Latino, contributing to some of the highest pediatric asthma rates in those groups. Impoverished individuals who find themselves unable to leave their neighborhoods consequently are continuously exposed to the same harmful environment, which negatively impacts health.
Economic
Socioeconomic background is another source of inequality in healthcare. Poverty significantly influences the production of disease since poverty increases the likelihood of having poor health in addition to decreasing the ability to afford preventive and routine healthcare. Lack of access to healthcare has a significant negative impact on patients, especially those who are uninsured, since they are less likely to have a regular source of care, such as a primary care physician, and are more likely to delay seeking care until their condition becomes life-threatening. Studies show that people with health insurance receive significantly more care than those who are uninsured, the most vulnerable groups being minorities, young adults, and low-income individuals. The same trend for uninsured versus insured patients holds true for children as well.
Hadley, Steinberg, and Feder (1991) found that hospitalized patients who are not covered by health insurance are less likely to receive high-cost, specialized procedures and, as a result, are more likely to die while hospitalized. Feder, Hadley, and Mullner (1984) noticed that hospitals often ration free care by denying care to those who are unable to pay and cutting services commonly used by the uninsured poor. Minorities are less likely to have health insurance because they are less likely to occupy middle to upper income brackets, and therefore are less able to purchase health insurance, and also because they tend to hold low-paying jobs that do not provide health insurance as part of their job-related benefits. Census data show that 78.7% of whites are covered by private insurance compared with 54% of blacks and 51% of Hispanics. About 29% of Hispanics in the United States have neither private nor government health insurance of any kind.
A study done on Medicare recipients also showed that despite the uniform benefits offered, high-income elderly patients received 60% more physician services and 45% more days of hospital care than lower-income elderly patients not covered by Medicaid. After adjustment for health status, people with higher incomes are shown to have higher expenditures, indicating that the wealthy are strongly favored in income-related inequality in medical care. However, this inequality differs across age groups. Inequality was shown to be greatest for senior citizens, then adults, and least for children. This pattern showed that financial resources and other associated attributes, such as educational attainment, were very influential in access and utilization of medical care.
Solutions
The acknowledgement that access to health services differed depending on race, geographic location, and socioeconomic background was an impetus in establishing health policies to benefit these vulnerable groups. In 1965, specific programs, such as Medicare and Medicaid, were implemented in the United States in an attempt to extend health insurance to a greater portion of the population. Medicare is a federally funded program that provides health insurance for people aged 65 or older, people younger than 65 with certain disabilities, and people of any age who have End-Stage Renal Disease (ESRD). Medicaid, on the other hand, provides health coverage to certain low-income people and families and is largely state-governed. However, studies have shown that for-profit hospitals tend to make healthcare less accessible to uninsured patients, in addition to those under Medicaid, in an effort to contain costs. Another program, the State Children's Health Insurance Program (SCHIP), provides low-cost health insurance to children in families who do not qualify for Medicaid but cannot afford private health insurance on their own.
The necessity of achieving equity in quality of and access to healthcare is glaring and urgent. According to Fein (1972), this goal could include equal health outcomes for all by income group, equal expenditures per capita across income groups, or eliminating income as a healthcare rationing device. Some have proposed that a national health insurance plan with comprehensive benefits and no deductibles or other costs from the patients would provide the most equity. Fein also stressed that healthcare reform was needed, specifically the elimination of payment for treating patients that depended on patient income or on the quantity of services given. He proposed instead paying physicians on a salaried basis.
Another study, by Reynolds (1976), found that community health centers improved access to health care for many vulnerable groups, including youth, blacks, and people with serious diseases. The study indicated that community health centers provided more preventive care and greater continuity of care, though there were problems in obtaining adequate funding as well as adequate staffing. Engaging the community to understand the link between social issues such as employment, education, and poverty can help motivate community members to advocate for policies that improve health status.
Increasing the racial and ethnic diversity of healthcare providers can also serve as a potential solution. Racial and ethnic minority healthcare providers are much more likely than their white counterparts to serve minority communities, which can have many positive effects. Advocating for an increase in minority healthcare providers can help improve the quality of patient-physician communication as well as reduce the crowding in understaffed facilities in areas in which minorities reside. This can help decrease wait times as well as increase the likelihood that such patients will seek out nearby healthcare facilities rather than traveling farther distances as a last resort.
Implementing efforts to increase translation services can also improve quality of healthcare. This means increased availability of bilingual and bicultural healthcare providers for non-English speakers. Studies show that non-English speaking patients self-reported better physical functioning, psychological well-being, health perceptions, and lower pain when receiving treatment from a physician who spoke their language. Hispanic patients specifically reported increased compliance to treatment plans when their physician spoke Spanish and also shared a similar background. Training programs to improve and broaden physicians' communication skills can increase patient satisfaction, patient compliance, patient participation in treatment decisions, and utilization of preventive care services.
The idea of universal health care, which is implemented in many other countries, has been a subject of heated debate in the United States.
Employment
Employment is a key source of income for a majority of the world's population, and therefore is the most direct method through which people can escape poverty. However, unequal access to decent work and persistent labor market inequalities frustrate efforts to reduce poverty. Studies have further divided employment segregation into two categories: first generation and second generation discrimination. First generation discrimination occurs as an overt bias displayed by employers, and since the end of the civil rights era has been on the decline. Second generation discrimination, on the other hand, is less direct and therefore much harder to legislate against. This helps explain the disparity between female hiring rates and male/female ratios, which have gone up recently, and the relative scarcity of women in upper-level management positions. Therefore, while extensive legislation has been passed regarding employment discrimination, informal barriers still exist in the workplace. For instance, gender discrimination often takes the form of working hours and childcare-related benefits. In many cases, female professionals who must take maternity leave or single mothers who must care for their children often are at a disadvantage when it comes to promotions and advancement.
Education level
Employment discrimination is also closely linked to education and skills. One of the most important factors behind employment disparities is that, for much of the post-WWII era, many Western countries shed manufacturing jobs that had provided relatively high wages to people with moderate to low job skills. Starting from the 1960s, the United States began a shift away from low-wage jobs, especially in the manufacturing sector, towards technology-based or service-based employment. This had the unbalanced effect of decreasing employment opportunities for the least educated in the labor force while at the same time increasing the productivity, and therefore the wages, of the skilled labor force, increasing the level of inequality. In addition, globalization has tended to compound this decrease in demand for domestic unskilled labor.
Finally, weak labor market policies since the 1970s and 1980s have failed to address the income inequalities faced by those employed at lower income levels. Namely, the union movement began to shrink, decreasing employees' power to negotiate employment terms, and the minimum wage was prevented from increasing alongside inflation.
Racial
Other barriers arise in occupations that require an extensive network for developing clientele, such as law, medicine, and sales. Studies have shown that for blacks and whites in the same occupation, whites can often benefit from a wealthier pool of clients and connections. In addition, studies show that only a small percentage of low-skilled employees are hired through advertisements or cold calls, highlighting the importance of social connections with middle- and upper-class employers. Furthermore, racially disparate employment consequences can arise from racial patterns in other social processes and institutions, such as criminal justice contact (often with spillover effects on local communities of color). At the county level, for example, jail incarceration has been found to significantly diminish local labor markets in areas with relatively high proportions of Black residents.
Gender
Though women have become an increasing presence in the workforce, there currently exists a gender gap in earnings. Statistics show that women who work full-time year-round earn 75% of the income of their male counterparts. Part of the gender gap in employment earnings is due to women concentrating in different occupational fields than men, which is known as occupational segregation. The 1990 Census data show that more than 50% of women would have to change jobs before women would be distributed in the same way as men within the job market, achieving complete gender integration. This can be attributed to the tendency of women to choose degrees that funnel into jobs that are less lucrative than those chosen by men.
Other studies have shown that the Hay system, which evaluates jobs, undervalues the occupations that tend to be filled by women, which continues to bias wages against women's work. Once a certain job becomes associated with women, its social value decreases. Almost all studies show that the percentage of women is correlated with lower earnings for both males and females even in fields that required significant job skills, which suggests a strong effect of gender composition on earnings.
Additionally, women tend to be hired into less desirable jobs than men and are denied access to more skilled jobs or jobs that place them in an authoritative role. In general, women tend to hold fewer positions of power when compared to men. A study done by Reskin and Ross (1982) showed that when tenure and productivity-related measurements were controlled, women had less authority and earned less than men of equal standing in their occupation. Exclusionary practices provide the most valuable job openings and career opportunities for members of groups of higher status which, in the United States, mostly means Caucasian males. Therefore, males are afforded more advantages than females and perpetuate this cycle while they still hold more social power, allocating lower-skilled and lower-paying jobs to females and minorities.
Inequality in investment of skills
Another factor of the gender earnings gap is reflected in the difference in job skills between women and men. Studies suggest that women invest less in their own occupational training because they stay in the workforce for a shorter period of time than men (because of marriage or rearing children) and therefore have a shorter time span to benefit from their extra efforts. However, there is also discrimination by the employer. Studies have shown that the earnings gap is also due to employers investing less money in training female employees, which leads to a gender disparity in accessing career development opportunities.
Prescribed gender roles
Women tend to stay in the workforce for less time than men due to marriage or the time devoted to raising children. Consequently, men are typically viewed as the “breadwinners” of the family, which is reflected in the employee benefits provided in careers that are traditionally occupied by males. A study done by Heidi M. Berggren, assessing the employee benefits provided to nurses (a traditional female career) and automobile mechanics and repairmen (a traditional male career), found that the latter provided more significant benefits such as health insurance and other medical emergency benefits whereas the former provided more access to sick leave with full pay. This outlines the roles allotted to women as the caregivers and the men as the providers of the family which subsequently encourages men to seek gainful employment while encouraging women to have a larger role at home than in the workplace. Many parental leave policies in the US are poorly developed and reinforce the roles of men as the breadwinner and women as the caregiver.
Glass ceiling
Women have often described subtle gender barriers in career advancement, known as the glass ceiling. This refers to the limited mobility of women in the workforce due to social restrictions that limit their opportunities and affect their career decisions.
Solutions
A study done by Doorne-Huiskes, den Dulk, and Schippers (1999) showed that in countries with government policy addressing the balance between work and family life, women have high participation in the work force and there is a smaller gender wage gap, indicating that such policy could encourage mothers to stay in their occupations while also encouraging men to take on a greater child-rearing role. Such measures include mandating that employers provide paid parental leave for employees so that both parents can care for children without risk to their careers. Another suggested measure is government-provided day care for children aged 0–6 or financial support for employees to pay for their own child-care.
In 1978, the Pregnancy Discrimination Act was passed, amending Title VII of the Civil Rights Act of 1964. This act designated discrimination based on pregnancy, childbirth, or associated medical issues as illegal gender discrimination. The Family and Medical Leave Act, passed in 1993, required employers to give up to twelve weeks of unpaid leave for the birth or adoption of a child or to provide care for immediate family members who are ill. These two acts helped publicize the important role women play in caring for family members and gave women more opportunities to retain jobs that they would have previously lost. However, the Family and Medical Leave Act of 1993 is limited in that only 60% of all employees in the U.S. are eligible for this leave since many small businesses are exempt from such coverage.
The fact that parental leave measures continue to enforce traditional division of labor between the genders indicates a need to reduce the stigma of male parenting as well as the stigma of parenthood on female employment opportunities. Some possible developments to improve parental leave include: offering job protection, full benefits, and substantial pay as a part of parental leave to heighten the social value of both parents caring for children, making parental leave more flexible so that both parents can take time off, reducing the negative impact of parental leave on job standing, and encouraging fathers to care for children by providing educational programs regarding pre-natal and post-natal care.
References
Sociological terminology
Social inequality
Are
Are commonly refers to:
Are (unit), a unit of area equal to 100 m2
Are, ARE or Åre may also refer to:
Places
Åre, a locality in Sweden
Åre Municipality, a municipality in Sweden
Åre ski resort in Sweden
Are Parish, a municipality in Pärnu County, Estonia
Are, Estonia, a small borough in Are Parish
Are-Gymnasium, a secondary school in Bad Neuenahr-Ahrweiler
Are, Saare County, a village in Pöide Parish, Saare County, Estonia
Arab Republic of Egypt
United Arab Emirates (ISO 3166-1 alpha-3 country code ARE)
Science, technology, and mathematics
Are (moth), a genus of moth
Aircraft Reactor Experiment, a US military program in the 1950s
Algebraic Riccati equation, in control theory
Asymptotic relative efficiency, in statistics
AU-rich element, in genetics
Organisations
Admiralty Research Establishment, a precursor to the UK's Defence Research Agency
Association for Research and Enlightenment, an organization devoted to American claimed psychic Edgar Cayce
Associate of the Royal Society of Painter-Printmakers, in the UK
AIRES, a Colombian airline (ICAO code ARE)
Other uses
are, a form of the English verb "to be"
Are, note name, see Guidonian hand
Are (surname), a surname recorded in Chinese history
Dirk van Are, bishop and lord of Utrecht in the 13th century
Are languages, a subgroup of the Are-Taupota languages
Are language, a language from Papua New Guinea
A.R.E. Weapons, a band from New York City, formed in 1999
Architect Registration Examination, a professional licensure examination in the US
See also
Ar (disambiguation)
ARR (disambiguation)
Arre (disambiguation)
R (disambiguation)
Pleiotropy
Pleiotropy (from Greek pleíōn, 'more', and trópos, 'way') occurs when one gene influences two or more seemingly unrelated phenotypic traits. A gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, because the gene codes for a product used by a myriad of cell types or by different targets that share the same signaling function.
Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and gender).
An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, and it affects multiple systems, such as the nervous and integumentary systems.
Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
History
Pleiotropic traits had been previously recognized in the scientific community but had not been experimented on until Gregor Mendel's 1866 pea plant experiment. Mendel recognized that certain pea plant traits (seed coat color, flower color, and axial spots) seemed to be inherited together; however, their correlation to a single gene has never been proven. The term "pleiotropie" was first coined by Ludwig Plate in his Festschrift, which was published in 1910. He originally defined pleiotropy as occurring when "several characteristics are dependent upon ... [inheritance]; these characteristics will then always appear together and may thus appear correlated". This definition is still used today.
After Plate's definition, Hans Gruneberg was the first to study the mechanisms of pleiotropy. In 1938 Gruneberg published an article dividing pleiotropy into two distinct types: "genuine" and "spurious" pleiotropy. "Genuine" pleiotropy is when two distinct primary products arise from one locus. "Spurious" pleiotropy, on the other hand, is either when one primary product is utilized in different ways or when one primary product initiates a cascade of events with different phenotypic consequences. Gruneberg came to these distinctions after experimenting on rats with skeletal mutations. He recognized that "spurious" pleiotropy was present in the mutation, while "genuine" pleiotropy was not, thus partially invalidating his own original theory. Through subsequent research, it has been established that Gruneberg's definition of "spurious" pleiotropy is what we now identify simply as "pleiotropy".
In 1941 American geneticists George Beadle and Edward Tatum further invalidated Gruneberg's definition of "genuine" pleiotropy, advocating instead for the "one gene-one enzyme" hypothesis that was originally introduced by French biologist Lucien Cuénot in 1903. This hypothesis shifted future research regarding pleiotropy towards how a single gene can produce various phenotypes.
In the mid-1950s Richard Goldschmidt and Ernst Hadorn, through separate individual research, reinforced the faultiness of "genuine" pleiotropy. A few years later, Hadorn partitioned pleiotropy into a "mosaic" model (which states that one locus directly affects two phenotypic traits) and a "relational" model (which is analogous to "spurious" pleiotropy). These terms are no longer in use but have contributed to the current understanding of pleiotropy.
By accepting the one gene-one enzyme hypothesis, scientists instead focused on how uncoupled phenotypic traits can be affected by genetic recombination and mutations, applying it to populations and evolution. This view of pleiotropy, "universal pleiotropy", defined as locus mutations being capable of affecting essentially all traits, was first implied by Ronald Fisher's Geometric Model in 1930. This mathematical model illustrates how evolutionary fitness depends on the independence of phenotypic variation from random changes (that is, mutations). It theorizes that an increasing phenotypic independence corresponds to a decrease in the likelihood that a given mutation will result in an increase in fitness. Expanding on Fisher's work, Sewall Wright provided more evidence in his 1968 book Evolution and the Genetics of Populations: Genetic and Biometric Foundations by using molecular genetics to support the idea of "universal pleiotropy". The concepts of these various studies on evolution have seeded numerous other research projects relating to individual fitness.
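As a brief illustrative aside (a commonly cited textbook approximation, not a formula given in the text above): in Fisher's geometric model, the probability that a random mutation of magnitude $r$ improves fitness is, when the number of traits $n$ is large, often approximated as

$$P(\text{beneficial}) \;\approx\; 1 - \Phi\!\left(\frac{r\sqrt{n}}{2d}\right),$$

where $d$ is the distance of the current phenotype from the optimum and $\Phi$ is the standard normal cumulative distribution function. Read this way, mutations that affect more traits (larger $n$) or have larger effects (larger $r$) are less likely to be beneficial, which is the sense in which universal pleiotropy constrains adaptation.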
In 1957 evolutionary biologist George C. Williams theorized that antagonistic effects will be exhibited during an organism's life cycle if genes are closely linked and pleiotropic. Natural selection favors genes that are more beneficial prior to reproduction than after (leading to an increase in reproductive success). Knowing this, Williams argued that if only close linkage were present, then beneficial traits would occur both before and after reproduction due to natural selection. This, however, is not observed in nature, and thus antagonistic pleiotropy contributes to the slow deterioration with age (senescence).
Mechanism
Pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. The underlying mechanism is genes that code for a product that is either used by various cells or has a cascade-like signaling function that affects various targets.
Polygenic traits
Most genetic traits are polygenic in nature: controlled by many genetic variants, each of small effect. These genetic variants can reside in protein coding or non-coding regions of the genome. In this context pleiotropy refers to the influence that a specific genetic variant, e.g., a single nucleotide polymorphism or SNP, has on two or more distinct traits.
Genome-wide association studies (GWAS) and machine learning analysis of large genomic datasets have led to the construction of SNP based polygenic predictors for human traits such as height, bone density, and many disease risks. Similar predictors exist for plant and animal species and are used in agricultural breeding.
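As a minimal sketch of how such an SNP-based predictor is applied (the effect sizes and genotypes below are hypothetical values invented for illustration, not taken from any study mentioned above), a polygenic score is typically computed as a weighted sum of effect-allele counts, with the weights taken from GWAS estimates:

```python
import numpy as np

# Hypothetical per-SNP effect sizes (betas) estimated by a GWAS.
effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])

# Genotypes coded as 0, 1, or 2 copies of the effect allele:
# one row per individual, one column per SNP.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 0, 2],
])

# Polygenic score: weighted sum of allele counts for each individual.
scores = genotypes @ effect_sizes
print(scores)  # [0.63 0.54 0.23]
```

In this framing, a variant is pleiotropic when it carries a nonzero weight in the predictors for two or more distinct traits.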
One measure of pleiotropy is the fraction of genetic variance that is common between two distinct complex human traits: e.g., height vs bone density, breast cancer vs heart attack risk, or diabetes vs hypothyroidism risk. This has been calculated for hundreds of pairs of traits. In most cases examined, the genomic regions controlling each trait are largely disjoint, with only modest overlap.
Thus, at least for complex human traits so far examined, pleiotropy is limited in extent.
Models for the origin
One basic model of pleiotropy's origin describes a single gene locus that affects the expression of a certain trait. The locus affects the expressed trait only by changing the expression of other loci. Over time, that locus would come to affect two traits by interacting with a second locus. Directional selection for both traits during the same time period would increase the positive correlation between the traits, while selection on only one trait would decrease the positive correlation between the two traits. Eventually, traits that underwent directional selection simultaneously became linked by a single gene, resulting in pleiotropy.
The "pleiotropy-barrier" model proposes a logistic growth pattern for the increase of pleiotropy over time. This model differentiates between the levels of pleiotropy in evolutionarily younger and older genes subjected to natural selection. It suggests a higher potential for phenotypic innovation in evolutionarily newer genes due to their lower levels of pleiotropy.
Other more complex models compensate for some of the basic model's oversights, such as multiple traits or assumptions about how the loci affect the traits. They also propose the idea that pleiotropy increases the phenotypic variation of both traits since a single mutation on a gene would have twice the effect.
Evolution
Pleiotropy can have an effect on the evolutionary rate of genes and allele frequencies. Traditionally, models of pleiotropy have predicted that the evolutionary rate of genes is negatively related to pleiotropy: as the number of traits of an organism increases, the evolutionary rates of genes in the organism's population decrease. For a long time, this relationship was not clearly demonstrated in empirical studies. However, a study based on human disease genes revealed evidence of lower evolutionary rates in genes with higher pleiotropy.
In mating, for many animals the signals and receptors of sexual communication may have evolved simultaneously as the expression of a single gene, instead of the result of selection on two independent genes, one that affects the signaling trait and one that affects the receptor trait. In such a case, pleiotropy would facilitate mating and survival. However, pleiotropy can act negatively as well. A study on seed beetles found that intralocus sexual conflict arises when selection for certain alleles of a gene that are beneficial for one sex causes expression of potentially harmful traits by the same gene in the other sex, especially if the gene is located on an autosomal chromosome.
Pleiotropic genes act as an arbitrating force in speciation. William R. Rice and Ellen E. Hostert (1993) conclude that the observed prezygotic isolation in their studies is a product of pleiotropy's balancing role in indirect selection. By imitating the traits of all-infertile hybridized species, they noticed that the fertilization of eggs was prevented in all eight of their separate studies, a likely effect of pleiotropic genes on speciation. Likewise, stabilizing selection on pleiotropic genes allows allele frequencies to be altered.
Studies on fungal evolutionary genomics have shown pleiotropic traits that simultaneously affect adaptation and reproductive isolation, converting adaptations directly into speciation. A particularly telling case of this effect is host specificity in pathogenic ascomycetes, specifically in Venturia, the fungus responsible for apple scab. These parasitic fungi each adapt to a host and are only able to mate within a shared host after obtaining resources. Since a single toxin gene or virulence allele can grant the ability to colonize the host, adaptation and reproductive isolation are instantly facilitated, which in turn pleiotropically causes adaptive speciation. Studies on fungal evolutionary genomics will further elucidate the earliest stages of divergence as a result of gene flow, and provide insight into pleiotropically induced adaptive divergence in other eukaryotes.
Antagonistic pleiotropy
Sometimes, a pleiotropic gene may be both harmful and beneficial to an organism, which is referred to as antagonistic pleiotropy. This may occur when the trait is beneficial for the organism's early life, but not its late life. Such "trade-offs" are possible since natural selection affects traits expressed earlier in life, when most organisms are most fertile, more than traits expressed later in life.
This idea is central to the antagonistic pleiotropy hypothesis, which was first developed by G.C. Williams in 1957. Williams suggested that some genes responsible for increased fitness in the younger, fertile organism contribute to decreased fitness later in life, which may give an evolutionary explanation for senescence. An example is the p53 gene, which suppresses cancer but also suppresses stem cells, which replenish worn-out tissue.
Unfortunately, the process of antagonistic pleiotropy may result in an altered evolutionary path with delayed adaptation, in addition to effectively cutting the overall benefit of any alleles by roughly half. However, antagonistic pleiotropy also lends greater evolutionary "staying power" to genes controlling beneficial traits, since an organism with a mutation to those genes would have a decreased chance of successfully reproducing, as multiple traits would be affected, potentially for the worse.
Sickle cell anemia is a classic example of the mixed benefit given by the staying power of pleiotropic genes, as the mutation to Hb-S provides the fitness benefit of malaria resistance to heterozygotes as sickle cell trait, while homozygotes have significantly lowered life expectancy—what is known as "heterozygote advantage". Since both of these states are linked to the same mutated gene, large populations today are susceptible to sickle cell despite it being a fitness-impairing genetic disorder.
Examples
Albinism
Albinism can be caused by mutation of the TYR gene, which encodes tyrosinase; this mutation causes the most common form of albinism. The mutation alters the production of melanin, thereby affecting melanin-related and other dependent traits throughout the organism. Melanin is a substance made by the body that is used to absorb light and provides coloration to the skin. Indications of albinism are the absence of color in an organism's eyes, hair, and skin, due to the lack of melanin. Some forms of albinism are also known to have symptoms that manifest themselves through rapid eye movement, light sensitivity, and strabismus.
Autism and schizophrenia
Pleiotropy in genes has been linked between certain psychiatric disorders as well. Deletion in the 22q11.2 region of chromosome 22 has been associated with schizophrenia and autism. Schizophrenia and autism are linked to the same gene deletion but manifest very differently from each other. The resulting phenotype depends on the stage of life at which the individual develops the disorder. Childhood manifestation of the gene deletion is typically associated with autism, while adolescent and later expression of the gene deletion often manifests in schizophrenia or other psychotic disorders. Though the disorders are linked by genetics, there is no increased risk found for adult schizophrenia in patients who experienced autism in childhood.
A 2013 study also genetically linked five psychiatric disorders, including schizophrenia and autism. The link was a single nucleotide polymorphism of two genes involved in calcium channel signaling with neurons. One of these genes, CACNA1C, has been found to influence cognition. It has been associated with autism, as well as linked in studies to schizophrenia and bipolar disorder. These particular studies show clustering of these diseases within patients themselves or families. The estimated heritability of schizophrenia is 70% to 90%, therefore the pleiotropy of genes is crucial since it causes an increased risk for certain psychotic disorders and can aid psychiatric diagnosis.
Phenylketonuria (PKU)
A common example of pleiotropy is the human disease phenylketonuria (PKU). This disease causes mental retardation and reduced hair and skin pigmentation, and can be caused by any of a large number of mutations in the single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, which converts the amino acid phenylalanine to tyrosine. Depending on the mutation involved, this conversion is reduced or ceases entirely. Unconverted phenylalanine builds up in the bloodstream and can lead to levels that are toxic to the developing nervous system of newborn and infant children. The most dangerous form of this is called classic PKU, which is common in infants. The baby seems normal at first but actually incurs permanent intellectual disability. This can cause symptoms such as mental retardation, abnormal gait and posture, and delayed growth. Because tyrosine is used by the body to make melanin (a component of the pigment found in the hair and skin), failure to convert normal levels of phenylalanine to tyrosine can lead to fair hair and skin.
The frequency of this disease varies greatly. Specifically, in the United States, PKU is found at a rate of nearly 1 in 10,000 births. Due to newborn screening, doctors are able to detect PKU in a baby sooner. This allows them to start treatment early, preventing the baby from suffering from the severe effects of PKU. PKU is caused by a mutation in the PAH gene, whose role is to instruct the body on how to make phenylalanine hydroxylase. Phenylalanine hydroxylase is what converts the phenylalanine, taken in through diet, into other things that the body can use. The mutation often decreases the effectiveness or rate at which the hydroxylase breaks down the phenylalanine. This is what causes the phenylalanine to build up in the body.
Sickle cell anemia
Sickle cell anemia is a genetic disease that causes deformed red blood cells with a rigid, crescent shape instead of the normal flexible, round shape. It is caused by a change in one nucleotide, a point mutation in the HBB gene. The HBB gene encodes information to make the beta-globin subunit of hemoglobin, which is the protein red blood cells use to carry oxygen throughout the body. Sickle cell anemia occurs when the HBB gene mutation causes both beta-globin subunits of hemoglobin to change into hemoglobin S (HbS).
Sickle cell anemia is a pleiotropic disease because the expression of a single mutated HBB gene produces numerous consequences throughout the body. The mutated hemoglobin forms polymers and clumps together causing the deoxygenated sickle red blood cells to assume the disfigured sickle shape. As a result, the cells are inflexible and cannot easily flow through blood vessels, increasing the risk of blood clots and possibly depriving vital organs of oxygen. Some complications associated with sickle cell anemia include pain, damaged organs, strokes, high blood pressure, and loss of vision. Sickle red blood cells also have a shortened lifespan and die prematurely.
Marfan syndrome
Marfan syndrome (MFS) is an autosomal dominant disorder which affects 1 in 5,000–10,000 people. MFS arises from a mutation in the FBN1 gene, which encodes the glycoprotein fibrillin-1, a major constituent of the extracellular microfibrils which form connective tissues. Over 1,000 different mutations in FBN1 have been found to result in abnormal function of fibrillin, which consequently causes connective tissues to elongate progressively and weaken. Because these fibers are found in tissues throughout the body, mutations in this gene can have a widespread effect on certain systems, including the skeletal, cardiovascular, and nervous systems, as well as the eyes and lungs.
Without medical intervention, prognosis of Marfan syndrome can range from moderate to life-threatening, with 90% of known causes of death in diagnosed patients relating to cardiovascular complications and congestive cardiac failure. Other characteristics of MFS include an increased arm span and decreased upper to lower body ratio.
"Mini-muscle" allele
A gene recently discovered in laboratory house mice, termed "mini-muscle", causes, when mutated, a 50% reduction in hindlimb muscle mass as its primary effect (the phenotypic effect by which it was originally identified). In addition to smaller hindlimb muscle mass, the mutant mice exhibit lower heart rates during physical activity and higher endurance. Mini-muscle mice also exhibit larger kidneys and livers. All of these morphological deviations influence the behavior and metabolism of the mouse. For example, mice with the mini-muscle mutation were observed to have a higher per-gram aerobic capacity. The mini-muscle allele shows Mendelian recessive behavior. The mutation is a single nucleotide polymorphism (SNP) in an intron of the myosin heavy polypeptide 4 gene.
DNA repair proteins
DNA repair pathways that repair damage to cellular DNA use many different proteins. These proteins often have other functions in addition to DNA repair. In humans, defects in some of these multifunctional proteins can cause widely differing clinical phenotypes. As an example, mutations in the XPB gene that encodes the largest subunit of the basal Transcription factor II H have several pleiotropic effects. XPB mutations are known to be deficient in nucleotide excision repair of DNA and in the quite separate process of gene transcription. In humans, XPB mutations can give rise to the cancer-prone disorder xeroderma pigmentosum or the noncancer-prone multisystem disorder trichothiodystrophy. Another example in humans is the ERCC6 gene, which encodes a protein that mediates DNA repair, transcription, and other cellular processes throughout the body. Mutations in ERCC6 are associated with disorders of the eye (retinal dystrophy), heart (cardiac arrhythmias), and immune system (lymphocyte immunodeficiency).
Chickens
Chickens exhibit various traits affected by pleiotropic genes. Some chickens exhibit frizzle feather trait, where their feathers all curl outward and upward rather than lying flat against the body. Frizzle feather was found to stem from a deletion in the genomic region coding for α-Keratin. This gene seems to pleiotropically lead to other abnormalities like increased metabolism, higher food consumption, accelerated heart rate, and delayed sexual maturity.
Domesticated chickens underwent a rapid selection process that led to unrelated phenotypes having high correlations, suggesting pleiotropic, or at least close linkage, effects between comb mass and physiological structures related to reproductive abilities. Both males and females with larger combs have higher bone density and strength, which allows females to deposit more calcium into eggshells. This linkage is further evidenced by the fact that two of the genes, HAO1 and BMP2, affecting medullary bone (the part of the bone that transfers calcium into developing eggshells) are located at the same locus as the gene affecting comb mass. HAO1 and BMP2 also display pleiotropic effects with commonly desired domestic chicken behavior; those chickens who express higher levels of these two genes in bone tissue produce more eggs and display less egg incubation behavior.
See also
cis-regulatory element
Enhancer (genetics)
Epistasis
Genetic correlation
Metabolic network
Metabolic supermice
Polygene
References
External links
Pleiotropy is 100 years old
Evolutionary developmental biology
Genetics concepts
Reification (fallacy)
Reification (also known as concretism, hypostatization, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete real event or physical entity.
In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory".
Reification is part of normal usage of natural language, as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy.
A potential consequence of reification is exemplified by Goodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.
Etymology
The term "reification" originates from the combination of the Latin terms res ("thing") and -fication, a suffix related to facere ("to make"). Thus reification can be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.
Theory
Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will".
Reification may derive from an innate tendency to simplify experience by assuming constancy as much as possible.
Fallacy of misplaced concreteness
According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness'." Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions.
[...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.
Vicious abstractionism
William James used the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticize Immanuel Kant's and Georg Wilhelm Friedrich Hegel's idealistic philosophies. In The Meaning of Truth, James wrote:
Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ... The viciously privative employment of abstract characters and class names is, I am persuaded, one of the great original sins of the rationalistic mind.
In a chapter on "The Methods and Snares of Psychology" in The Principles of Psychology, James describes a related fallacy, the psychologist's fallacy, thus: "The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the "psychologist's fallacy" par excellence" (volume 1, p. 196). John Dewey followed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".
Use of constructs in science
The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology, utility in economics, and gravitational field in physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena.
The degree to which a construct is useful and accepted as part of the current paradigm in a scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity).
Stephen Jay Gould draws heavily on the fallacy of reification in his book The Mismeasure of Man. He argues that the error in using intelligence quotient scores to judge people's intelligence is that, just because a quantity called "intelligence" or "intelligence quotient" is defined as a measurable thing, it does not follow that intelligence is real; he thus denies the validity of the construct "intelligence."
Relation to other fallacies
Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings. Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.
The animistic fallacy involves attributing personal intention to an event or situation.
Reification fallacy should not be confused with other fallacies of ambiguity:
Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
Composition, when one assumes that a whole has a property solely because its various parts have that property
Division, when one assumes that various parts have a property solely because the whole has that same property
Equivocation, the misleading use of a word with more than one meaning
As a rhetorical device
The rhetorical devices of metaphor and personification express a form of reification, but short of a fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as the pathetic fallacy, "the sea was angry" reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy inhabits faulty reasoning, and not the mere illustration or poetry of rhetoric.
Counterexamples
Reification, while usually fallacious, is sometimes considered a valid argument. Thomas Schelling, a game theorist during the Cold War, argued that for many purposes an abstraction shared between disparate people caused itself to become real. Some examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial index, national borders, preferred numbers, and many others. (Compare the theory of social constructionism.)
See also
All models are wrong
Counterfactual definiteness
Idolatry
Objectification
Philosophical realism
Problem of universals, a debate about the reality of categories
Surrogation
Hypostatic abstraction
References
Informal fallacies
History of South America
The history of South America is the study of the past, particularly the written record, oral histories, and traditions, passed down from generation to generation on the continent of South America. The continent continues to be home to indigenous peoples, some of whom built high civilizations prior to the arrival of Europeans in the late 1400s and early 1500s. South America has a history that has a wide range of human cultures and forms of civilization. The Norte Chico civilization in Peru dating back to about 3500 BCE is the oldest civilization in the Americas and one of the first six independent civilizations in the world; it was contemporaneous with the Egyptian pyramids. It predated the Mesoamerican Olmec by nearly two millennia.
Indigenous peoples' thousands of years of independent life were disrupted by European colonization from Spain and Portugal and by demographic collapse. The resulting civilizations, however, were very different from those of their colonizers, both in the mestizos and the indigenous cultures of the continent. Through the trans-Atlantic slave trade, South America (especially Brazil) became the home of millions of people of the African diaspora. The mixing of ethnic groups led to new social structures.
The tensions between Europeans, indigenous peoples, and African slaves and their descendants shaped South America as a whole, starting in the sixteenth century. Most of Spanish America achieved its independence in the early nineteenth century through hard-fought wars, while Portuguese Brazil first became the seat of the Portuguese empire and then an empire independent of Portugal. With the revolution for independence from the Spanish crown achieved during the 19th century, South America underwent yet more social and political changes. These have included nation building projects, absorbing waves of immigration from Europe in the late 19th and 20th centuries, dealing with increased international trade, colonization of hinterlands, and wars about territory ownership and power balance. This period also saw the reorganization of Indigenous rights and duties and the subjugation of Indigenous peoples living on the states' frontiers, which lasted until the early 1900s, as well as liberal-conservative conflicts among the ruling classes and major demographic and environmental changes accompanying the development of sensitive habitats.
Prehistory
In the Paleozoic and Early Mesozoic eras, South America and Africa were connected in a landmass called Gondwana, as part of the supercontinent Pangaea. In the Albian, around 110 mya, South America and Africa began to diverge along the southern Mid-Atlantic Ridge, leaving South America and Antarctica joined as a single landmass. During the late Eocene, around 35 mya, Antarctica and South America separated and South America became a massive, biologically rich island-continent. For approximately 30 million years, the biodiversity of South America was isolated from the rest of the world, leading to the evolution of species within the continent.
The event that caused the mass extinction of the dinosaurs 66 Mya gave rise to neotropical rainforest biomes such as Amazonia, transforming the species composition and structure of local forests. Over roughly 6 million years of recovery to former levels of plant diversity, these forests evolved from widely spaced, gymnosperm-dominated stands into the forests known today, with thick canopies that block sunlight, a prevalence of flowering plants, and high vertical layering.
Geological evidence suggests that approximately 3 million years ago, South America became connected to North America when the Bolivar Trough marine barrier disappeared and the Panamanian land bridge formed. The joining of these two land masses led to the Great American Interchange, in which biota from both continents expanded their ranges. The first species known to have made the northward migration was Pliometanastes, a fossil ground sloth roughly the size of a modern black bear. Migrations to the Southern Hemisphere were undertaken by several North American mammalian carnivores. Fewer species migrated in the opposite direction from south to north. The result of the expansion of a North American fauna was a mass extinction in which hundreds of species disappeared in a relatively short time. About 60% of present-day South American mammals have evolved from North American species. Some South American species were able to adapt and spread into North America. Apart from Pliometanastes, during the Irvingtonian land mammal stage, around 1.9 mya, species such as Pampatherium, a giant armadillo, the ground sloth Megatherium, the giant anteater Myrmecophaga, a Neogene capybara (Hydrochoerus), Meizonyx, the opossum Didelphis, and Mixotoxodon followed the route north. The terror bird Titanis was the only large South American carnivore to disperse into North America.
Pre-Columbian era
Earliest inhabitants
The Americas are thought to have been first inhabited by people from eastern Asia who crossed the Bering Land Bridge to present-day Alaska; the land bridge was later submerged, and the continents are now divided by the Bering Strait. Over the course of millennia, three waves of migrants spread to all parts of the Americas. Genetic and linguistic evidence has shown that the last wave of migrant peoples settled across the northern tier and did not reach South America.
Amongst the oldest evidence for human presence in South America is the Monte Verde II site in Chile, suggested to date to around 14,500 years ago. From around 13,000 years ago, the Fishtail projectile point style became widespread across South America, with its disappearance around 11,000 years ago coincident with the disappearance of South America's megafauna as part of the Quaternary extinction event.
Agriculture and domestication of animals
The first evidence for the existence of agricultural practices in South America dates back to circa 6500 BCE, when potatoes, chilies and beans began to be cultivated for food in the Amazon Basin. Pottery evidence suggests that manioc, which remains a staple food supply today, was being cultivated as early as 2000 BCE.
South American cultures began domesticating llamas and alpacas in the highlands of the Andes circa 3500 BCE. These animals were used for both transportation and meat; their fur was shorn or collected to use to make clothing. Guinea pigs were also domesticated as a food source at this time.
By 2000 BCE, many agrarian village communities had developed throughout the Andes and the surrounding regions. Fishing became a widespread practice along the coast, with fish being the primary source of food for those communities. Irrigation systems were also developed at this time, which aided in the rise of agrarian societies. The food crops were quinoa, corn, lima beans, common beans, peanuts, manioc, sweet potatoes, potatoes, oca and squashes. Cotton was also grown and was particularly important as the only major fiber crop.
Among the earliest permanent settlements are the Huaca Prieta site on the coast of Peru, dated to 4700 BC, and the Valdivia culture in Ecuador, dated to 3500 BC. Other groups also formed permanent settlements, among them the Muisca (or "Muysca") and the Tairona, located in present-day Colombia. The Cañari of Ecuador, the Quechua of Peru, and the Aymara of Bolivia were the three most important Native peoples who developed societies of sedentary agriculture in South America.
In the last two thousand years, there may have been contact with the Polynesians who sailed to and from the continent across the South Pacific Ocean. The sweet potato, which originated in South America, spread through some areas of the Pacific. There is no genetic legacy of human contact.
Caral-Supe / Norte Chico
On the north-western coast of present-day Peru, the Caral-Supe civilization, also known as the Norte Chico civilization, emerged as one of six civilizations to develop independently in the world. It was roughly contemporaneous with the Egyptian pyramids. It preceded the civilization of Mesoamerica by two millennia. It is believed to have been the only civilization dependent on fishing rather than agriculture to support its population.
The Caral Supe complex is one of the larger Norte Chico sites and has been dated to 27th century BCE. It is noteworthy for having absolutely no signs of warfare. It was contemporary with urbanism's rise in Mesopotamia.
Cañari
The Cañari were the indigenous natives of today's Ecuadorian provinces of Cañar and Azuay at the time of European contact. They were an elaborate civilization with advanced architecture and religious belief. Most of their remains were either burned or destroyed from attacks by the Inca and later the Spaniards. Their old city "Guapondelig", was replaced twice, first by the Incan city of Tomipamba, and later by the colonial city of Cuenca. The city was believed by the Spanish to be the site of El Dorado, the city of gold from the mythology of Colombia.
The Cañari were most notable in having repulsed the Incan invasion with fierce resistance for many years until they fell to Tupac Yupanqui. It is said that the Inca strategically married the Cañari princess Paccha to conquer the people. Many of their descendants still reside in Cañar.
Chibchan Nations
The Chibcha-speaking communities were the most numerous, the most extended by territory, and the most socio-economically developed of the Pre-Hispanic Colombian cultures. They were divided into two linguistic subgroups; the Arwako-Chimila languages, with the Tairona, Kankuamo, Kogi, Arhuaco, Chimila and Chitarero people and the Kuna-Colombian languages with Kuna, Nutabe, Motilon, U'wa, Lache, Guane, Sutagao and Muisca.
Muisca
Of these indigenous groups, the Muisca were the most advanced and formed one of the four grand civilisations in the Americas. With the Inca in Peru, they constituted the two developed and specialised societies of South America. The Muisca, meaning "people" or "person" in their version of the Chibcha language; Muysccubun, inhabited the Altiplano Cundiboyacense, the high plateau in the Eastern Ranges of the Colombian Andes and surrounding valleys, such as the Tenza Valley. Commonly set at 800 AD, their history succeeded the Herrera Period. The people were organised in a loose confederation of rulers, later called the Muisca Confederation. At the time of the Spanish conquest, their reign spread across the modern departments Cundinamarca and Boyacá with small parts of southern Santander with a surface area of approximately and a total population of between 300,000 and two million individuals.
The Muisca were known as "The Salt People", thanks to their extraction of and trade in halite from brines in various salt mines of which those in Zipaquirá and Nemocón are still the most important. This extraction process was the work of the Muisca women exclusively and formed the backbone of their highly regarded trading with other Chibcha-, Arawak- and Cariban-speaking neighboring indigenous groups. Trading was performed using salt, small cotton cloths and larger mantles and ceramics as barter trade. Their economy was agricultural in nature, profiting from the fertile soils of the Pleistocene Lake Humboldt that existed on the Bogotá savanna until around 30,000 years BP. Their crops were cultivated using irrigation and drainage on elevated terraces and mounds. To the Spanish conquistadors they were best known for their advanced gold-working, as represented in the tunjos (votive offer pieces), spread in museum collections all around the world. The famous Muisca raft, centerpiece in the collection of the Museo del Oro in the Colombian capital Bogotá, shows the skilled goldworking of the inhabitants of the Altiplano. The Muisca were the only pre-Columbian civilization known in South America to have used coins (tejuelos).
The gold and tumbaga (a gold-silver-copper alloy worked by the Muisca) gave rise to the legend of El Dorado, the "land, city or man of gold". The Spanish conquistadors who landed in the Caribbean city of Santa Marta were informed of the rich gold culture and, led by Gonzalo Jiménez de Quesada and his brother Hernán Pérez, organised the most strenuous of the Spanish conquests into the heart of the Andes in April 1536. After an expedition of a year, in which 80% of the soldiers died due to the harsh climate, predators such as caimans and jaguars, and frequent attacks by the indigenous peoples along the route, Tisquesusa, the zipa of Bacatá on the Bogotá savanna, was defeated by the Spanish on April 20, 1537, and died "bathing in his own blood", as prophesied by the mohan Popón.
Amazon
For a long time, scholars believed that the Amazon forests were occupied by small numbers of hunter-gatherer tribes. Archeologist Betty J. Meggers was a prominent proponent of this idea, as described in her book Amazonia: Man and Culture in a Counterfeit Paradise. However, recent archeological findings have suggested that the region was densely populated. From the 1970s, numerous geoglyphs dating to between 1 and 1250 AD have been discovered on deforested land. Additional finds have led to the conclusion that there were highly developed and populous cultures in the forests, organized as Pre-Columbian civilizations. The BBC's Unnatural Histories claimed that the Amazon rainforest, rather than being a pristine wilderness, has been shaped by man for at least 11,000 years through practices such as forest gardening. The Upano Valley sites discovered in present-day eastern Ecuador predate all other known complex Amazonian societies.
The first European to travel the length of the Amazon River was Francisco de Orellana in 1542. The BBC documentary Unnatural Histories presents evidence that Francisco de Orellana, rather than exaggerating his claims as previously thought, was correct in his observations that an advanced civilization was flourishing along the Amazon in the 1540s. It is believed that the civilization was later devastated by the spread of infectious diseases from Europe, such as smallpox, to which the natives had no immunity. Some 5 million people may have lived in the Amazon region in 1500, divided between dense coastal settlements, such as that at Marajó, and inland dwellers. By 1900 the population had fallen to 1 million, and by the early 1980s, it was less than 200,000.
Researchers have found that the fertile terra preta (black earth) is distributed over large areas in the Amazon forest. It is now widely accepted that these soils are a product of indigenous soil management. The development of this soil enabled agriculture and silviculture to be conducted in the previously hostile environment. Large portions of the Amazon rainforest are therefore probably the result of centuries of human management, rather than naturally occurring as has previously been supposed. In the region of the Xinguanos tribe, remains of some of these large, mid-forest Amazon settlements were found in 2003 by Michael Heckenberger and colleagues of the University of Florida. Among those remains were evidence of constructed roads, bridges and large plazas.
Andean civilizations
Chavín
The Chavín, a South American preliterate civilization, established a trade network and developed agriculture by 900 BCE, according to some estimates and archeological finds. Artifacts were found at a site called Chavín de Huantar in modern Peru at an elevation of 3,177 meters. Chavín civilization spanned 900 to 200 BCE.
Moche
The Moche thrived on the north coast of Peru between the first and ninth century CE. The heritage of the Moche comes down to us through their elaborate burials, excavated by former UCLA professor Christopher B. Donnan in association with the National Geographic Society.
Skilled artisans, the Moche were a technologically advanced people who traded with faraway peoples, like the Maya. Knowledge about the Moche has been derived mostly from their ceramic pottery, which is carved with representations of their daily lives. They practiced human sacrifice, had blood-drinking rituals, and their religion incorporated non-procreative sexual practices (such as fellatio).
Inca
Holding their capital at the great puma-shaped city of Cuzco, the Inca civilization dominated the Andes region from 1438 to 1533. Known as Tawantinsuyu, or "the land of the four regions", in Quechua, the Inca civilization was highly distinct and developed. Inca rule extended to nearly a hundred linguistic or ethnic communities, some 9 to 14 million people connected by a 25,000-kilometre road system. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture. There is evidence of excellent metalwork and successful skull surgery in Inca civilization. The Inca had no written language, but used the quipu, a system of knotted strings, to record information. Ongoing quipu research suggests that the Inca may have used the quipu as a phonetic system, a form of writing.
Arawak and Carib civilizations
The Arawak lived along the eastern coast of South America, from present-day Guyana to as far south as what is now Brazil. Explorer Christopher Columbus described them at first encounter as a peaceful people, having already dominated other local groups such as the Ciboney. The Arawak had, however, come under increasing military pressure from the Carib, who are believed to have left the Orinoco river area to settle on islands and the coast of the Caribbean Sea. Over the century leading up to Columbus' arrival in the Caribbean archipelago in 1492, the Carib are believed to have displaced many of the Arawak who previously settled the island chains. The Carib also encroached on Arawak territory in what is modern Guyana.
The Carib were skilled boatbuilders and sailors who owed their dominance in the Caribbean basin to their military skills. The Carib war rituals included cannibalism; they had a practice of taking home the limbs of victims as trophies.
It is not known how many indigenous peoples lived in Venezuela and Colombia before the Spanish conquest; it may have been approximately one million, including groups such as the Auaké, Caquetio, Mariche, and Timoto-cuicas. The population fell dramatically after the conquest, mainly due to high mortality in epidemics of infectious Eurasian diseases introduced by the explorers. There were two main north–south axes of pre-Columbian population, producing maize in the west and manioc in the east. Large parts of the llanos plains were cultivated through a combination of slash-and-burn and permanent settled agriculture.
European colonization
Before the arrival of Europeans, 20–30 million people lived in South America.
Between 1452 and 1493, a series of papal bulls (Dum Diversas, Romanus Pontifex, and Inter caetera) paved the way for the European colonization and Catholic missions in the New World. These authorized the European Christian nations to "take possession" of non-Christian lands and encouraged subduing and converting the non-Christian people of Africa and the Americas.
In 1494, Portugal and Spain, the two great maritime powers of that time, signed the Treaty of Tordesillas in the expectation of new lands being discovered in the west. Through the treaty, they agreed that all the land outside Europe should be an exclusive duopoly between the two countries. The treaty established an imaginary line along a north–south meridian 370 leagues west of Cape Verde Islands, roughly 46° 37' W. In terms of the treaty, all land to the west of the line (which is now known to include most of the South American soil), would belong to Spain, and all land to the east, to Portugal. Because accurate measurements of longitude were not possible at that time, the line was not strictly enforced, resulting in a Portuguese expansion of Brazil across the meridian.
In 1498, during his third voyage to the Americas, Christopher Columbus sailed near the Orinoco Delta and then landed in the Gulf of Paria (in what is now Venezuela). Amazed by the great offshore current of freshwater which deflected his course eastward, Columbus stated in his letter to Isabella I and Ferdinand II that he must have reached heaven on Earth (terrestrial paradise).
Beginning in 1499, the people and natural resources of South America were repeatedly exploited by foreign conquistadors, first from Spain and later from Portugal. These competing colonial nations claimed the land and resources as their own and divided it into colonies.
European diseases (smallpox, influenza, measles and typhus), to which the native populations had no resistance, were the overwhelming cause of the depopulation of the Native American population. Systems of forced labor under Spanish control, such as the encomienda and the mining mita, also contributed to depopulation. Lower-bound estimates speak of a decline in the population of around 20–50 percent, whereas the highest estimates reach 90 percent. Following this, enslaved Africans, who had developed immunity to these diseases, were quickly brought in to replace them.
The Spaniards were committed to converting their American subjects to Christianity and were quick to purge any native cultural practices that hindered this end. However, most initial attempts at this were only partially successful; American groups simply blended Catholicism with their traditional beliefs. The Spaniards did not impose their language to the degree they did their religion. In fact, the missionary work of the Roman Catholic Church in Quechua, Nahuatl, and Guarani actually contributed to the expansion of these American languages, equipping them with writing systems.
Eventually, the natives and the Spaniards interbred, forming a Mestizo class. Mestizos and the Native Americans were often forced to pay unfair taxes to the Spanish government (although all subjects paid taxes) and were punished harshly for disobeying their laws. Many native artworks were considered pagan idols and destroyed by Spanish explorers. This included a great number of gold and silver sculptures, which were melted down before transport to Europe.
17th and 18th centuries
In 1616, the Dutch, attracted by the legend of El Dorado, founded a fort in Guayana and established three colonies:
In 1624 France attempted to settle in the area of modern-day French Guiana, but was forced to abandon it in the face of hostility from the Portuguese, who viewed it as a violation of the Treaty of Tordesillas. However French settlers returned in 1630 and in 1643 managed to establish a settlement at Cayenne along with some small-scale plantations.
From the sixteenth century onward there were movements of discontent against the Spanish and Portuguese colonial system. The most famous of these were those of the Maroons, slaves who escaped their masters and, in the shelter of the forest, organized free communities. Attempts by the royal army to subdue them were unsuccessful because the Maroons had learned to master the South American jungles. By a royal decree of 1713, the king gave legal recognition to the first free settlement of the continent: Palenque de San Basilio, in present-day Colombia, led by Benkos Bioho. Brazil saw the formation of a genuine African kingdom on its soil, the Quilombo of Palmares.
Between 1721 and 1735, the Revolt of the Comuneros of Paraguay arose, because of clashes between the Paraguayan settlers and the Jesuits, who ran the large and prosperous Jesuit Reductions and controlled a large number of Christianized Natives.
Between 1742 and 1756 came the insurrection of Juan Santos Atahualpa in the central jungle of Peru. In 1780, the Viceroyalty of Peru faced the insurrection of the curaca Joseph Gabriel Condorcanqui, or Tupac Amaru II, which would be continued by Tupac Katari in Upper Peru.
In 1763, the African Coffy led a revolt in Guyana which was bloodily suppressed by the Dutch.
In 1781, the Revolt of the Comuneros (New Granada), an insurrection of villagers in the Viceroyalty of New Granada, was a popular revolution that united indigenous people and mestizos. The villagers marched against the colonial power and forced the signing of capitulations, but the Viceroy Manuel Antonio Flórez did not comply with them and instead had the main leader, José Antonio Galán, executed.
In 1796, the Dutch colony of Essequibo was captured by the British during the French Revolutionary Wars.
During the eighteenth century, the priest, mathematician and botanist José Celestino Mutis (1732–1808) was delegated by the Viceroy Antonio Caballero y Gongora to conduct an inventory of the nature of New Granada. The undertaking, which became known as the Botanical Expedition, classified plants and wildlife and founded the first astronomical observatory in the city of Santa Fé de Bogotá.
On August 15, 1801, the Prussian scientist Alexander von Humboldt reached Fontibón, where Mutis had begun his expedition to New Granada and Quito. The meeting between the two scholars is considered the high point of the Botanical Expedition.
Humboldt also visited Venezuela, Mexico, United States, Chile, and Peru.
Through his observations of temperature differences in the Pacific Ocean between Chile and Peru at different times of the year, he discovered cold currents moving from south to north along the coast of Peru, later named the Humboldt Current in his honor.
Between 1806 and 1807, British military forces attempted to invade the Río de la Plata region under the command of Home Riggs Popham, William Carr Beresford, and John Whitelocke. The invasions were repelled, but they seriously weakened Spanish authority.
Independence and 19th century
The Spanish colonies won their independence in the first quarter of the 19th century, in the Spanish American wars of independence. Simón Bolívar (Greater Colombia, Peru, Bolivia), José de San Martín (United Provinces of the River Plate, Chile, and Peru), and Bernardo O'Higgins (Chile) led their independence struggle. Although Bolivar attempted to keep the Spanish-speaking parts of the continent politically unified, they rapidly became independent of one another.
Unlike the Spanish colonies, Brazilian independence came as an indirect consequence of the Napoleonic invasions of Portugal: the French invasion under General Junot led to the capture of Lisbon on 8 December 1807. In order not to lose its sovereignty, the Portuguese court moved the capital from Lisbon to Rio de Janeiro, which served as the Portuguese Empire's capital between 1808 and 1821 and which raised Brazil's standing within the empire. Following the Portuguese Liberal Revolution of 1820, and after several battles and skirmishes were fought in Pará and in Bahia, the heir apparent Pedro, son of King John VI of Portugal, proclaimed the country's independence in 1822 and became Brazil's first emperor (he later also reigned as Pedro IV of Portugal). This was one of the most peaceful colonial separations in history.
A struggle for power emerged among the new nations, and several further wars were soon fought thereafter.
The first few wars were fought for supremacy in the northern and southern parts of the continent. The Gran Colombia–Peru War in the north and the Cisplatine War (between the Empire of Brazil and the United Provinces of the River Plate) ended in stalemate, although the latter resulted in the independence of Uruguay (1828). A few years later, after the break-up of Gran Colombia in 1831, the balance of power shifted in favor of the newly formed Peru-Bolivian Confederation (1836–1839). Nonetheless, this power structure proved temporary and shifted once more as a result of the Confederation's defeat in the War of the Confederation (1836–1839) and the Argentine Confederation's defeat in the Guerra Grande (1839–1852).
Later conflicts between the South American nations continued to define their borders and relative power. On the Pacific coast, Chile and Peru exhibited growing strength, defeating Spain in the Chincha Islands War. Finally, after defeating Peru in the War of the Pacific (1879–1883), Chile emerged as the dominant power on the Pacific coast of South America. On the Atlantic side, Paraguay attempted to gain a more dominant status in the region, but an alliance of Argentina, Brazil, and Uruguay ended Paraguayan ambitions in the resulting War of the Triple Alliance (1864–1870). Thereupon, the Southern Cone nations of Argentina, Brazil, and Chile entered the 20th century as the major continental powers.
A few countries did not gain independence until the 20th century:
Panama, from Colombia, in 1903
Trinidad and Tobago, from the United Kingdom, in 1962
Guyana, from the United Kingdom, in 1966
Suriname, from the Netherlands, in 1975
French Guiana remains an overseas department of France.
20th century
1900–1920
By the start of the century, the United States continued its interventionist attitude, which aimed to directly defend its interests in the region. This was officially articulated in Theodore Roosevelt's Big Stick Doctrine, which modified the old Monroe Doctrine, which had simply aimed to deter European intervention in the hemisphere.
1930–1960
The Great Depression posed a challenge to the region. The collapse of the world economy meant that the demand for raw materials drastically declined, undermining many of the economies of South America. Intellectuals and government leaders in South America turned their backs on the older economic policies and turned toward import substitution industrialization. The goal was to create self-sufficient economies, which would have their own industrial sectors and large middle classes and which would be immune to the ups and downs of the global economy. Despite the potential threats to United States commercial interests, the Roosevelt administration (1933–1945) understood that the United States could not wholly oppose import substitution. Roosevelt implemented a good neighbor policy and allowed the nationalization of some American companies in South America. The Second World War also brought the United States and most Latin American nations together.
The history of South America during World War II is important because of the significant economic, political, and military changes that occurred throughout much of the region as a result of the war. In order to better protect the Panama Canal, combat Axis influence, and optimize the production of goods for the war effort, the United States through Lend-Lease and similar programs greatly expanded its interests in Latin America, resulting in large-scale modernization and a major economic boost for the countries that participated.
Strategically, Brazil was of great importance because it held the closest point in the Americas to Africa, where the Allies were actively fighting the Germans and Italians. For the Axis, the Southern Cone nations of Argentina and Chile were where they found most of their South American support, and they exploited it by interfering in internal affairs, conducting espionage, and distributing propaganda.
Brazil was the only country to send an expeditionary force to the European theatre; however, several countries had skirmishes with German U-boats and cruisers in the Caribbean and South Atlantic. Mexico sent a fighter squadron of 300 volunteers to the Pacific; the Escuadrón 201 was known as the Aztec Eagles (Águilas Aztecas).
Brazil's active participation on the battlefield in Europe was decided after the Casablanca Conference. On his way back from Morocco, the President of the U.S., Franklin D. Roosevelt, met the President of Brazil, Getulio Vargas, in Natal, Rio Grande do Norte. This meeting, known as the Potenji River Conference, defined the creation of the Brazilian Expeditionary Force.
Economics
According to author Thomas M. Leonard, World War II had a major impact on Latin American economies. Following the December 7, 1941 Japanese attack on Pearl Harbor, most of Latin America either severed relations with the Axis powers or declared war on them. As a result, many nations (including all of Central America, the Dominican Republic, Mexico, Chile, Peru, Argentina, and Venezuela) suddenly found that they were now dependent on the United States for trade. The United States' high demand for particular products and commodities during the war further distorted trade. For example, the United States wanted all of the platinum produced in Colombia, all the silver of Chile, and all of the cotton, gold, and copper of Peru. The parties agreed upon set prices, often with a high premium, but the various nations lost their ability to bargain and trade in the open market.
Cold War
Wars became less frequent in the 20th century, with Bolivia-Paraguay and Peru-Ecuador fighting the last inter-state wars.
Early in the 20th century, the three wealthiest South American countries engaged in a vastly expensive naval arms race which was catalyzed by the introduction of a new warship type, the "dreadnought". At one point, the Argentine government was spending a fifth of its entire yearly budget for just two dreadnoughts, a price that did not include later in-service costs, which for the Brazilian dreadnoughts was sixty percent of the initial purchase.
The continent became a battlefield of the Cold War in the late 20th century. In the postwar period, the expansion of communism became the greatest political issue for both the United States and governments in the region, and the start of the Cold War forced governments to choose between the United States and the Soviet Union. Democratically elected governments in Argentina, Brazil, Chile, Uruguay, and Paraguay were overthrown or displaced by military dictatorships in the 1960s and 1970s. To curtail opposition, these regimes detained tens of thousands of political prisoners, many of whom were tortured and/or killed through inter-state collaboration. Economically, they began a transition to neoliberal policies, framing their actions within the US Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict.
Late 20th century military regimes and revolutions
By the 1970s, leftists had acquired a significant political influence which prompted the right-wing, ecclesiastical authorities and a large portion of each individual country's upper class to support coups d'état to avoid what they perceived as a communist threat. This was further fueled by Cuban and United States intervention which led to a political polarisation. Most South American countries were in some periods ruled by military dictatorships that were supported by the United States of America.
Also around the 1970s, the regimes of the Southern Cone collaborated in Operation Condor killing many leftist dissidents, including some urban guerrillas.
However, by the early 1990s all countries had restored their democracies.
Colombia has had an ongoing, though diminished internal conflict, which started in 1964 with the creation of Marxist guerrillas (FARC-EP) and then involved several illegal armed groups of leftist-leaning ideology as well as the private armies of powerful drug lords. Many of these are now defunct, and only a small portion of the ELN remains, along with the stronger, though also greatly reduced FARC. These leftist groups smuggle narcotics out of Colombia to fund their operations, while also using kidnapping, bombings, land mines and assassinations as weapons against both elected and non-elected citizens.
Revolutionary movements and right-wing military dictatorships became common after World War II, but since the 1980s a wave of democratisation has swept through the continent, and democratic rule is now widespread. Nonetheless, allegations of corruption remain very common, and several countries have developed crises that forced the resignation of their governments, although on most occasions regular civilian succession has continued.
Peru's internal conflict of the 1980s and 1990s involved the Túpac Amaru Revolutionary Movement and the Shining Path. International indebtedness also became a recurrent problem, with examples such as the 1980s debt crisis, the mid-1990s Mexican peso crisis and Argentina's 2001 default.
Washington Consensus
The set of specific economic policy prescriptions considered the "standard" reform package was promoted for crisis-wracked developing countries by Washington, DC-based institutions such as the International Monetary Fund (IMF), the World Bank, and the US Treasury Department during the 1980s and 1990s.
21st century
A turn to the left
According to the BBC, a "common element of the 'pink tide' is a clean break with what was known at the outset of the 1990s as the 'Washington consensus', the mixture of open markets and privatisation pushed by the United States". According to Cristina Fernández de Kirchner, a pink tide president herself, Hugo Chávez of Venezuela (inaugurated 1999), Luiz Inácio Lula da Silva of Brazil (inaugurated 2003) and Evo Morales of Bolivia (inaugurated 2006) were "the three musketeers" of the left in South America. By 2005, the BBC reported that out of 350 million people in South America, three out of four of them lived in countries ruled by "left-leaning presidents" elected during the preceding six years.
Despite the presence of a number of Latin American governments which profess to embrace a leftist ideology, it is difficult to categorize Latin American states "according to dominant political tendencies, like a red-blue post-electoral map of the United States." According to the Institute for Policy Studies, a liberal non-profit think-tank based in Washington, D.C.: "a deeper analysis of elections in Ecuador, Venezuela, Nicaragua, and Mexico indicates that the "pink tide" interpretation—that a diluted trend leftward is sweeping the continent—may be insufficient to understand the complexity of what's really taking place in each country and the region as a whole".
While this political shift is difficult to quantify, its effects are widely noticed. According to the Institute for Policy Studies, 2006 meetings of the South American Summit of Nations and the Social Forum for the Integration of Peoples demonstrated that certain discussions that "used to take place on the margins of the dominant discourse of neoliberalism, (have) now moved to the centre of public debate."
Pink tide
The terms 'pink tide' and 'turn to the Left' (Sp.: vuelta hacia la izquierda, Pt.: Guinada à Esquerda) are used in contemporary 21st-century political analysis in the media and elsewhere to describe the perception that leftist ideology in general, and left-wing politics in particular, were becoming increasingly influential in Latin America.
Since the 2000s or 1990s in some countries, left-wing political parties have risen to power. Hugo Chávez in Venezuela, Luiz Inácio Lula da Silva and Dilma Rousseff in Brazil, Fernando Lugo in Paraguay, Néstor and Cristina Fernández de Kirchner in Argentina, Tabaré Vázquez and José Mujica in Uruguay, the Lagos and Bachelet governments in Chile, Evo Morales in Bolivia, and Rafael Correa of Ecuador are all part of this wave of left-wing politicians who also often declare themselves socialists, Latin Americanists or anti-imperialists.
The list of leftist South American presidents, by date of election, is the following:
1998: Hugo Chávez, Venezuela
1999: Ricardo Lagos, Chile
2002: Luiz Inácio Lula da Silva, Brazil
2002: Lucio Gutiérrez, Ecuador
2003: Néstor Kirchner, Argentina
2004: Tabaré Vázquez, Uruguay
2005: Evo Morales, Bolivia
2006: Michelle Bachelet, Chile
2006: Rafael Correa, Ecuador
2007: Cristina Fernández de Kirchner, Argentina
2008: Fernando Lugo, Paraguay
2010: José Mujica, Uruguay
2010: Dilma Rousseff, Brazil
2011: Ollanta Humala, Peru
2013: Nicolás Maduro, Venezuela
2014: Michelle Bachelet, Chile
2015: Tabaré Vázquez, Uruguay
2017: Lenín Moreno, Ecuador
2019: Alberto Fernández, Argentina
2020: Luis Arce, Bolivia
2021: Pedro Castillo, Peru
2022: Gabriel Boric Font, Chile
2022: Gustavo Petro, Colombia
2023: Luiz Inácio Lula da Silva, Brazil
Politics
During the first decade of the 21st century, South American governments moved to the political left, with leftist leaders elected in Chile, Uruguay, Brazil, Argentina, Ecuador, Bolivia, Paraguay, Peru, and Venezuela. Most South American countries made increasing use of protectionist policies, undermining greater global integration but helping local development.
In 2008, the Union of South American Nations (USAN) was founded, which aimed to merge the two existing customs unions, Mercosur and the Andean Community, thus forming the third-largest trade bloc in the world. The organization is planning for political integration in the European Union style, seeking to establish free movement of people, economic development, a common defense policy and the elimination of tariffs. According to Noam Chomsky, USAN represents that "for the first time since the European conquest, Latin America began to move towards integration".
Most recent heads of state in South America
2010: Dilma Rousseff, Brazil
2010: José Mujica, Uruguay
2010: Sebastián Piñera, Chile
2010: Juan Manuel Santos, Colombia
2011: Ollanta Humala, Peru
2013: Nicolás Maduro, Venezuela
2013: Horacio Cartés, Paraguay
2014: Michelle Bachelet, Chile
2015: Mauricio Macri, Argentina
2015: Tabaré Vázquez, Uruguay
2015: David Granger, Guyana
2016: Michel Temer, Brazil
2016: Pedro Pablo Kuczynski Godard, Peru
2017: Lenín Moreno, Ecuador
2018: Sebastián Piñera, Chile
2018: Iván Duque Márquez, Colombia
2018: Martín Vizcarra, Peru
2018: Mario Abdo, Paraguay
2019: Jair Bolsonaro, Brazil
2019: Alberto Fernández, Argentina
2020: Luis Lacalle, Uruguay
2020: Luis Arce, Bolivia
2020: Manuel Merino de Lama, Peru
2020: Chandrikapersad "Chan" Santokhi, Suriname
2020: Irfaan Ali, Guyana
2020: Francisco Sagasti, Peru
2021: Guillermo Lasso, Ecuador
2021: Pedro Castillo, Peru
2022: Gabriel Boric Font, Chile
2022: Gustavo Petro, Colombia
2022: Dina Boluarte, Peru
2023: Luiz Inácio Lula da Silva, Brazil
2023: Santiago Peña, Paraguay
2023: Daniel Noboa, Ecuador
2023: Javier Milei, Argentina
See also
Inca Empire
Gran Colombia
History of Latin America
Military history of South America
Peru–Bolivian Confederation
Simón Bolívar
José de San Martín
Francisco Pizarro
Spanish American wars of independence
Peopling of the Americas
European colonization of the Americas
Decolonization of the Americas
16th century
The 16th century began with the Julian year 1501 (represented by the Roman numerals MDI) and ended with either the Julian or the Gregorian year 1600 (MDC), depending on the reckoning used (the Gregorian calendar introduced a lapse of 10 days in October 1582).
The 16th century is regarded by historians as the century which saw the rise of Western civilization.
The Renaissance in Italy and Europe saw the emergence of important artists, authors and scientists, and led to the foundation of important subjects which include accounting and political science. Copernicus proposed the heliocentric universe, which was met with strong resistance, and Tycho Brahe refuted the theory of celestial spheres through observational measurement of the 1572 appearance of a Milky Way supernova. These events directly challenged the long-held notion of an immutable universe supported by Ptolemy and Aristotle, and led to major revolutions in astronomy and science. Galileo Galilei became a champion of the new sciences, invented the first thermometer and made substantial contributions in the fields of physics and astronomy, becoming a major figure in the Scientific Revolution in Europe.
Spain and Portugal colonized large parts of Central and South America, followed by France and England in Northern America and the Lesser Antilles. The Portuguese became the masters of trade between Brazil, the coasts of Africa, and their possessions in the Indies, whereas the Spanish came to dominate the Greater Antilles, Mexico, Peru, and opened trade across the Pacific Ocean, linking the Americas with the Indies. English and French privateers began to practice persistent theft of Spanish and Portuguese treasures. This era of colonialism established mercantilism as the leading school of economic thought, where the economic system was viewed as a zero-sum game in which any gain by one party required a loss by another. The mercantilist doctrine encouraged the many intra-European wars of the period and arguably fueled European expansion and imperialism throughout the world until the 19th century or early 20th century.
The Reformation in central and northern Europe gave a major blow to the authority of the papacy and the Catholic Church. In England, the Italian-born jurist Alberico Gentili wrote the first book on public international law and separated secular law from canon law and Catholic theology. European politics became dominated by religious conflicts, with the groundwork for the epochal Thirty Years' War being laid towards the end of the century.
In the Middle East, the Ottoman Empire continued to expand, with the sultan taking the title of caliph, while dealing with a resurgent Persia. Iran and Iraq saw the rise of the Shia branch of Islam under the rule of the Safavid dynasty of warrior-mystics, providing grounds for a Persia independent of the majority-Sunni Muslim world.
In the Indian subcontinent, following the defeat of the Delhi Sultanate and the Vijayanagara Empire, new powers emerged: the Sur Empire founded by Sher Shah Suri, the Deccan sultanates, the Rajput states, and the Mughal Empire founded by Emperor Babur, a direct descendant of Timur and Genghis Khan. His successors Humayun and Akbar enlarged the empire to include most of South Asia.
Japan suffered a severe civil war at this time, known as the Sengoku period, and emerged from it as a unified nation under Toyotomi Hideyoshi. China was ruled by the Ming dynasty, which was becoming increasingly isolationist, coming into conflict with Japan over the control of Korea as well as Japanese pirates.
In Africa, Christianity had begun to spread in Central Africa and Southern Africa. Until the Scramble for Africa in the late 19th century, most of Africa was left uncolonized.
Significant events
1501–1509
1501: Michelangelo returns to his native Florence to begin work on the statue David.
1501: Safavid dynasty reunifies Iran and rules over it until 1736. Safavids adopt a Shia branch of Islam.
1501: First Battle of Cannanore between the Third Portuguese Armada and Kingdom of Cochin under João da Nova and Zamorin of Kozhikode's navy marks the beginning of Portuguese conflicts in the Indian Ocean.
1502: First reported African slaves in the New World
1502: The Crimean Khanate sacks Sarai in the Golden Horde, ending its existence.
1503: Spain defeats France at the Battle of Cerignola. Considered to be the first battle in history won by gunpowder small arms.
1503: Leonardo da Vinci begins painting the Mona Lisa and completes it three years later.
1503: Nostradamus is born on either December 14 or December 21.
1504: A period of drought, with famine in all of Spain.
1504: Death of Isabella I of Castile; Joanna of Castile becomes the Queen.
1504: Foundation of the Sultanate of Sennar by Amara Dunqas, in what is modern Sudan
1505: Zhengde Emperor ascends the throne of Ming dynasty.
1505: Martin Luther enters St. Augustine's Monastery at Erfurt, Germany, on 17 July and begins his journey to instigating the Reformation.
1505: Sultan Trenggono builds the first Muslim kingdom in Java, called Demak, in Indonesia. Many other small kingdoms were established on other islands to fight against the Portuguese. Each kingdom introduced a local language as a means of communication and unity.
1506: Leonardo da Vinci completes the Mona Lisa.
1506: King Afonso I of Kongo wins the battle of Mbanza Kongo, resulting in Catholicism becoming Kongo's state religion.
1506: At least two thousand converted Jews are massacred in a Lisbon riot, Portugal.
1506: Christopher Columbus dies in Valladolid, Spain.
1506: Poland is invaded by Tatars from the Crimean Khanate.
1507: The first recorded epidemic of smallpox in the New World on the island of Hispaniola. It devastates the native Taíno population.
1507: Afonso de Albuquerque conquered Hormuz and Muscat, among other bases in the Persian Gulf, taking control of the region at the entrance of the Gulf.
1508: The Christian-Islamic power struggle in Europe and West Asia spills over into the Indian Ocean as the Battle of Chaul during the Portuguese-Mamluk War.
1508–1512: Michelangelo paints the Sistine Chapel ceiling.
1509: The defeat of joint fleet of the Sultan of Gujarat, the Mamlûk Burji Sultanate of Egypt, and the Zamorin of Calicut with support of the Republic of Venice and the Ottoman Empire in Battle of Diu marks the beginning of Portuguese dominance of the Spice trade and the Indian Ocean.
1509: The Portuguese king sends Diogo Lopes de Sequeira to find Malacca, the eastern terminus of Asian trade. After initially receiving Sequeira, Sultan Mahmud Shah captures and/or kills several of his men and attempts an assault on the four Portuguese ships, which escape. The Javanese fleet is also destroyed in Malacca.
1509: Krishnadevaraya ascends the throne of Vijayanagara Empire.
1510s
1509–1510: The 'great plague' in various parts of Tudor England.
1510: Afonso de Albuquerque of Portugal conquers Goa in India.
1511: Afonso de Albuquerque of Portugal conquers Malacca, the capital of the Sultanate of Malacca in present-day Malaysia.
1512: Copernicus writes Commentariolus, and proclaims the Sun the center of the Solar System.
1512: The southern part (historical core) of the Kingdom of Navarre is invaded by Castile and Aragon.
1512: Qutb Shahi dynasty, founded by Quli Qutb Mulk, rules Golconda Sultanate until 1687.
1512: The first Portuguese exploratory expedition was sent eastward from Malacca (in present-day Malaysia) to search for the 'Spice Islands' (Maluku) led by Francisco Serrão. Serrão is shipwrecked but struggles on to Hitu (northern Ambon) and wins the favour of the local rulers.
1513: Machiavelli writes The Prince, a treatise about political philosophy
1513: The Portuguese mariner Jorge Álvares lands at Macau, China, during the Ming dynasty.
1513: Henry VIII defeats the French at the Battle of the Spurs.
1513: The Battle of Flodden Field in which invading Scots are defeated by Henry VIII's forces.
1513: Sultan Selim I ("The Grim") orders the massacre of Shia Muslims in Anatolia (present-day Turkey).
1513: Vasco Núñez de Balboa, in service of Spain arrives at the Pacific Ocean (which he called Mar del Sur) across the Isthmus of Panama. He was the first European to do so.
1514: The Battle of Orsha halts Muscovy's expansion into Eastern Europe.
1514: Dózsa rebellion (peasant revolt) in Hungary.
1514: The Battle of Chaldiran, the Ottoman Empire gains decisive victory against Safavid dynasty.
1515: Ascension of Francis I of France as King of France following the death of Louis XII.
1515: The Ottoman Empire wrests Eastern Anatolia from the Safavids after the Battle of Chaldiran.
1515: The Ottomans conquer the last beyliks of Anatolia, the Dulkadirs and the Ramadanids.
1516–1517: The Ottomans defeat the Mamluks and gain control of Egypt, Arabia, and the Levant.
1517: The Sweating sickness epidemic in Tudor England.
1517: The Reformation begins when Martin Luther posts his Ninety-five Theses in Saxony.
1518: The Treaty of London was a non-aggression pact between the major European nations. The signatories were Burgundy, France, England, the Holy Roman Empire, the Netherlands, the Papal States and Spain, all of whom agreed not to attack one another and to come to the aid of any that were under attack.
1518: Mir Chakar Khan Rind leaves Baluchistan and settles in Punjab.
1518: Leo Africanus, also known as al-Hasan ibn Muhammad al-Wazzan al-Fasi, an Andalusian Berber diplomat who is best known for his book Descrittione dell’Africa (Description of Africa), is captured by Spanish pirates; he is taken to Rome and presented to Pope Leo X.
1518: The dancing plague of 1518 begins in Strasbourg, lasting for about one month.
1519: Leonardo da Vinci dies of natural causes on May 2.
1519: Wang Yangming, the Chinese philosopher and governor of Jiangxi province, describes his intent to use the firepower of the fo-lang-ji, a breech-loading Portuguese culverin, in order to suppress the rebellion of Prince Zhu Chenhao.
1519: Barbary pirates led by Hayreddin Barbarossa, a Turk appointed to ruling position in Algiers by the Ottoman Empire, raid Provence and Toulon in southern France.
1519: Charles I of Austria, Spain, and the Low Countries becomes Holy Roman Emperor as Charles V (ruled until 1556).
1519–1522: A Spanish expedition commanded by Magellan and Elcano is the first to circumnavigate the Earth.
1519–1521: Hernán Cortés leads the Spanish conquest of the Aztec Empire.
1520s
1520–1566: The reign of Suleiman the Magnificent marks the zenith of the Ottoman Empire.
1520: The first European diplomatic mission to Ethiopia, sent by the Portuguese, arrives at Massawa 9 April, and reaches the imperial encampment of Emperor Dawit II in Shewa 9 October.
1520: Vijayanagara Empire forces under Krishnadevaraya defeat the Adil Shahi of Bijapur at the Battle of Raichur.
1520: Sultan Ali Mughayat Shah of Aceh begins an expansionist campaign capturing Daya on the west Sumatran coast (in present-day Indonesia), and the pepper and gold producing lands on the east coast.
1520: The Portuguese established a trading post in the village of Lamakera on the eastern side of Solor (in present-day Indonesia) as a transit harbour between Maluku and Malacca.
1521: Belgrade (in present-day Serbia) is captured by the Ottoman Empire.
1521: After building fortifications at Tuen Mun, the Portuguese attempt to invade Ming dynasty China, but are expelled by Chinese naval forces.
1521: The Philippines are reached by Ferdinand Magellan. He is killed later that year in the Battle of Mactan in the central Philippines.
1521: Jiajing Emperor ascended the throne of Ming dynasty, China.
1521: November, Ferdinand Magellan's expedition reaches Maluku (in present-day Indonesia) and after trade with Ternate returns to Europe with a load of cloves.
1521: Pati Unus leads the invasion of Malacca (in present-day Malaysia) against the Portuguese occupation. Pati Unus was killed in this battle, and was succeeded by his brother, sultan Trenggana.
1522: Rhodes falls to the Ottomans of Suleiman the Magnificent.
1522: The Portuguese ally themselves with the rulers of Ternate (in present-day Indonesia) and begin construction of a fort.
1522: August, Luso-Sundanese Treaty signed between Portugal and Sunda Kingdom granted Portuguese permit to build fortress in Sunda Kelapa.
1523: Sweden gains independence from the Kalmar Union.
1523: The Cacao bean is introduced to Spain by Hernán Cortés
1524–1525: German Peasants' War in the Holy Roman Empire.
1524: Giovanni da Verrazzano is the first European to explore the Atlantic coast of North America between South Carolina and Newfoundland.
1524: Ismail I, the founder of Safavid dynasty, dies and Tahmasp I becomes king.
1525: Timurid Empire forces under Babur defeat the Lodi dynasty at the First Battle of Panipat, end of the Delhi Sultanate.
1525: German and Spanish forces defeat France at the Battle of Pavia, Francis I of France is captured.
1526: The Ottomans defeat the Kingdom of Hungary at the Battle of Mohács.
1526: Mughal Empire, founded by Babur.
1527: Sack of Rome: Pope Clement VII escapes, and the Swiss Guards defending the Vatican are killed. The sack of the city is considered to mark the end of the Italian Renaissance.
1527: Protestant Reformation begins in Sweden.
1527: The last ruler of Majapahit falls from power. This state (located in present-day Indonesia) was finally extinguished at the hands of the Demak. A large number of courtiers, artisans, priests, and members of the royalty moved east to the island of Bali; however, the power and the seat of government transferred to Demak under the leadership of Pangeran, later Sultan Fatah.
1527: June 22, The Javanese Prince Fatahillah of the Cirebon Sultanate successfully defeated the Portuguese armed forces at the site of the Sunda Kelapa Harbor. The city was then renamed Jayakarta, meaning "a glorious victory." This eventful day came to be acknowledged as Jakarta's Founding Anniversary.
1527: Mughal Empire forces defeat the Rajput led by Rana Sanga of Mewar at the Battle of Khanwa
1529: The Austrians defeat the Ottoman Empire at the siege of Vienna.
1529: The Treaty of Zaragoza defines the antimeridian of the Tordesillas line, attributing the Moluccas to Portugal and the Philippines to Spain.
1529: Imam Ahmad Gurey defeats the Ethiopian Emperor Dawit II in the Battle of Shimbra Kure, the opening clash of the Ethiopian–Adal War.
1530s
1531–1532: The Church of England breaks away from the Catholic Church and recognizes King Henry VIII as the head of the Church.
1531: The Inca Civil War is fought between the two brothers, Atahualpa and Huáscar.
1532: Francisco Pizarro leads the Spanish conquest of the Inca Empire.
1532: Foundation of São Vicente, the first permanent Portuguese settlement in the Americas.
1533: Anne Boleyn becomes Queen of England.
1533: Elizabeth Tudor is born.
1534: Jacques Cartier claims Canada for France.
1534: The Ottomans capture Baghdad from the Safavids.
1534: Affair of the Placards, where King Francis I becomes more active in repression of French Protestants.
1535: The Münster Rebellion, an attempt of radical, millennialist, Anabaptists to establish a theocracy, ends in bloodshed.
1535: The Portuguese in Ternate depose Sultan Tabariji (or Tabarija) and send him to Portuguese Goa where he converts to Christianity and bequeaths his Portuguese godfather Jordao de Freitas the island of Ambon. Hairun becomes the next sultan.
1536: Catherine of Aragon dies in Kimbolton Castle, in England.
1536: In England, Anne Boleyn is beheaded for adultery and treason.
1536: Establishment of the Inquisition in Portugal.
1536: Foundation of Buenos Aires (in present-day Argentina) by Pedro de Mendoza.
1537: The Portuguese establish Recife in Pernambuco, north-east of Brazil.
1537: William Tyndale's partial translation of the Bible into English is published, which would eventually be incorporated into the King James Bible.
1538: Gonzalo Jiménez de Quesada founds Bogotá.
1538: Spanish–Venetian fleet is defeated by the Ottoman Turks at the Battle of Preveza.
1539: Hernando de Soto explores inland North America.
1540s
1540: The Society of Jesus, or the Jesuits, is founded by Ignatius of Loyola and six companions with the approval of Pope Paul III.
1540: Sher Shah Suri founds the Suri dynasty in South Asia, an ethnic Pashtun (Pathan) of the house of Sur, who supplanted the Mughal dynasty as rulers of North India during the reign of the relatively ineffectual second Mughal emperor Humayun. Sher Shah Suri decisively defeats Humayun in the Battle of Bilgram (May 17, 1540).
1541: Pedro de Valdivia founds Santiago de Chile.
1541: An Algerian military campaign by Charles V of Spain (Habsburg) is unsuccessful.
1541: Amazon River is encountered and explored by Francisco de Orellana.
1541: Capture of Buda and the absorption of the major part of Hungary by the Ottoman Empire.
1541: Sahib I Giray of Crimea invades Russia.
1542: The Italian War of 1542–1546 resumes between Francis I of France and Emperor Charles V. This time Henry VIII is allied with the Emperor, while James V of Scotland and Sultan Suleiman I are allied with the French.
1542: Akbar The Great is born in the Rajput Umarkot Fort
1542: Spanish explorer Ruy López de Villalobos named the islands of Samar and Leyte Las Islas Filipinas, honoring Philip II of Spain; the name later became the official name of the archipelago.
1543: Ethiopian/Portuguese troops defeat the Adal army led by Imam Ahmad Gurey at the Battle of Wayna Daga; Imam Ahmad Gurey is killed at this battle.
1543: Copernicus publishes his theory that the Earth and the other planets revolve around the Sun
1543: The Nanban trade period begins after Portuguese traders make contact with Japan.
1544: The French defeat an Imperial–Spanish army at the Battle of Ceresole.
1544: Battle of the Shirts in Scotland. The Frasers and Macdonalds of Clan Ranald fight over a disputed chiefship; reportedly, 5 Frasers and 8 Macdonalds survive.
1545: Songhai forces sack the Malian capital of Niani
1545: The Council of Trent meets for the first time in Trent (in northern Italy).
1546: Michelangelo Buonarroti is made chief architect of St. Peter's Basilica.
1546: Francis Xavier works among the peoples of Ambon, Ternate and Morotai (Moro) laying the foundations for a permanent mission. (to 1547)
1547: Henry VIII dies in the Palace of Whitehall on 28 January at the age of 55.
1547: Francis I dies in the Château de Rambouillet on 31 March at the age of 52.
1547: Edward VI becomes King of England and Ireland on 28 January and is crowned on 20 February at the age of 9.
1547: Emperor Charles V decisively dismantles the Schmalkaldic League at the Battle of Mühlberg.
1547: Grand Prince Ivan the Terrible is crowned tsar of (All) Russia, thenceforth becoming the first Russian tsar.
1548: Battle of Uedahara: Firearms are used for the first time on the battlefield in Japan, and Takeda Shingen is defeated by Murakami Yoshikiyo.
1548: Askia Daoud, who reigned from 1548 to 1583, establishes public libraries in Timbuktu (in present-day Mali).
1548: The Ming dynasty government of China issues a decree banning all foreign trade and closes down all seaports along the coast; these Hai jin laws came during the Wokou wars with Japanese pirates.
1549: Tomé de Sousa establishes Salvador in Bahia, north-east of Brazil.
1549: Arya Penangsang with the support of his teacher, Sunan Kudus, avenges the death of Raden Kikin by sending an envoy named Rangkud to kill Sunan Prawoto by Keris Kyai Satan Kober (in present-day Indonesia).
1550s
1550: The architect Mimar Sinan builds the Süleymaniye Mosque in Istanbul.
1550: Mongols led by Altan Khan invade China and besiege Beijing.
1550–1551: Valladolid debate concerning the human rights of the Indigenous people of the Americas.
1551: Fifth outbreak of sweating sickness in England. John Caius of Shrewsbury writes the first full contemporary account of the symptoms of the disease.
1551: North African pirates enslave the entire population of the Maltese island Gozo, between 5,000 and 6,000, sending them to Libya.
1552: Russia conquers the Khanate of Kazan in central Asia.
1552: Jesuit China Mission, Francis Xavier dies.
1553: Mary Tudor becomes the first queen regnant of England and restores the Church of England under Papal authority.
1553: The Portuguese found a settlement at Macau.
1554: Missionaries José de Anchieta and Manuel da Nóbrega establishes São Paulo, southeast Brazil.
1554: Princess Elizabeth is imprisoned in the Tower of London upon the orders of Mary I for suspicion of being involved in the Wyatt rebellion.
1555: The Muscovy Company is the first major English joint stock trading company.
1556: Publication in Venice of Delle Navigiationi et Viaggi (terzo volume) by Giovanni Battista Ramusio, secretary of Council of Ten, with plan La Terra de Hochelaga, an illustration of the Hochelaga.
1556: The Shaanxi earthquake in China is history's deadliest known earthquake during the Ming dynasty.
1556: Georgius Agricola, the "Father of Mineralogy", publishes his De re metallica.
1556: Akbar defeats Hemu at the Second Battle of Panipat.
1556: Russia conquers the Astrakhan Khanate.
1556–1605: During his reign, Akbar expands the Mughal Empire in a series of conquests (in the Indian subcontinent).
1556: Mir Chakar Khan Rind captures Delhi with Humayun.
1556: Pomponio Algerio, radical theologian, is executed by boiling in oil as part of the Roman Inquisition.
1557: Habsburg Spain declares bankruptcy. Philip II of Spain had to declare four state bankruptcies in 1557, 1560, 1575 and 1596.
1557: The Portuguese settle in Macau (on the western side of the Pearl River Delta across from present-day Hong Kong).
1557: The Ottomans capture Massawa, all but isolating Ethiopia from the rest of the world.
1558: Elizabeth Tudor becomes Queen Elizabeth I at age 25.
1558–1603: The Elizabethan era is considered the height of the English Renaissance.
1558–1583: Livonian War between Poland, Grand Principality of Lithuania, Sweden, Denmark and Russia.
1558: After 200 years, the Kingdom of England loses Calais to France.
1559: With the Peace of Cateau Cambrésis, the Italian Wars conclude.
1559: Sultan Hairun of Ternate (in present-day Indonesia) protests the Portuguese's Christianisation activities in his lands. Hostilities between Ternate and the Portuguese.
1560s
1560: Ottoman navy defeats the Spanish fleet at the Battle of Djerba.
1560: Elizabeth Bathory is born in Nyirbator, Hungary.
1560: By winning the Battle of Okehazama, Oda Nobunaga becomes one of the pre-eminent warlords of Japan.
1560: Jeanne d'Albret declares Calvinism the official religion of Navarre.
1560: Lazarus Church, Macau
1561: Sir Francis Bacon is born in London.
1561: The fourth battle of Kawanakajima between the Uesugi and Takeda at Hachimanbara takes place.
1561: Guido de Bres draws up the Belgic Confession of Protestant faith.
1562: Mughal emperor Akbar reconciles the Muslim and Hindu factions by marrying into the powerful Rajput Hindu caste.
1562–1598: French Wars of Religion between Catholics and Huguenots.
1562: Massacre of Wassy and Battle of Dreux in the French Wars of Religion.
1562: Portuguese Dominican priests build a palm-trunk fortress which Javanese Muslims burned down the following year. The fort was rebuilt from more durable materials and the Dominicans commenced the Christianisation of the local population.
1563: Plague outbreak claimed 80,000 people in Elizabethan England. In London alone, over 20,000 people died of the disease.
1564: Galileo Galilei is born on February 15.
1564: William Shakespeare is baptized on 26 April.
1565: Deccan sultanates defeat the Vijayanagara Empire at the Battle of Talikota.
1565: Mir Chakar Khan Rind dies at aged 97.
1565: Estácio de Sá establishes Rio de Janeiro in Brazil.
1565: The Hospitallers, a Crusading Order, defeat the Ottoman Empire at the siege of Malta (1565).
1565: Miguel López de Legazpi establishes in Cebu the first Spanish settlement in the Philippines starting a period of Spanish colonization that would last over three hundred years.
1565: Spanish navigator Andres de Urdaneta discovers the maritime route from Asia to the Americas across the Pacific Ocean, also known as the tornaviaje.
1565: Royal Exchange is founded by Thomas Gresham.
1566: Suleiman the Magnificent, ruler of the Ottoman Empire, dies on September 7 during the Siege of Szigetvár.
1566–1648: Eighty Years' War between Spain and the Netherlands.
1566: Da le Balle Contrade d'Oriente, composed by Cipriano de Rore.
1567: After a 45-year reign, the Jiajing Emperor dies in the Forbidden City; the Longqing Emperor ascends the throne of the Ming dynasty.
1567: Mary, Queen of Scots, is forced to abdicate and is imprisoned in Scotland; she flees to England the following year, where she is held captive by Elizabeth I.
1568: The Transylvanian Diet, under the patronage of Prince John Sigismund Zápolya, the former king of Hungary, and inspired by the teachings of Ferenc Dávid, founder of the Unitarian Church of Transylvania, promulgates the Edict of Torda, the first law of freedom of religion and of conscience in the world.
1568–1571: Morisco Revolt in Spain.
1568–1600: The Azuchi-Momoyama period in Japan.
1568: Hadiwijaya sends his adopted son and son-in-law Sutawijaya, who would later become the first ruler of the Mataram dynasty of Indonesia, to kill Arya Penangsang.
1569: Rising of the North in England.
1569: Mercator 1569 world map published by Gerardus Mercator.
1569: The Polish–Lithuanian Commonwealth is created with the Union of Lublin which lasts until 1795.
1569: Peace treaty signed by Sultan Hairun of Ternate and Governor Lopez De Mesquita of Portugal.
1570s
1570: Ivan the Terrible, tsar of Russia, orders the massacre of inhabitants of Novgorod.
1570: Pope Pius V issues Regnans in Excelsis, a papal bull excommunicating all who obeyed Elizabeth I and calling on all Catholics to rebel against her.
1570: Sultan Hairun of Ternate (in present-day Indonesia) is killed by the Portuguese. Babullah becomes the next Sultan.
1570: 20,000 inhabitants of Nicosia in Cyprus are massacred, and every church, public building, and palace is looted. Cyprus falls to the Ottoman Turks the following year.
1571: Pope Pius V completes the Holy League as a united front against the Ottoman Turks, responding to the fall of Cyprus to the Ottomans.
1571: The Spanish-led Holy League navy destroys the Ottoman Empire navy at the Battle of Lepanto.
1571: Crimean Tatars attack and sack Moscow, burning everything but the Kremlin.
1571: American Indians kill Spanish missionaries in what would later be Jamestown, Virginia.
1571: Spanish conquistador Miguel López de Legazpi establishes Manila, Philippines as the capital of the Spanish East Indies.
1572: Brielle is taken from Habsburg Spain by Protestant Watergeuzen in the Capture of Brielle, in the Eighty Years' War.
1572: Spanish conquistadores apprehend the last Inca leader Tupak Amaru at Vilcabamba, Peru, and execute him in Cuzco.
1572: Jeanne d'Albret dies aged 43 and is succeeded by Henry of Navarre.
1572: Catherine de' Medici instigates the St. Bartholomew's Day massacre which takes the lives of Protestant leader Gaspard de Coligny and thousands of Huguenots. The violence spreads from Paris to other cities and the countryside.
1572: First edition of the epic The Lusiads of Luís Vaz de Camões, three years after the author returned from the East.
1572: The nine-year-old crown prince Zhu Yijun ascends the throne of the Ming dynasty as the Wanli Emperor.
1573: After heavy losses on both sides the siege of Haarlem ends in a Spanish victory.
1574: In the Eighty Years' War, the capital of Zeeland, Middelburg, declares for the Protestants.
1574: After four months, the siege of Leiden ends in a comprehensive Dutch rebel victory.
1575: Oda Nobunaga finally captures Nagashima fortress.
1575: Following a five-year war, the Ternateans under Sultan Babullah defeat the Portuguese.
1576: Tahmasp I, Safavid shah, dies.
1576: The Battle of Haldighati is fought between the ruler of Mewar, Maharana Pratap and the Mughal Empire's forces under Emperor Akbar led by Raja Man Singh.
1576: Sack of Antwerp by badly paid Spanish soldiers.
1577–1580: Francis Drake circles the world.
1577: Ki Ageng Pemanahan builds his palace in Pasargede or Kotagede.
1578: King Sebastian of Portugal is killed at the Battle of Alcazarquivir.
1578: The Portuguese establish a fort on Tidore but the main centre for Portuguese activities in Maluku becomes Ambon.
1578: Sonam Gyatso is conferred the title of Dalai Lama by Tumed Mongol ruler, Altan Khan. Recognised as the reincarnation of two previous Lamas, Sonam Gyatso becomes the third Dalai Lama in the lineage.
1578: Governor-General Francisco de Sande officially declares war on Brunei, starting the Castilian War.
1579: The Union of Utrecht unifies the northern Netherlands, a foundation for the later Dutch Republic.
1579: The Union of Arras unifies the southern Netherlands, a foundation for the later states of the Spanish Netherlands, the Austrian Netherlands and Belgium.
1579: The British navigator Sir Francis Drake passes through Maluku and transits in Ternate on his circumnavigation of the world.
1580s
1580: Drake's royal reception after his attacks on Spanish possessions influences Philip II of Spain to build up the Spanish Armada. English ships in Spanish harbours are impounded.
1580: Spain unifies with Portugal under Philip II. The struggle for the throne of Portugal ends with the union of the two crowns, which remain united for 60 years, i.e. until 1640.
1580–1587: Nagasaki comes under control of the Jesuits.
1581: The Dutch States General pass the Act of Abjuration, abjuring allegiance to Philip II of Spain.
1581: Bayinnaung dies at the age of 65.
1582: Oda Nobunaga commits seppuku during the Honnō-ji Incident coup by his general, Akechi Mitsuhide.
1582: Pope Gregory XIII issues the Gregorian calendar. The last day of the Julian calendar is Thursday, 4 October 1582; it is followed by the first day of the Gregorian calendar, Friday, 15 October 1582.
1582: Yermak Timofeyevich conquers the Siberia Khanate on behalf of the Stroganovs.
1583: Bakken, regarded as the world's oldest surviving amusement park, opens in Denmark.
1583: Death of Sultan Babullah of Ternate.
1584–1585: After the siege of Antwerp, many of its merchants flee to Amsterdam. According to Luc-Normand Tellier, "At its peak, between 1510 and 1557, Antwerp concentrated about 40% of the world trade...It is estimated that the port of Antwerp was earning the Spanish crown seven times more revenues than the Americas."
1584: Ki Ageng Pemanahan dies. The Sultan of Pajang raises Sutawijaya, son of Ki Ageng Pemanahan, as the new ruler in Mataram, titled "Ngabehi Loring Pasar" (after his residence north of the market).
1585: Akbar annexes Kashmir and adds it to the Kabul Subah.
1585: Colony at Roanoke founded in North America.
1585–1604: The Anglo-Spanish War is fought on both sides of the Atlantic.
1587: Mary, Queen of Scots is executed by Elizabeth I.
1587: The reign of Abbas I, which begins this year, marks the zenith of the Safavid dynasty.
1587: Pajang troops marching to invade Mataram are ravaged by an eruption of Mount Merapi; Sutawijaya and his men survive.
1588: Mataram becomes a kingdom, with Sutawijaya as Sultan, titled "Senapati Ingalaga Sayidin Panatagama", roughly meaning the commander in war and regulator of religious life.
1588: England repulses the Spanish Armada.
1589: Spain repulses the English Armada.
1589: Catherine de' Medici dies at age 69.
1590–1600
1590: Siege of Odawara: the Go-Hojo clan surrender to Toyotomi Hideyoshi, and Japan is unified.
1591: Gazi Giray leads a huge Tatar expedition against Moscow.
1591: In Mali, Moroccan forces of the Sultan Ahmad al-Mansur led by Judar Pasha defeat the Songhai Empire at the Battle of Tondibi.
1592–1593: John Stow reports 10,675 plague deaths in London, a city of approximately 200,000 people.
1592–1598: Korea, with the help of Ming dynasty China, repels two Japanese invasions.
1593–1606: The Long War between the Habsburg monarchy and the Ottoman Turks.
1594: St. Paul's College, Macau, founded by Alessandro Valignano.
1595: First Dutch expedition to Indonesia sets sail for the East Indies with two hundred and forty-nine men and sixty-four cannons led by Cornelis de Houtman.
1596: Birth of René Descartes.
1596: In June, de Houtman's expedition reaches Banten, the main pepper port of West Java, where it clashes with both the Portuguese and Indonesians. It then sails east along the north coast of Java, losing twelve crew to a Javanese attack at Sidayu and killing a local ruler in Madura.
1597: Romeo and Juliet is published.
1597: Cornelis de Houtman's expedition returns to the Netherlands with enough spices to make a considerable profit.
1598: The Edict of Nantes ends the French Wars of Religion.
1598: Abbas I moves the Safavid capital from Qazvin to Isfahan.
1598–1613: Russia descends into anarchy during the Time of Troubles.
1598: The Portuguese require an armada of 90 ships to put down a Solorese uprising. (to 1599)
1598: More Dutch fleets leave for Indonesia and most are profitable.
1598: The province of Santa Fe de Nuevo México is established in Northern New Spain. The region would later become a territory of Mexico, the New Mexico Territory in the United States, and the US State of New Mexico.
1598: Death of Toyotomi Hideyoshi, known as the unifier of Japan.
1599: The Mali Empire is defeated at the Battle of Jenné.
1599: The van Neck expedition returns to Europe. The expedition makes a 400 per cent profit. (to 1600)
1599: In March, having left Europe the previous year, a fleet of eight ships under Jacob van Neck becomes the first Dutch fleet to reach the 'Spice Islands' of Maluku.
1600: Giordano Bruno is burned at the stake for heresy in Rome.
1600: Battle of Sekigahara in Japan. End of the Warring States period and beginning of the Edo period.
1600: The Portuguese win a major naval battle in the bay of Ambon. Later in the year, the Dutch join forces with the local Hituese in an anti-Portuguese alliance, in return for which the Dutch would have the sole right to purchase spices from Hitu.
1600: Elizabeth I grants a charter to the British East India Company beginning the English advance in Asia.
1600: Michael the Brave unifies the three principalities of Wallachia, Moldavia and Transylvania after the Battle of Șelimbăr in 1599.
Undated
Polybius' The Histories is translated into Italian, English, German and French.
Mississippian culture disappears.
Medallion rug, variant Star Ushak style, Anatolia (modern Turkey), is made. It is now kept at the Saint Louis Art Museum.
Inventions, discoveries, introductions
Related article: List of 16th century inventions.
The Columbian Exchange introduces many plants, animals and diseases to the Old and New Worlds.
Introduction of the spinning wheel revolutionizes textile production in Europe.
The letter J is introduced into the English alphabet.
1500: The first portable watch is created by Peter Henlein of Germany.
1513: Juan Ponce de León sights Florida and Vasco Núñez de Balboa sights the eastern edge of the Pacific Ocean.
1519–1522: Ferdinand Magellan and Juan Sebastián Elcano lead the first circumnavigation of the world.
1519–1540: In America, Hernando de Soto expeditions map the Gulf of Mexico coastline and bays.
1525: The modern square root symbol (√) is first used in print.
1540: Francisco Vásquez de Coronado sights the Grand Canyon.
1541–42: Francisco de Orellana sails the length of the Amazon River.
1542–43: Firearms are introduced into Japan by the Portuguese.
1543: Copernicus publishes his theory that the Earth and the other planets revolve around the Sun.
1545: The theory of complex numbers is first developed by Gerolamo Cardano of Italy.
1558: The camera obscura is popularized in Europe by Giambattista della Porta of Italy.
1559–1562: Spanish settlements in Alabama/Florida and Georgia confirm dangers of hurricanes and local native warring tribes.
1565: Spanish settlers outside New Spain (Mexico) colonize Florida's coastline at St. Augustine.
1565: Invention of the graphite pencil (in a wooden holder) by Conrad Gesner. Modernized in 1812.
1568: Gerardus Mercator creates the first Mercator projection map.
1572: Supernova SN 1572 is observed by Tycho Brahe in the Milky Way.
1582: Gregorian calendar is introduced in Europe by Pope Gregory XIII and adopted by Catholic countries.
c. 1583: Galileo Galilei of Pisa, Italy identifies the constant swing of a pendulum, leading to development of reliable timekeepers.
1585: Earliest known reference to the 'sailing carriage' in China.
1589: William Lee invents the stocking frame.
1591: The first flush toilet is introduced by Sir John Harington of England; the design is published under the title 'The Metamorphosis of Ajax'.
1593: Galileo Galilei invents a thermometer.
1596: William Barents discovers Spitsbergen.
1597: Jacopo Peri's Dafne, often regarded as the first opera, is performed in Florence.
See also
Entertainment in the 16th century
References
Further reading
Langer, William. An Encyclopedia of World History (5th ed., 1973); a highly detailed outline of events.
External links
Timelines of 16th century events, science, culture and persons
Centuries
Early modern period
2nd millennium
| 0.760226 | 0.997103 | 0.758024 |
Caveman
|
The caveman is a stock character representative of primitive humans in the Paleolithic. The popularization of the type dates to the early 20th century, when Neanderthals were influentially described as "simian" or "ape-like" by Marcellin Boule and Arthur Keith.
The term "caveman" has its taxonomic equivalent in the now-obsolete binomial classification of Homo troglodytes (Linnaeus, 1758).
Characteristics
Cavemen are typically portrayed as wearing shaggy animal hides, and capable of cave painting like behaviorally modern humans of the last glacial period. They are often shown armed with rocks, cattle bone clubs, spears, or sticks with rocks tied to them, and are portrayed as unintelligent, easily frightened, and aggressive. Typically, they have a low pitched rough voice and make vocalisations such as "ooga-booga" and grunts or speak using simple phrases. Popular culture also frequently represents cavemen as living with, or alongside of, dinosaurs, even though non-avian dinosaurs became extinct at the end of the Cretaceous period, 66 million years before the emergence of Homo sapiens. The era typically associated with the archetype is the Paleolithic Era, sometimes referred to as the Stone Age, though the Paleolithic is but one part of the Stone Age. This era extends from more than 2 million years into the past until between 40,000 and 5,000 years before the present (i.e., from around 2,000 kya to between 40 and 5 kya).
The image of these people living in caves arises from the fact that caves are where the preponderance of artifacts have been found from European Stone Age cultures. However, this most likely reflects the degree of preservation that caves provide over the millennia, rather than an indication of them being a typical form of shelter. Until the last glacial period, the great majority of humans did not live in caves, as nomadic hunter-gatherer tribes lived in a variety of temporary structures, such as tents and wooden huts (e.g., at Ohalo). A few genuine cave dwellings did exist, however, such as at Mount Carmel in Israel.
Stereotypical cavemen have traditionally been depicted wearing smock-like garments made from the skins of animals and held up by a shoulder strap on one side, or loincloths made from leopard or tiger skins. Stereotypical cavewomen are similarly depicted, but sometimes with slimmer proportions and bones tied up in their hair. They are also depicted carrying large clubs approximately conical in shape. They often have grunt-like names, such as "Ugg" and "Zog".
History
Caveman-like heraldic "wild men" were found in European and African iconography for hundreds of years. During the Middle Ages, these beings were generally depicted in art and literature as bearded and covered in hair, and often wielding clubs and dwelling in caves. While wild men were always depicted as living outside of civilization, it was not always clear whether they were human or non-human.
In Sir Arthur Conan Doyle's The Lost World (1912), ape-men are depicted in a fight with modern humans. How the First Letter Was Written and How the Alphabet was Made are two of Rudyard Kipling's Just So Stories (1902) featuring a group of cave-people. Edgar Rice Burroughs adapted this idea for The Land That Time Forgot (1918). A genre of caveman films emerged, typified by D. W. Griffith's Man's Genesis (1912); they inspired Charles Chaplin's satiric take in His Prehistoric Past (1914), as well as Brute Force (1914), The Cave Man (1912), and later, Cave Man (1934). From the descriptions, Griffith's characters cannot talk, and use sticks and stones for weapons, while the hero of Cave Man is a Tarzanesque figure who fights dinosaurs. Captain Caveman and the Teen Angels (1977–1980) is an animated comedy depicting cavemen as being hairy and carrying clubs.
Griffith's Brute Force represents one of the earliest portrayals of cavemen and dinosaurs together, with its depiction of a Ceratosaurus. The film reinforced the incorrect notion that non-avian dinosaurs co-existed with prehistoric humans. The anachronistic combination of cavemen with dinosaurs eventually became a cliché, and has often been intentionally invoked for comedic effect. The comic strips B.C., Alley Oop, the Spanish comic franchise Mortadelo y Filemón, and occasionally The Far Side and Gogs portray "cavemen" with dinosaurs. Gary Larson, in his 1989 book The Prehistory of the Far Side, stated he once felt that he needed to confess his cartooning sins in this regard: "O Father, I Have Portrayed Primitive Man and Dinosaurs In The Same Cartoon". The animated series The Flintstones, a spoof on family sitcoms, portrays the Flintstones even using dinosaurs, pterosaurs and prehistoric mammals as tools, household appliances, vehicles, and construction equipment.
See also
Cave dweller
Cavewoman, a 1993-2009 American comic book series
Cavegirl, a 2002-03 British TV series
Walking with Cavemen, 2003 documentary miniseries
Dawn of Humanity, 2015 PBS film
Man cave
Neanderthals in popular culture
Prehistoric fiction
Troglodyte (disambiguation)
Wild man
References
External links
Apes as Human at The Encyclopedia of Science Fiction
Cavemen at Comic Vine
Origin of Man at The Encyclopedia of Science Fiction
Adventure fiction
Fantasy tropes
Prehistoric people in popular culture
Stock characters
Wild men
| 0.760006 | 0.997367 | 0.758004 |
Deglobalization
|
Deglobalization or deglobalisation is the process of diminishing interdependence and integration between certain units around the world, typically nation-states. It is widely used to describe the periods of history when economic trade and investment between countries decline. It stands in contrast to globalization, in which units become increasingly integrated over time, and generally spans the time between periods of globalization. While globalization and deglobalization are antitheses, they are not mirror images.
The term deglobalization derives from a profound change observed in many developed nations: until the 1970s, trade as a proportion of total economic activity remained below the peak levels reached in the early 1910s, reflecting economies that had become less integrated with the rest of the world despite the otherwise deepening scope of economic globalization. At the global level, only two longer periods of deglobalization have occurred: the 1930s, during the Great Depression, and the 2010s, when, following the Great Trade Collapse, the period of the World Trade Slowdown set in.
The occurrence of deglobalization has strong proponents, some of whom have claimed the death of globalization, but it is also contested by the former Director-General of the World Trade Organization Pascal Lamy and by leading academics such as Michael Bordo, who argues that it is too soon to give a good diagnosis, and Mervyn Martin, who argues that US and UK policies are rational answers to essentially temporary problems of even strong nations.
While, as with globalization, deglobalization can refer to economic, trade, social, technological, cultural and political dimensions, much of the work conducted in the study of deglobalization concerns the field of international economics.
1930s versus 2010s
Periods of deglobalization have mainly been seen as interesting comparators to periods in which globalization was the norm, such as 1850–1914 and 1950–2007, because globalization is the norm for most people and the global economy has mainly been interpreted as moving towards inevitably increasing integration. As a result, even periods of stagnant international interaction are often wrongly seen as periods of deglobalization. Recently, researchers have also started to compare the major periods of deglobalization with each other in order to better understand the drivers and consequences of this phenomenon.
The two major phases of deglobalization are not identical twins, but both were triggered by a demand shock in the wake of a financial crisis. Both in the 1930s and in the 2000s, the composition of trade was a second key determinant: manufacturing trade bore the brunt of the contraction. One important finding is that country experiences during both the Great Depression and the Great Recession were very heterogeneous, so that one-size-fits-all policies to counter the negative impacts of deglobalization are inappropriate. In the 1930s, democracies supported free trade, and deglobalization was driven by autocratic decisions to strengthen self-sufficiency. In the 2010s, political institutions are just as significant, but now democratic decisions, such as the election of President Trump on an America First agenda and Brexit, drive the deglobalization process worldwide. Indeed, while the industrialised countries in the 2010s avoided the pitfalls of protectionism and deflation, they have experienced different political dynamics.
Measures of deglobalization
As with globalization, economic deglobalization can be measured in different ways. These centre around the four main economic flows:
Goods and services, e.g. exports plus imports as a proportion of national income or per head of population (a minimal calculation sketch of this measure appears below).
Labour/people, e.g. net migration rates; inward or outward migration flows, weighted by population (and resultant remittances in per cent of GDP)
Capital, e.g. inward or outward direct investment as a proportion of national income or per head of population
It is generally not thought possible to measure deglobalization through lack of flows of technology, the fourth main flow.
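The trade-based measure above amounts to simple arithmetic. The following Python sketch, not part of the source article and using purely hypothetical figures (all numbers and names are illustrative assumptions, not data), computes trade openness as exports plus imports over GDP and flags years in which it declines, one rough signal of a deglobalizing economy.

```python
from typing import Dict, List

def trade_openness(exports: float, imports: float, gdp: float) -> float:
    """Exports plus imports as a proportion of national income (GDP)."""
    return (exports + imports) / gdp

def declining_years(openness_by_year: Dict[int, float]) -> List[int]:
    """Years in which trade openness fell relative to the previous year."""
    years = sorted(openness_by_year)
    return [y for prev, y in zip(years, years[1:])
            if openness_by_year[y] < openness_by_year[prev]]

# Purely illustrative placeholder figures (billions, same currency), not real data.
openness = {
    2007: trade_openness(exports=550, imports=600, gdp=2000),
    2008: trade_openness(exports=600, imports=640, gdp=2050),
    2009: trade_openness(exports=480, imports=520, gdp=1980),  # hypothetical trade collapse
    2010: trade_openness(exports=530, imports=570, gdp=2060),
}

print({year: round(value, 3) for year, value in openness.items()})
print("Years of declining openness:", declining_years(openness))
```

In practice, researchers would compute this from official national-accounts data and would typically combine it with the other measures listed in this section, since a fall in one indicator alone does not establish deglobalization.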
Those areas that are measurable do suggest other possible measures, including:
Average tariffs
Border restrictions on labour
Capital controls, including restrictions on foreign direct investment or outward direct investment
The multi-dimensional globalization index of the KOF Swiss Economic Institute shows a clear break for economic globalization in 2009 and in 2015; KOF observed for its overall index: "The level of globalisation worldwide increased rapidly between 1990 and 2007 and has risen only slightly since the Great Recession. In 2015, globalisation decreased for the first time since 1975. The fall was due to the decline in economic globalisation, with social globalisation stagnating and political globalisation increasing slightly."
Other indicators of deglobalization include the development of foreign direct investment, which according to UNCTAD slipped further in 2017, in stark contrast with production.
Risks of deglobalization
Typically, a reduction in the level of international integration of economies, and of the world economy at large, is expected to exert second-round effects related to four feedback mechanisms:
A reduction of (the rate of growth of) international trade will feed negatively into long-run growth.
A loss of interaction and of the co-movement of economies.
Trade policy feedbacks, in the sense that reduced international interaction and lower growth will stimulate protectionism.
Spillovers into non-economic issue areas, where reduced cooperation among countries and even an increasing risk of international conflict can be expected.
International political economy of deglobalization
Deglobalization has also been used as a political agenda item or a term in framing the debate on a new World economic order, for example by Walden Bello in his 2005 book Deglobalization.
One of the prominent examples of the deglobalization movement can be found in the United States, where the Bush and Obama administrations instituted a Buy American clause as part of a massive stimulus package designed to favor American-made goods over traded goods. Likewise, the EU has imposed new subsidies to protect its agricultural sector. These measures can be seen as examples of how developed nations reacted to the financial crisis of 2007–08 through deglobalization.
Recently a change in the pattern of anti-globalism has been observed: anti-globalism now has a strong foothold in the Global North and among right-wing (conservative) politicians, with markedly different attitudes in the Global South, particularly among the BRICS countries.
See also
I'm Backing Britain
Trade-to-GDP ratio
References
Cultural geography
Economic geography
Globalization
World government
| 0.770351 | 0.983948 | 0.757985 |
Religions by country
|
This is an overview of religion by country or territory in 2010, according to a 2012 Pew Research Center report. The article Religious information by country gives information from The World Factbook of the CIA and the U.S. Department of State.
World
Africa
Americas
Asia
Europe
Oceania
See also
Religion
Faith
Theocracy
Buddhism by country
Christianity by country (Catholic Church by country, Protestantism by country, Eastern Orthodoxy by country and Oriental Orthodoxy by country)
Hinduism by country
Islam by country
Judaism by country or Jewish population by country
List of religious populations
Importance of religion by country
List of countries by irreligion
Sikhism by country
Religion and geography
Religious information by country
State religion
Notes
References
Adherents.com World Religions Religion Statistics Geography Church Statistics
BBC News's Muslims in Europe: Country guide
CIA FactBook
Religious Intelligence
The University of Virginia
The US State Department's International Religious Freedom Report 2007
The US State Department's Background Notes
Vipassana Foundation's Buddhists around the world
World Statesmen
Catholic Hierarchy's Its Bishops and Dioceses, Current and Past
External links
Geographical Distribution of Major World Religions (showing regional variations inside the same country)
Religion-related lists
| 0.760843 | 0.996199 | 0.757951 |
Anglo-Saxonism in the 19th century
|
Anglo-Saxonism is a cultural belief system developed by British and American intellectuals, politicians, and academics in the 19th century. Racialized Anglo-Saxonism contained both competing and intersecting doctrines, such as Victorian era Old Northernism and the Teutonic germ theory which it relied upon in appropriating Germanic (particularly Norse) cultural and racial origins for the Anglo-Saxon "race".
It was predominantly a product of certain Anglo-American societies and organisations of the era.
In 2017, Mary Dockray-Miller, an American scholar of Anglo-Saxon England, noted an increasing scholarly interest in the study of 19th-century Anglo-Saxonism. Anglo-Saxonism is regarded as a predecessor ideology to the later Nordicism of the 20th century, which was generally less anti-Celtic and broadly sought to racially reconcile Celtic identity with Germanic under the label of Nordic.
Background
In terminology, Anglo-Saxonism is by far the most commonly used phrase to describe the historical ideology of rooting a Germanic racial identity, whether Anglo-Saxon, Norse, or Teutonic, into the concept of the English, Scottish or British nation, and subsequently founded-nations such as the United States, Canada, Australia, and New Zealand.
In both historical and contemporary literature however, Anglo-Saxonism has many derivations, such as the commonly used phrase Teutonism or Anglo-Teutonism, which can be used as form of catch-all to describe American or British Teutonism and further extractions such as English or Scottish Teutonism. It is also occasionally encompassed by the longer phrase Anglo-Saxon Teutonism, or shorter labels Anglism or Saxonism, along with the most frequently used term of Anglo-Saxonism itself.
American medievalist Allen Frantzen credits historian L. Perry Curtis's use of Anglo-Saxonism as a term for "an unquestioned belief in Anglo-Saxon 'genius'" during this period of history. Curtis has pointed toward a radical change from adulation of Anglo-Saxon institutions in the 16th and 17th centuries towards something more racial and imperialist. Historian Barbara Yorke, who specializes in the subject, has similarly argued that the earlier self-governance oriented Anglo-Saxonism of Thomas Jefferson's era had by the mid-19th century developed into "a belief in racial superiority".
According to Australian scholar Helen Young, the ideology of Anglo-Saxonism was "profoundly racist" and influenced authors such as J. R. R. Tolkien and his fictional works into the 20th century. Similarly, Marxist writer Peter Fryer claimed that "Anglo-Saxonism was a form of racism that originally arose to justify the British conquest and occupation of Ireland". Some scholars believe the Anglo-Saxonism championed by historians and politicians of the Victorian era influenced and helped to spawn the Greater Britain Movement of the mid-20th-century. In 2019, the International Society of Anglo-Saxonists decided to change its name due to the potential confusion of their organization's name with racist Anglo-Saxonism.
At the passing of the Anglo-Saxonism era, progressive intellectual Randolph Bourne's essay Trans-National America reacted positively to integration ("We have needed the new peoples"), and while mocking the "indistinguishable dough of Anglo-Saxonism" in the context of very early 20th-century migration to the United States, Bourne manages to express an anxiety at the American melting pot theory.
Origins
Early references
In 1647, English MP John Hare, who served during the Long Parliament, issued a pamphlet declaring England as a "member of the Teutonick nation, and descended out of Germany". In the context of the English Civil War, this anti-Norman and pro-Germanic paradigm has been identified as perhaps the earliest iteration of "English Teutonism" by Professor Nick Groom, who has suggested the 1714 Hanoverian succession, where the German House of Hanover ascended the throne of Great Britain, is the culmination of this Anglo-Saxonist ideology.
Teutonic germ theory
Many historians and political scientists in Britain and the United States supported the Teutonic germ theory in the 19th century. The theory supposed that American and British democracy and institutions had their roots in Teutonic peoples, and that Germanic tribes had spread this "germ" within their race from ancient Germany to England and on to North America. Advocates in Britain included John Mitchell Kemble, William Stubbs, and Edward Augustus Freeman. Within the U.S., future president Woodrow Wilson, along with Albert Bushnell Hart and Herbert Baxter Adams, applied historical and social science in advocacy for Anglo-Saxonism through the theory. In the 1890s, under the influence of Frederick Jackson Turner, Wilson abandoned the Teutonic germ theory in favor of a frontier model for the sources of American democracy.
Ancestry and racial identity
Germanic and Teutonic
Anglo-Saxonism of the era sought to emphasize Britain's cultural and racial ties with Germany, frequently referring to Teutonic peoples as a source of strength and similarity. Contemporary historian Robert Boyce notes that many British politicians of the 19th century promoted these Germanic links, such as Henry Bulwer, 1st Baron Dalling and Bulwer who said that it was "in the free forests of Germany that the infant genius of our liberty was nursed", and Thomas Arnold who claimed that "Our English race is the German race; for though our Norman fathers had learned to speak a stranger's language, yet in blood, as we know, they were the Saxon's brethren both alike belonging to the Teutonic or German stock".
Norman and Celtic
Anglo-Saxonists in the 19th century often sought to downplay, or outright denigrate, the significance of both Norman and Celtic racial and cultural influence in Britain. Less frequently however, some form of solidarity was expressed by some Anglo-Saxonists, who conveyed that Anglo-Saxonism was simply "the best-known term to denote that mix of Celtic, Saxon, Norse, and Norman blood which now flows in the united stream in the veins of the Anglo-Saxon peoples". Although a staunch Anglo-Saxonist, Thomas Carlyle had even disparagingly described the United States as a kind of "formless" Saxon tribal order, and claimed that Normans had given Anglo-Saxons and their descendants a greater sense of order for national structure, and that this was particularly evident in England.
Northern European
Edward Augustus Freeman, a leading Anglo-Saxonist of the era, promoted a larger Northern European identity, favorably comparing civilizational roots from "German forest" or "Scandinavian rock" with the cultural legacy of ancient Greece and Rome. American scholar Mary Dockray-Miller expands on this concept to suggest that pre-World War I Anglo-Saxonism ideology helped establish the "primacy of northern European ancestry in United States culture at large".
Lowland Scottish
During the 19th century in particular, Scottish people living in Lowland Scotland, near the Anglo-Scottish border, "increasingly identified themselves with the Teutonic world destiny of Anglo-Saxonism", and sought to separate their identity from that of Highland Scots, or the "inhabitants of Romantic Scotland". With some considering themselves "Anglo-Saxon Lowlanders", public opinion of Lowland Scots turned on Gaels within the context of the Highland Famine, with suggestions of deportations to British colonies for Highlanders of the "inferior Celtic race". Amongst others, Goldwin Smith, a devout Anglo-Saxonist, believed the Anglo-Saxon "race" included Lowland Scots and should not be exclusively defined by English ancestry within the context of the United Kingdom's greater empire.
Thomas Carlyle, himself a Scot, was one of the earliest notable people to express a "belief in Anglo-Saxon racial superiority". Historian Richard J. Finlay has suggested that the Scots National League, which campaigned for Scotland to separate from the United Kingdom, was a response or opposition to the history of "Anglo-Saxon teutonism" embedded in some Scottish culture.
Mythology and religions
Anglo-Saxonism was largely aligned with Protestantism, generally perceiving Catholics as outsiders, and was orientated as an ideology in opposition to other "races", such as the "Celts" of Ireland and "Latins" of Spain.
Charles Kingsley, Regius Professor of Modern History at the University of Cambridge, was particularly focused on there being a "strong Norse element in Teutonism and Anglo-Saxonism". He blended Protestantism of the day with the Old Norse religion, saying that the Church of England was "wonderfully and mysteriously fitted for the souls of a free Norse-Saxon race". He believed the ancestors of Anglo-Saxons, Norse people and Germanic peoples had physically fought beside the god Odin, and that the British monarchy of his time was genetically descended from him.
Political aims
Expansion
Embedded in 19th-century American Anglo-Saxonism was a growing sense that the "Anglo-Saxon" race had to expand into surrounding territories. This particularly expressed itself in the ideology of manifest destiny, which claimed the U.S. had a right to expand across North America.
Shared citizenship
A persistent "Anglo-Saxonist" idea, Albert Venn Dicey believed in the creation of a shared citizenship between Britons and Americans, and the concept of cooperation, even federation, of those from the "Anglo-Saxon" race.
See also
Albion's Seed
Anglosphere
British Israelism
Englishry
Our Island Story
White Anglo-Saxon Protestants
White Dominions
References
Further reading
Anderson, Stuart. Race and Rapprochement: Anglo-Saxonism and Anglo-American Relations, 1895–1904 (London: Associated University Presses, 1981).
Healy, David. U.S. Expansionism: The Imperialist Urge in the 1890s (University of Wisconsin Press, 1970).
Horsman, Reginald. Race and Manifest Destiny: The Origins of American Racial Anglo-Saxonism (Harvard University Press, 1981).
Kramer, Paul. "Empires, Exceptions, and Anglo-Saxons: Race and Rule between the British and United States Empires, 1880–1910." Journal of American History 88#4 (2002): 1315–1353.
Ninkovich, Frank. The United States and Imperialism (Oxford: Blackwell, 2001).
Primary sources
Strong, Josiah. Our Country: Its Possible Future and Its Present Crisis (New York, 1891).
Nordicism
Anglo-Saxon society
Anglo-Saxon England
Anglo-Norse England
British nationalism
Anglosphere
Cultural history of the United Kingdom
| 0.774859 | 0.978169 | 0.757943 |
Melting pot
|
A melting pot is a monocultural metaphor for a heterogeneous society becoming more homogeneous, the different elements "melting together" with a common culture; an alternative being a homogeneous society becoming more heterogeneous through the influx of foreign elements with different cultural backgrounds. It can also create a harmonious hybridized society known as cultural amalgamation. In the United States, the term is often used to describe the cultural integration of immigrants to the country. A related concept has been defined as "cultural additivity."
The melting-together metaphor was in use by the 1780s. The exact term "melting pot" came into general usage in the United States after it was used as a metaphor describing a fusion of nationalities, cultures and ethnicities in Israel Zangwill's 1908 play of the same name.
The desirability of assimilation and the melting pot model has been rejected by proponents of multiculturalism, who have suggested alternative metaphors to describe the current American society, such as a salad bowl, or kaleidoscope, in which different cultures mix, but remain distinct in some aspects. The melting pot continues to be used as an assimilation model in vernacular and political discourse along with more inclusive models of assimilation in the academic debates on identity, adaptation and integration of immigrants into various political, social and economic spheres.
Use of the term
The concept of immigrants "melting" into the receiving culture is found in the writings of J. Hector St. John de Crèvecœur. In his Letters from an American Farmer (1782) Crèvecœur writes, in response to his own question, "What then is the American, this new man?" that the American is one who "leaving behind him all his ancient prejudices and manners, receives new ones from the new mode of life he has embraced, the government he obeys, and the new rank he holds. He becomes an American by being received in the broad lap of our great Alma Mater. Here individuals of all nations are melted into a new race of men, whose labors and posterity will one day cause great changes in the world."
In 1845, Ralph Waldo Emerson, alluding to the development of European civilization out of the medieval Dark Ages, wrote in his private journal of America as the Utopian product of a culturally and racially mixed "smelting pot", but only in 1912 were his remarks first published.
A magazine article in 1876 used the metaphor explicitly:
In 1893, historian Frederick Jackson Turner also used the metaphor of immigrants melting into one American culture. In his essay The Significance of the Frontier in American History, he referred to the "composite nationality" of the American people, arguing that the frontier had functioned as a "crucible" where "the immigrants were Americanized, liberated and fused into a mixed race, English in neither nationality nor characteristics".
In his 1905 travel narrative The American Scene, Henry James discusses cultural intermixing in New York City as a "fusion, as of elements in solution in a vast hot pot".
United States
Melting pot "methods"
There were a number of ways that the melting pot is considered to have worked throughout American history. For example, baseball was often said to play a significant role in integrating immigrants in particular. The sport's unifying powers were first perceived in the aftermath of the 1860s Civil War. In terms of baseball's effect on native-born citizens, Jackie Robinson was a major black baseball-playing icon who crossed Major League Baseball's color line by 1947, which helped to reduce racial segregation.
Multiracial influences on culture
White Americans long regarded some elements of African-American culture quintessentially "American", while at the same time treating African Americans as second-class citizens. White appropriation, stereotyping and mimicking of black culture played an important role in the construction of an urban popular culture in which European immigrants could express themselves as Americans, through such traditions as blackface, minstrel shows and later in jazz and in early Hollywood cinema, notably in The Jazz Singer (1927).
Analyzing the "racial masquerade" that was involved in creation of a white "melting pot" culture through the stereotyping and imitation of black and other non-white cultures in the early 20th century, historian Michael Rogin has commented: "Repudiating 1920s nativism, these films [Rogin discusses The Jazz Singer, Old San Francisco (1927), Whoopee! (1930), King of Jazz (1930) celebrate the melting pot. Unlike other racially stigmatized groups, white immigrants can put on and take off their mask of difference. But the freedom promised immigrants to make themselves over points to the vacancy, the violence, the deception, and the melancholy at the core of American self-fashioning".
Ethnicity in films
This trend towards greater acceptance of ethnic and racial minorities was evident in popular culture in the combat films of World War II, starting with Bataan (1943). This film celebrated solidarity and cooperation between Americans of all races and ethnicities through the depiction of a multiracial American unit. At the time blacks and Japanese in the armed forces were still segregated, while Chinese and Indians were in integrated units.
Historian Richard Slotkin sees Bataan and the combat genre that sprang from it as the source of the "melting pot platoon", a cinematic and cultural convention symbolizing in the 1940s "an American community that did not yet exist", and thus presenting an implicit protest against racial segregation. However, Slotkin points out that ethnic and racial harmony within this platoon is predicated upon racist hatred for the Japanese enemy: "the emotion which enables the platoon to transcend racial prejudice is itself a virulent expression of racial hatred...The final heat which blends the ingredients of the melting pot is rage against an enemy which is fully dehumanized as a race of 'dirty monkeys'." He sees this racist rage as an expression of "the unresolved tension between racialism and civic egalitarianism in American life".
Olympic Games
Following the September 11, 2001 terrorist attacks, the 2002 Winter Olympics in Salt Lake City strongly revived the melting pot image, returning to a bedrock form of American nationalism and patriotism. The reemergence of Olympic melting pot discourse was driven especially by the unprecedented success of African Americans, Mexican Americans, Asian Americans, and Native Americans in events traditionally associated with Europeans and white North Americans such as speed skating and the bobsled. The 2002 Winter Olympics was also a showcase of American religious freedom and cultural tolerance of the history of Utah's large majority population of the Church of Jesus Christ of Latter-day Saints, as well as representation of Muslim Americans and other religious groups in the U.S. Olympic team.
Melting pot and cultural pluralism
In Henry Ford's Ford English School (established in 1914), the graduation ceremony for immigrant employees involved symbolically stepping off an immigrant ship and passing through the melting pot, entering at one end in costumes designating their nationality and emerging at the other end in identical suits and waving American flags.
In response to the pressure exerted on immigrants to culturally assimilate and also as a reaction against the denigration of the culture of non-Anglo white immigrants by Nativists, intellectuals on the left, such as Horace Kallen in Democracy Versus the Melting-Pot (1915), and Randolph Bourne in Trans-National America (1916), laid the foundations for the concept of cultural pluralism. This term was coined by Kallen.
In the United States, where the term melting pot is still commonly used, the ideas of cultural pluralism and multiculturalism have, in some circles, taken precedence over the idea of assimilation. Alternate models where immigrants retain their native cultures such as the "salad bowl" or the "symphony" are more often used by sociologists to describe how cultures and ethnicities mix in the United States. Mayor David Dinkins, when referring to New York City, described it as "not a melting pot, but a gorgeous mosaic...of race and religious faith, of national origin and sexual orientation – of individuals whose families arrived yesterday and generations ago..."
Since the 1960s, much research in sociology and history has disregarded the melting pot theory for describing interethnic relations in the United States and other countries.
Whether to support a melting-pot or multicultural approach has developed into an issue of much debate within some countries. For example, the French and British governments and populace are currently debating whether Islamic cultural practices and dress conflict with their attempts to form culturally unified countries.
Use in other regions
Israel
Today the reaction to Israel's melting pot policy is ambivalent; some say that it was a necessary measure in the founding years, while others claim that it amounted to cultural oppression. Others argue that the policy did not achieve its declared target: for example, persons born in Israel are, from an economic point of view, more similar to their parents than to the rest of the population.
Southeast Asia
The term has been used to describe a number of countries in Southeast Asia. Given the region's location and importance to trade routes between China and the Western world, certain countries in the region have become ethnically diverse. In Vietnam, a relevant phenomenon is "" (lit. "Three spears, one point," idiomatically "three teachers, one lesson"), which references the harmonious co-existence and mutually influencing teachings of the nation's three major religious schools, Confucianism, Buddhism, and Taoism, demonstrating a process described as "cultural additivity".
In contrast to the melting pot theory, Malaysia and Singapore promote cultural preservation of their various ethnicities. In Malaysia the saying "agama, bangsa, negara" ("religion, people, nation") expresses the idea that, although Malaysia is made up of different religions and ethnicities, all are citizens and everyone should respect one another, join hands, and work together. Each ethnicity should work to preserve its own ethnic identity while at the same time working together to build Malaysia as a national effort, living in peace and harmony.
In popular culture
Animated educational series Schoolhouse Rock! has a song entitled "The Great American Melting Pot".
Quotations
See also
Cultural amalgamation
Cultural assimilation
Cultural diversity
Cultural pluralism
Ethnic group
Hyphenated American
Interculturalism
Lusotropicalism
More Irish than the Irish themselves
Multiculturalism in Canada
Multicultural media in Canada
Nation-building
Race of the future
Racial integration
Social integration
Transculturation
Zhonghua minzu
References
External links
1780s neologisms
Culture of the United States
Cultural assimilation
English-language idioms
Idioms
Metaphors referring to objects
Nationalism
Political metaphors
| 0.76006 | 0.997167 | 0.757907 |
Finishing school
|
A finishing school focuses on teaching young women social graces and upper-class cultural rites as a preparation for entry into society. The name reflects the fact that it follows ordinary school and is intended to complete a young woman's education by providing classes primarily on deportment, etiquette, and other non-academic subjects. The school may offer an intensive course, or a one-year programme. In the United States, a finishing school is sometimes called a charm school.
Graeme Donald claims that the educational ladies' salons of the late 19th century led to the formal finishing institutions common in Switzerland around that time. At the schools' peak, thousands of wealthy young women were sent to one of the dozens of finishing schools available. The primary goals of such institutions were to teach students the skills necessary to attract a good husband, and to become interesting socialites and wives.
The 1960s marked the decline of the finishing schools worldwide. This decline can be attributed to the shifting conceptions of women's role in society, competition from tertiary education, to succession issues within the typically family-run schools, and, sometimes, to commercial pressures driven by the high value of the properties that the schools occupied. The 1990s saw a revival of the finishing school, although the business model was radically altered.
By country
Switzerland
In the early 20th century, Switzerland was known for its private finishing schools. Most operated in the French-speaking cantons near Lake Geneva. The country was favoured by parents and guardians because of its reputation as a healthful environment, its multi-lingual and cosmopolitan aura, and the country's political stability.
Notable examples
The finishing schools that made Switzerland renowned for such institutions included:
Brillantmont (founded in 1882, now an international secondary-school that offers a 'grade 14' or graduate year of cultural studies) and Château Mont-Choisi (founded in 1885, closed in 1995 or 1996). Both were in Lausanne. The Maharani of Jaipur (1919-2009) studied at Brillantmont. In her memoir, she portrayed the time as a happy one, in which she wrote letters to her husband-to-be and pursued skiing and other sports. Actress Gene Tierney (1920-1991) also attended Brillantmont, speaking only French and holidaying with fellow-students in Norway and England.
was attended by Carla Bruni-Sarkozy, as well as by Princess Elena of Romania, Monique Lhuillier, actress Kitty Carlisle, Saudi scholar Mai Yamani and New York socialite Fabiola Beracasa-Beckman. It was one of the first Swiss finishing schools in the 19th century and in its early years a pioneer in secondary education. It was owned by an Italian family for five years prior to its closure (due to financial reasons) after over 100 years of educating women. Like many of its peers it adopted a serious secondary-education programme in the early 20th century.
Institut Alpin Videmanette in Rougemont was attended by Diana, Princess of Wales (1961-1997), Princess Irene of Greece and Denmark, Tiggy Legge-Bourke and Tamara Mellon. Lady Diana was sent to Alpin Videmanette by her father after failing all her O-Levels. She had met the Prince of Wales that year.
Mon Fertile in Tolochenaz educated Queen Camilla and Ingrid Detter de Lupis Frankopan.
Institut Le Mesnil was attended by Queen Anne-Marie of Greece after completing her high-school education at the nearby Le Chatelard School, also in Montreux. Le Mesnil, owned by the Navarro family, closed in 2004. Le Chatelard today offers education in the American model of junior-high and high-school up to the age of 17. The organization today offers savoir vivre and culinary courses along the lines of the traditional finishing schools, but these supplement rather than replace academic subjects.
Le Manoir, in Lausanne, educated British secret agent Vera Atkins (1908–2000) and a sister of the king of the Albanians. It had a private beach and students were taken skiing in St Moritz.
Institut Villa Pierrefeu in Glion, Vaud, founded in 1954, is the last remaining traditional Swiss finishing school.
Great Britain
In London there were a number of schools in the 20th century, including the Cygnet's House, the Monkey Club, St James and Lucie Clayton. The latter two merged in 2005 to become St James and Lucie Clayton College and were joined by a third, Queens (a secretarial college), to become the current Quest Professional. The merged college stopped offering any etiquette or protocol training, which was instead taken up by a former Lucie Clayton tutor who had started The English Manner in 2001, when Lucie Clayton wound up. Quest Professional is in London's Victoria district, is coeducational, and offers business administration courses for students aged 16–25.
Eggleston Hall was located in County Durham and taught young ladies aged 16–20 from the 1960s until the late 1980s.
Evendine Court in Malvern began as a small school in the late 19th century teaching young ladies the duties of their families' household staff, by requiring them to complete domestic work themselves. Courses typically lasted six weeks. By 1900, the school had become popular. It extended to several buildings and included a working dairy farm to teach practical farming. During the Second World War it adopted more traditional finishing school subjects for young women unable to travel to Europe. Pupil numbers remained high until the mid-1990s, with a broader curriculum covering cordon bleu cookery, self presentation, and secretarial skills. It closed in 1998.
Winkfield Place in Ascot specialised in culinary expertise and moved to a new location in Surrey around 1990 when it joined with Moor Park Finishing School before Moor Park closed in 1998/99. Winkfield Place was founded by women's educator Constance Spry as a flower arranging and domestic science school and had an international reputation. It taught girls across three terms of an academic year with the possibility of studying Le Cordon Bleu courses with Rosemary Hume in a fourth term.
About a decade after these schools had closed, mostly by the end of the 20th century, public relations and image consultancy firms started to appear in London, offering largely one- or two-day finishing and social skills courses at commercial-rate fees which were proportionately far higher than those charged by the schools.
The old finishing schools were stand-alone organizations that lasted 15–50 years and were often family run. Curricula varied between schools based on the proprietor's philosophy, much like the British private school model of the 18th and 19th centuries. Some schools offered O-level and A-level courses or recognised arts and languages certificates, and sometimes allowed pupils to retake a course they had not passed at secondary school. They often taught languages and commercially or domestically applicable skills, such as cooking, secretarial and later business studies, with the aim of broadening the students' horizons beyond formal schooling.
United States
Through much of their history, American finishing schools emphasised social graces and de-emphasised scholarship: society encouraged a polished young lady to hide her intellectual prowess for fear of frightening away suitors. For instance, Miss Porter's School in 1843 advertised itself as Miss Porter's Finishing School for Young Ladies—even though its founder was a noted scholar offering a rigorous curriculum that educated the illustrious classicist Edith Hamilton.
Today, with a new cultural climate and a different attitude to the role of women, the situation has reversed: Miss Porter's School downplays its origins as a finishing school, and emphasises the rigour of its academics. Likewise, Finch College on Manhattan's Upper East Side was "one of the most famed of U.S. girls' finishing schools", but its last president chose to describe it as a liberal arts college, offering academics as rigorous as Barnard or Bryn Mawr. It closed in 1976.
The term finishing school is occasionally used, or misused, in American parlance to refer to certain small women's colleges, primarily on the East Coast, that were once known for preparing their female students for marriage. Since the 1960s, many of these schools have closed as a result of financial difficulties. These stemmed from changing societal norms, which made it easier for women to pursue academic and professional paths.
In literature
The Finishing School, a 2004 novel by Scottish author Muriel Spark, concerns 'College Sunrise', a present-day finishing school in Ouchy on the banks of Lake Geneva near Lausanne in Switzerland. Unlike the traditional finishing schools, the one in this novel is mixed-sex.
References
School types
Women and education
Whiggism
Whiggism or Whiggery is a political philosophy that grew out of the Parliamentarian faction in the Wars of the Three Kingdoms (1639–1651) and was concretely formulated by Lord Shaftesbury during the Stuart Restoration. The Whigs advocated the supremacy of Parliament (as opposed to that of the king), government centralization, and coercive Anglicisation through the educational system. They also staunchly opposed granting freedom of religion, civil rights, or voting rights to anyone who worshipped outside of the Established Churches of the realm. Eventually, the Whigs grudgingly conceded strictly limited religious toleration for Protestant dissenters, while continuing the religious persecution and disenfranchisement of Roman Catholics and Scottish Episcopalians. They were particularly determined to prevent the accession of a Catholic heir presumptive to the British throne, especially of James II or his legitimate male descendants, and instead granted the throne to the Protestant House of Hanover in 1714. Whig ideology is associated with early conservative liberalism.
Beginning with the Titus Oates plot and the Exclusion Crisis of 1679–1681, and the Glorious Revolution of 1688–1689, Whiggism dominated English and British politics until about 1760, after which the Whigs splintered into different political factions. In the same year, King George III came to the throne and allowed the Tories back into the Government. Even so, some modern historians now call the period between 1714 and 1783 the "age of the Whig oligarchy".
Even after 1760, the Whigs still included about half of the newest noble families in England, Ireland, Wales, and Scotland, as well as most merchants, dissenters, and the middle classes. The opposing Tory position was held by the other great families, the non-juring and high church factions within the Church of England, many Catholics and Protestant Dissenters, most of the landed gentry and the traditional officer class of the British armed forces. Whigs especially opposed regime change efforts by adherents of Jacobitism, a movement of legitimist monarchists which promised freedom of religion and civil rights to all outside the Established Churches, devolution in the United Kingdom, linguistic rights for minority languages, and many other political reforms, and which shared a substantial overlap with and heavily influenced both early Toryism and what is now termed traditionalist conservatism. While in power, Whig politicians frequently denounced all their political opponents and critics as "Jacobites" or "dupes of Jacobites".
The term "Old Whigs" was also used in Great Britain for those Whigs who opposed Robert Walpole as part of the Country Party. Whiggism originally referred to the Whigs of the British Isles, but the name of "Old Whigs" was largely adopted by the American Patriots in the Thirteen Colonies. Before and during the American Revolution, American Whiggism, in a deeply ironic reversal, weaponized Whig political philosophy about the social contract enforced by the right of revolution against both the Whig-dominated government in Westminster and the Hanoverian monarchs. In the process, American Whiggism ultimately transitioned from monarchism into republicanism and Federalism, while also co-opting many traditionally Jacobite, Counter-Enlightenment, and early Tory positions. A similar but far more discreet co-opting was also taking place in the British Isles among many self-described Whigs, including Edmund Burke, Henry Grattan, William Wilberforce, Daniel O'Connell, and William Pitt the Younger. Even though they were often influenced in this regard by the writings of early Tories and other intellectual critics of the Whig party like Jonathan Swift, Lord Bolingbroke, and David Hume, these reformist Whigs, similarly to American Patriots, refused to use the word "Tory" as anything other than a term of abuse against those with more traditionalist Whig ideology, which ultimately changed the word's meaning completely. Whig history, which was largely developed by Thomas Babington Macaulay to justify the party's political ideology and past practices, remained the official history of the British Empire until serious challenges were raised to its claims by John Lingard, William Cobbett, Hilaire Belloc, G.K. Chesterton, Roger Scruton, Saunders Lewis, and John Lorne Campbell.
Origins of the term
Quickly following the adoption of "Whig" as the name of a political faction, the word "Whiggism" arose from the appendage of the suffix "-ism", creating a term for the Whigs' political ideology. It was already in use by the 1680s. In 1682, Edmund Hickeringill published his History of Whiggism. In 1702, writing satirically in the guise of a Tory, Daniel Defoe asserted: "We can never enjoy a settled uninterrupted Union and Tranquility in this Nation, till the Spirit of Whiggisme, Faction, and Schism is melted down like the Old-Money". The name probably originates from a shortening of Whiggamore, referring to the Whiggamore Raid.
The word "Whiggery", deriving from "Whig" and the suffix "", has a similar meaning and has been used since the late 1600s.
Origins
The true origins of what became known as Whiggism lie in the Wars of the Three Kingdoms and the power struggle between the Parliament of England and King Charles I, which eventually turned into the English Civil Wars. The example of successful violent opposition to the king had already been set by the Bishops' Wars, fought between the same king, in his capacity as king of Scotland, on the one side and the Parliament of Scotland and the Church of Scotland on the other. However, the immediate origins of the Whigs and Whiggism were in the Exclusion Bill crisis of 1678 to 1681, in which a country party battled a court party in an unsuccessful attempt to exclude James, Duke of York, from succeeding his brother Charles II as king of England, Scotland and Ireland. This crisis was prompted by Charles's lack of a legitimate heir, by the discovery in 1673 that James was a Roman Catholic, and by the so-called Popish Plot of 1678.
While a major principle of Whiggism was opposition to popery, that was always much more than a mere religious preference in favour of Protestantism, although most Whigs did have such a preference. Sir Henry Capel outlined the principal motivation behind the cry of "no popery" in a speech to the House of Commons on 27 April 1679, arguing that striking down popery would put an end to arbitrary government.
Although they were unsuccessful in preventing the accession of the Duke of York to the throne, the Whigs in alliance with William of Orange brought him down in the Glorious Revolution of 1688. By that event, a new supremacy of parliament was established, which itself was one of the principles of Whiggism, much as it had been the chief principle of the Roundheads in an earlier generation.
The great Whiggish achievement was the Bill of Rights of 1689. It made Parliament, not the Crown, supreme. It established free elections to the Commons (although these were mostly controlled by local landlords), guaranteed free speech in parliamentary debates, and prohibited "cruel and unusual punishment".
Variations
Lee Ward (2008) argues that the philosophical origins of Whiggism came in James Tyrrell's Patriarcha Non Monarcha (1681), John Locke's Two Treatises of Government (1689) and Algernon Sidney's Discourses Concerning Government (1698). All three were united in opposing Sir Robert Filmer's defence of divine right and absolute monarchy. Tyrrell propounded a moderate Whiggism which interpreted England's balanced and mixed constitution "as the product of a contextualized social compact blending elements of custom, history, and prescription with inherent natural law obligations". Sidney, on the other hand, emphasised the main themes of republicanism and based Whig ideology in the sovereignty of the people by proposing a constitutional reordering that would both elevate the authority of Parliament and democratise its forms. Sidney also emphasised classical republican notions of virtue. Ward says that Locke's liberal Whiggism rested on a radically individualist theory of natural rights and limited government. Tyrrell's moderate position came to dominate Whiggism and British constitutionalism as a whole from 1688 to the 1770s. The more radical ideas of Sidney and Locke, argues Ward, became marginalised in Britain, but emerged as a dominant strand in American republicanism. The issues raised by the Americans, starting with the Stamp Act crisis of 1765, ripped Whiggism apart in a battle of parliamentary sovereignty (Tyrrell) versus popular sovereignty (Sidney and Locke).
Across the British Empire
Whiggism took different forms in England and Scotland, even though from 1707 the two nations shared a single parliament. While English Whiggism had at its heart the power of parliament, creating for that purpose a constitutional monarchy and a permanently Protestant succession to the throne, Scottish Whigs gave a higher priority to using power for religious purposes, including maintaining the authority of the Church of Scotland, justifying the Protestant Reformation and emulating the Covenanters.
There were also Whigs in the North American colonies and while Whiggism there had much in common with that in Great Britain, it too had its own priorities. In the unfolding of the American Revolution such Whiggism became known as republicanism.
In India, Prashad (1966) argues that the profound influence of the ideas of Edmund Burke introduced Whiggism into the mainstream of Indian political thought. Indian thinkers adopted the basic assumptions of Whiggism, especially the natural leadership of an elite, the political incapacity of the masses, the great partnership of civil society, and the best methods of achieving social progress, and applied them in analysing the nature of society and the nation and in depicting the character of the ideal state.
Further reading
Carswell, John, The Old Cause: Three Biographical Studies in Whiggism (London: The Cresset Press, 1954) deals with Thomas Wharton, George Dodington, and Charles James Fox
Dickinson, H. T., Walpole and the Whig Supremacy (University of London Press, 1973)
Dickinson, H. T. "Whiggism in the eighteenth century", in John Cannon, ed., The Whig Ascendancy: Colloquies on Hanoverian Britain (1981), pp. 28–44
Dray, William Herbert, "J. H. Hexter, Neo-Whiggism and Early Stuart Historiography", in History and Theory, vol. 26 (1987), pp. 133–149
Goldie, Mark, "The Roots of True Whiggism, 1688–94", in History of Political Thought 1 (1980), pp. 195–236
Guttridge, George Herbert, English Whiggism and the American Revolution (Berkeley and Los Angeles: University of California Press, 1942)
Mitchell, Leslie, Whig World: 1760–1837 (2006)
O'Gorman, Frank, The Rise of Party in England: The Rockingham Whigs 1760–1782 (London: George Allen & Unwin Ltd, 1975)
Robbins, Caroline. The Eighteenth-Century Commonwealthman: Studies in the Transmission, Development, and Circumstance of English Liberal Thought from the Restoration of Charles II until the War with the Thirteen Colonies (1959, 2004).
Smith, Ernest Anthony, Whig principles and party politics: Earl Fitzwilliam and the Whig party, 1748–1833 (Manchester University Press, 1975)
Ward, Lee, The Politics of Liberty in England and Revolutionary America (Cambridge University Press, 2004)
Williams, Basil, and C. H. Stuart, The Whig Supremacy, 1714–1760 (Oxford History of England) (2nd ed. 1962)
Womersley, David, Paddy Bullard, Abigail Williams, Cultures of Whiggism: New Essays on English Literature and Culture in the Long Eighteenth Century (University of Delaware Press, 2005)
Primary sources
Burke, Edmund, An Appeal from the New to the Old Whigs (1791)
De Quincey, Thomas. "Dr. Samuel Parr: or, Whiggism in its relations to literature" in Works of Thomas de Quincey, vol. v, from p. 30
Disraeli, Benjamin, ed. William Hutcheon, Whigs and Whiggism: political writings (new edition, 1971)
Hickeringill, Edmund, The history of Whiggism: or, The Whiggish-plots, principles, and practices (London: Printed for E. Smith, at the Elephant and Castle in Cornhill, 1682)
See also
Whig history
Patriot Whigs
Radical Whigs
Rockingham Whigs
True Whig Party
Whig Party (United States)
Patriot (American Revolution)
References
Conservative liberalism
Classical liberalism
Liberalism in the United Kingdom
Political ideologies
Political philosophy
Political culture
Political history of England
Politics of the Kingdom of Great Britain
Culture theory
Culture theory is the branch of comparative anthropology and semiotics that seeks to define the heuristic concept of culture in operational and/or scientific terms.
Overview
In the 19th century, "culture" was used by some to refer to a wide array of human activities, and by some others as a synonym for "civilization". In the 20th century, anthropologists began theorizing about culture as an object of scientific analysis. Some used it to distinguish human adaptive strategies from the largely instinctive adaptive strategies of animals, including the adaptive strategies of other primates and non-human hominids, whereas others used it to refer to symbolic representations and expressions of human experience, with no direct adaptive value. Both groups understood culture as being definitive of human nature.
According to many theories that have gained wide acceptance among anthropologists, culture exhibits the way that humans interpret their biology and their environment. According to this point of view, culture becomes such an integral part of human existence that it is the human environment, and most cultural change can be attributed to human adaptation to historical events. Moreover, given that culture is seen as the primary adaptive mechanism of humans and takes place much faster than human biological evolution, most cultural change can be viewed as culture adapting to itself.
Although most anthropologists try to define culture in such a way that it separates human beings from other animals, many human traits are similar to those of other animals, particularly the traits of other primates. For example, chimpanzees have big brains, but human brains are bigger. Similarly, bonobos exhibit complex sexual behaviour, but human beings exhibit much more complex sexual behaviours. As such, anthropologists often debate whether human behaviour is different from animal behaviour in degree rather than in kind; they must also find ways to distinguish cultural behaviour from sociological behaviour and psychological behaviour.
Acceleration and amplification of these various aspects of culture change have been explored by the complexity economist W. Brian Arthur. In his book The Nature of Technology, Arthur attempts to articulate a theory of change that considers how existing technologies (or material culture) are combined in unique ways to produce novel technologies. Behind that novel combination is a purposeful effort arising in human motivation. This articulation would suggest that we are just beginning to understand what might be required for a more robust theory of culture and culture change, one that brings coherence across many disciplines and reflects an integrating elegance.
See also
Cultural studies
Culturology
Cultural behavior
Culture industry
Critical theory
Dual inheritance theory
Engaged theory
Intercultural relations
Popular culture studies
Semiotics of culture
Structuralism
Tartu–Moscow Semiotic School
References
Cultural anthropology
Cultural studies
Theories
Toponymy
Toponymy, toponymics, or toponomastics is the study of toponyms (proper names of places, also known as place names and geographic names), including their origins, meanings, usage and types. Toponym is the general term for a proper name of any geographical feature, and the full scope of the term also includes proper names of all cosmographical features.
In a more specific sense, the term toponymy refers to an inventory of toponyms, while the discipline researching such names is referred to as toponymics or toponomastics. Toponymy is a branch of onomastics, the study of proper names of all kinds. A person who studies toponymy is called a toponymist.
Etymology
The term toponymy comes from the Ancient Greek τόπος (tópos), 'place', and ὄνομα (ónoma), 'name'.
The Oxford English Dictionary records toponymy (meaning "place name") first appearing in English in 1876. Since then, toponym has come to replace the term place-name in professional discourse among geographers.
Toponymic typology
Toponyms can be divided into two principal groups:
geonyms - proper names of all geographical features on planet Earth.
cosmonyms - proper names of cosmographical features, outside Earth.
Various types of geographical toponyms (geonyms) include, in alphabetical order:
agronyms - proper names of fields and plains.
choronyms - proper names of regions or countries.
dromonyms - proper names of roads or any other transport routes by land, water or air.
drymonyms - proper names of woods and forests.
econyms - proper names of inhabited locations, like houses, villages, towns or cities, including:
comonyms - proper names of villages.
astionyms - proper names of towns and cities.
hydronyms - proper names of various bodies of water, including:
helonyms - proper names of swamps, marshes and bogs.
limnonyms - proper names of lakes and ponds.
oceanonyms - proper names of oceans.
pelagonyms - proper names of seas.
potamonyms - proper names of rivers and streams.
insulonyms - proper names of islands.
metatoponyms - proper names of places containing recursive elements (e.g. Red River Valley Road).
oronyms - proper names of relief features, like mountains, hills and valleys, including:
speleonyms - proper names of caves or some other subterranean features.
petronyms - proper names of rock climbing routes.
urbanonyms - proper names of urban elements (streets, squares etc.) in settlements, including:
agoronyms - proper names of squares and marketplaces.
hodonyms - proper names of streets and roads.
Various types of cosmographical toponyms (cosmonyms) include:
asteroidonyms - proper names of asteroids.
astronyms - proper names of stars and constellations.
cometonyms - proper names of comets.
meteoronyms - proper names of meteors.
planetonyms - proper names of planets and planetary systems.
History
Probably the first toponymists were the storytellers and poets who explained the origin of specific place names as part of their tales; sometimes place-names served as the basis for their etiological legends. The process of folk etymology usually took over, whereby a false meaning was extracted from a name based on its structure or sounds. Thus, for example, the toponym of Hellespont was explained by Greek poets as being named after Helle, daughter of Athamas, who drowned there as she crossed it with her brother Phrixus on a flying golden ram. The name, however, is probably derived from an older language, such as Pelasgian, which was unknown to those who explained its origin. In his Names on the Globe, George R. Stewart theorizes that Hellespont originally meant something like 'narrow Pontus' or 'entrance to Pontus', Pontus being an ancient name for the region around the Black Sea, and by extension, for the sea itself.
Especially in the 19th century, the age of exploration, many toponyms were changed out of national pride. Thus the famous German cartographer Petermann thought that the naming of newly discovered physical features was one of the privileges of a map editor, especially as he was fed up with forever encountering toponyms like 'Victoria', 'Wellington', 'Smith', 'Jones', etc. He writes: "While constructing the new map to specify the detailed topographical portrayal and after consulting with and authorization of messr. Theodor von Heuglin and count Karl Graf von Waldburg-Zeil I have entered 118 names in the map: partly they are the names derived from celebrities of arctic explorations and discoveries, arctic travellers anyway as well as excellent friends, patrons, and participants of different nationalities in the newest northpolar expeditions, partly eminent German travellers in Africa, Australia, America ...".
Places may bear different names through time, due to changes and developments in languages, political developments, and border adjustments, to name but a few causes. More recently, many postcolonial countries have reverted to their own nomenclature for places that were named by colonial powers.
Toponomastics
Place names provide the most useful geographical reference system in the world. Consistency and accuracy are essential in referring to a place to prevent confusion in everyday business and recreation.
A toponymist, through well-established local principles and procedures developed in cooperation and consultation with the United Nations Group of Experts on Geographical Names (UNGEGN), applies the science of toponymy to establish officially recognized geographical names. A toponymist relies not only on maps and local histories, but interviews with local residents to determine names with established local usage. The exact application of a toponym, its specific language, its pronunciation, and its origins and meaning are all important facts to be recorded during name surveys.
Scholars have found that toponyms provide valuable insight into the historical geography of a particular region. In 1954, F. M. Powicke said of place-name study that it "uses, enriches and tests the discoveries of archaeology and history and the rules of the philologists."
Toponyms not only illustrate ethnic settlement patterns, but they can also help identify discrete periods of immigration.
Toponymists are responsible for the active preservation of their region's culture through its toponymy. They typically ensure the ongoing development of a geographical names database and associated publications, for recording and disseminating authoritative hard-copy and digital toponymic data. This data may be disseminated in a wide variety of formats, including hard-copy topographic maps as well as digital formats such as geographic information systems, Google Maps, or thesauri like the Getty Thesaurus of Geographic Names.
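The kinds of facts described above (the approved name, its language, pronunciation, origin, and exact application) are what a names database ultimately stores for each feature. The following Python sketch shows one way such a record could be structured; the class name, the fields, and the example values are illustrative assumptions only, not the schema of any actual national gazetteer or of the Getty Thesaurus of Geographic Names.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GazetteerEntry:
    """One record in a hypothetical geographical names database (illustrative only)."""
    name: str                        # the toponym as surveyed or officially approved
    feature_type: str                # e.g. "river", "settlement", "road"
    language: str                    # language of the name
    latitude: float                  # location of the named feature
    longitude: float
    pronunciation: Optional[str] = None   # e.g. as recorded from local residents
    origin_notes: Optional[str] = None    # origin and meaning, where established
    variant_names: List[str] = field(default_factory=list)  # historical or local variants
    status: str = "official"         # e.g. "official", "variant", "historical"

# A made-up example record, reusing a name mentioned above purely for illustration.
entry = GazetteerEntry(
    name="Red River Valley Road",
    feature_type="road",
    language="en",
    latitude=47.9,                   # invented coordinates
    longitude=-97.1,
    origin_notes="metatoponym containing the recursive element 'Red River Valley'",
)
print(entry.name, "-", entry.feature_type, "-", entry.status)
```

A real authority's database would carry far more detail, such as approval dates, sources, and map references, but the basic shape of each record is similar.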
Toponymic commemoration
In 2002, the United Nations Conference on the Standardization of Geographical Names acknowledged that while common, the practice of naming geographical places after living persons (toponymic commemoration) could be problematic. Therefore, the United Nations Group of Experts on Geographical Names recommends that it be avoided and that national authorities should set their own guidelines as to the time required after a person's death for the use of a commemorative name.
In the same vein, writers Pinchevski and Torgovnik (2002) consider the naming of streets as a political act in which holders of the legitimate monopoly to name aspire to engrave their ideological views in the social space. Similarly, the revisionist practice of
renaming streets, as both the celebration of triumph and the repudiation of the old regime is another issue of toponymy. Also, in the context of Slavic nationalism, the name of Saint Petersburg was changed to the more Slavic sounding Petrograd from 1914 to 1924, then to Leningrad following the death of Vladimir Lenin and back to Saint-Peterburg in 1991 following the dissolution of the Soviet Union. After 1830, in the wake of the Greek War of Independence and the establishment of an independent Greek state, Turkish, Slavic and Italian place names were Hellenized, as an effort of "toponymic cleansing." This nationalization of place names can also manifest itself in a postcolonial context.
In Canada, there have been initiatives in recent years "to restore traditional names to reflect the Indigenous culture wherever possible". Indigenous mapping is a process that can include restoring place names by Indigenous communities themselves.
Frictions sometimes arise between countries because of toponymy, as illustrated by the Macedonia naming dispute in which Greece has claimed the name Macedonia, the Sea of Japan naming dispute between Japan and Korea, as well as the Persian Gulf naming dispute. On 20 September 1996 a note on the internet reflected a query by a Canadian surfer, who said as follows: 'One producer of maps labeled the water body "Persian Gulf" on a 1977 map of Iran, and then "Arabian Gulf", also in 1977, in a map which focused on the Gulf States. I would gather that this is an indication of the "politics of maps", but I would be interested to know if this was done to avoid upsetting users of the Iran map and users of the map showing Arab Gulf States'. This symbolizes a further aspect of the topic, namely the spilling over of the problem from the purely political to the economic sphere.
Geographic names boards
A geographic names board is an official body established by a government to decide on official names for geographical areas and features.
Most countries have such a body, which is commonly (but not always) known under this name. Also, in some countries (especially those organised on a federal basis), subdivisions such as individual states or provinces will have individual boards.
Individual geographic names boards include:
Antarctic Place-names Commission
Commission nationale de toponymie (National toponymy commission - France)
Geographical Names Board of Canada
Geographical Names Board of New South Wales
New Zealand Geographic Board
South African Geographical Names Council
United States Board on Geographic Names
Notable toponymists
Marcel Aurousseau (1891–1983), Australian geographer, geologist, war hero, historian and translator
Guido Borghi (born 1969), Italian historical linguist and toponymist
Andrew Breeze (born 1954), English linguist
William Bright (1928–2006), American linguist
Richard Coates (born 1949), English linguist
Joan Coromines (1905–1997), etymologist, dialectologist, toponymist
Albert Dauzat (1877–1955), French linguist
Eilert Ekwall (1877–1964, Sweden)
Henry Gannett (1846–1914), American geographer
Margaret Gelling (1924–2009), English toponymist
Michel Grosclaude (1926–2002), philosopher and French linguist
Erwin Gustav Gudde
Joshua Nash, Australian linguist and toponymist
Ernest Nègre (1907–2000), French toponymist
W. F. H. Nicolaisen (1927–2016), folklorist, linguist, medievalist
Oliver Padel (born 1948), English medievalist and toponymist
Francesco Perono Cacciafoco (born 1980), Italian historical linguist and toponymist
Robert L. Ramsay (1880–1953), American linguist
Adrian Room (1933–2010), British toponymist and onomastician
Charles Rostaing (1904–1999), French linguist
Henry Schoolcraft (1793–1864), American geographer, geologist and ethnologist
Walter Skeat (1835–1912), British philologist
Albert Hugh Smith (1903–1967), scholar of Old English and Scandinavian languages
Frank Stenton (1880–1967), historian of Anglo-Saxon England
George R. Stewart (1895–1980), American historian, toponymist and novelist
Jan Paul Strid (1947–2018), Swedish toponymist
Isaac Taylor (1829–1901), philologist, toponymist and Anglican canon of York
Jan Tent, Australian linguist and toponymist
James Hammond Trumbull (1821–1897), American scholar and philologist
William J. Watson (1865–1948), Scottish scholar
See also
Related concepts
Anthroponymy
Demonymy
List of demonyms for US states and territories
Ethnonymy
Exonym and endonym
Gazetteer
Lists of places
Oeconym
Toponymy of the Kerguelen Islands
Toponymy
Toponymic surname
Planetary nomenclature
Hydronymy
Latin names of European rivers
Latin names of rivers
List of river name etymologies
Old European hydronymy
Regional toponymy
Biblical toponyms in the United States
Celtic toponymy
German toponymy
Germanic toponymy
Historical African place names
Japanese place names
Korean toponymy and list of place names
List of English exonyms for German toponyms
List of French exonyms for Dutch toponyms
List of French exonyms for German toponyms
List of French exonyms for Italian toponyms
List of Latin place names in Europe
List of modern names for biblical place names
List of renamed places in the United States
List of U.S. place names connected to Sweden
List of U.S. States and Territorial demonyms
List of U.S. state name etymologies
List of U.S. state nicknames
Maghreb toponymy
Names of European cities in different languages
New Zealand place names
Norman toponymy
Oikonyms in Western and South Asia
Place names of Palestine
Hebraization of Palestinian place names
Place names in Sri Lanka
Roman place names
Toponyms of Finland
Toponyms of Turkey
Toponymy in the United Kingdom and Ireland
List of British places with Latin names
List of generic forms in place names in the British Isles
List of places in the United Kingdom
List of Roman place names in Britain
Place names in Irish
Welsh place names
Territorial designation
Toponymical list of counties of the United Kingdom
Other
Labeling (map design)
List of adjectival forms of place names
List of double placenames
List of long place names
List of names in English with counterintuitive pronunciations
List of places named after peace
List of places named after Lenin
List of places named after Stalin
List of places named for their main products
List of political entities named after people
List of short place names
List of tautological place names
List of words derived from toponyms
Lists of things named after places
List of geographic acronyms and initialisms
List of geographic portmanteaus
List of geographic anagrams and ananyms
United Nations Group of Experts on Geographical Names
UNGEGN Toponymic Guidelines
References
Sources
Further reading
Berg, Lawrence D. and Jani Vuolteenaho. 2009. Critical Toponymies (Re-Materialising Cultural Geography). Ashgate Publishing.
Cablitz, Gabriele H. 2008. "When 'what' is 'where': A linguistic analysis of landscape terms, place names and body part terms in Marquesan (Oceanic, French Polynesia)." Language Sciences 30(2/3):200–26.
Desjardins, Louis-Hébert. 1973. Les noms géographiques: lexique polyglotte, suivi d'un glossaire de 500 mots. Leméac.
Hargitai, Henrik I. 2006. "Planetary Maps: Visualization and Nomenclature." Cartographica 41(2):149–64
Hargitai, Henrik I., Hugh S. Gregory, Jan Osburg, and Dennis Hands. 2007. "Development of a Local Toponym System at the Mars Desert Research Station." Cartographica 42(2):179–87.
Hercus, Luise, Flavia Hodges, and Jane Simpson. 2009. The Land is a Map: Placenames of Indigenous Origin in Australia. Pandanus Books.
Kadmon, Naftali. 2000. Toponymy: the lore, laws, and language of geographical names. Vantage Press.
Perono Cacciafoco, Francesco and Francesco Paolo Cavallaro. 2023. Place Names: Approaches and Perspectives in Toponymy and Toponomastics. Cambridge University Press.
External links
Who Was Who in North American Name Study
Forgotten Toponymy Board (German)
The origins of British place names (archived 1 March 2012)
An Index to the Historical Place Names of Cornwall
Celtic toponymy (archived 10 February 2012)
The Doukhobor Gazetteer, Doukhobor Heritage website, by Jonathan Kalmakoff.
O'Brien Jr., Francis J. (Moondancer) "Indian Place Names—Aquidneck Indian Council"
Ghana Place Names
Index Anatolicus: Toponyms of Turkey
The University of Nottingham's: Key to English Place-names searchable map.
The Etymology of Mars crater names on Internet Archive
Jidaigeki
Jidaigeki is a genre of film, television, video game, and theatre in Japan. Literally meaning "period dramas", it refers to stories that take place before the Meiji Restoration of 1868.
Jidaigeki show the lives of the samurai, farmers, craftsmen, and merchants of their time. Jidaigeki films are sometimes referred to as chambara movies, a word meaning "sword fight", though chambara is more accurately a subgenre of jidaigeki. Jidaigeki rely on an established set of dramatic conventions including the use of makeup, language, catchphrases, and plotlines.
Types
Many jidaigeki take place in Edo, the military capital. Others show the adventures of people wandering from place to place. The long-running television series Zenigata Heiji and Abarenbō Shōgun typify the Edo jidaigeki. Mito Kōmon, the fictitious story of the travels of the historical daimyō Tokugawa Mitsukuni, and the Zatoichi movies and television series, exemplify the traveling style.
Another way to categorize jidaigeki is according to the social status of the principal characters. The title character of Abarenbō Shōgun is Tokugawa Yoshimune, the eighth Tokugawa shōgun. The head of the samurai class, Yoshimune assumes the disguise of a low-ranking samurai in the service of the shogun. Similarly, Mito Kōmon is the retired vice-shogun, masquerading as a merchant.
In contrast, the coin-throwing Heiji of Zenigata Heiji is a commoner, working for the police, while Ichi (the title character of Zatoichi), a blind masseur, is an outcast, as were many disabled people in that era. In fact, working as a masseur, typically an occupation at the bottom of the professional hierarchy, was one of the few vocations available to the blind in that era. Gokenin Zankurō is a samurai but, due to his low rank and income, he has to work extra jobs that higher-ranking samurai were unaccustomed to doing.
Whether the lead role is samurai or commoner, jidaigeki usually reach a climax in an immense sword fight just before the end. The title character of a series always wins, whether using a sword or a jutte (the device police used to trap, and sometimes to bend or break, an opponent's sword).
Roles
Among the characters in jidaigeki is a parade of people with occupations unfamiliar to modern Japanese and especially to foreigners. Here are a few:
Warriors
The warrior class included samurai, hereditary members in the military service of a daimyō or the shōgun, who was a samurai himself. Rōnin, samurai without masters, were also warriors, and like samurai, wore two swords, but they were without inherited employment or status. Bugeisha were men, or in some stories women, who aimed to perfect their martial arts, often by traveling throughout the country. Ninja were the secret service, specializing in stealth, the use of disguises, explosives, and concealed weapons.
Craftsmen
Craftsmen in jidaigeki included metalworkers (often abducted to mint counterfeit coins), bucket-makers, carpenters and plasterers, and makers of woodblock prints for art or newspapers.
Merchants
In addition to the owners of businesses large and small, the jidaigeki often portray the employees. The bantō was a high-ranking employee of a merchant, the tedai, a lower helper. Many merchants employed children, or kozō. Itinerant merchants included the organized medicine-sellers, vegetable-growers from outside the city, and peddlers at fairs outside temples and shrines. In contrast, the great brokers in rice, lumber and other commodities operated sprawling shops in the city.
Governments
In the highest ranks of the shogunate were the rojū. Below them were the wakadoshiyori, then the various bugyō or administrators, including the jisha bugyō (who administered temples and shrines), the kanjō bugyō (in charge of finances) and the two Edo machi bugyō. These last alternated by month as chief administrator of the city. Their role encompassed mayor, chief of police, and judge and jury in criminal and civil matters.
The machi bugyō oversaw the police and fire departments. The police included the high-ranking yoriki and the dōshin below them; both were samurai. In jidaigeki they often have full-time patrolmen, known as okappiki, and their assistants, who were commoners. (Historically, such people were irregulars and were called to service only when necessary.) Zenigata Heiji is an okappiki. The police lived in barracks at Hatchōbori in Edo. They manned ban'ya, the watch-houses, throughout the metropolis. The jutte was the symbol of the police, from the yoriki down to the okappiki.
A separate police force handled matters involving samurai. The ōmetsuke were high-ranking officials in the shogunate; the metsuke and kachi-metsuke, lower-ranking police who could detain samurai. Yet another police force investigated arson-robberies, while Shinto shrines and Buddhist temples fell under the control of another authority. The feudal nature of Japan made these matters delicate, and jurisdictional disputes are common in jidaigeki.
Edo had three fire departments. The daimyō-bikeshi were in the service of designated daimyōs; the jōbikeshi reported to the shogunate; while the machi-bikeshi, beginning under Yoshimune, were commoners under the administration of the machi-bugyō. Thus, even the fire companies have turf wars in the jidaigeki.
Each daimyō maintained a residence in Edo, where he lived during sankin-kōtai. His wife and children remained there even while he was away from Edo, and the ladies-in-waiting often feature prominently in jidaigeki. A high-ranking samurai, the Edo-garō, oversaw the affairs in the daimyō's absence. In addition to a staff of samurai, the household included ashigaru (lightly armed warrior-servants) and chūgen and yakko (servants often portrayed as flamboyant and crooked). Many daimyōs employed doctors, goten'i; their counterpart in the shogun's household was the okuishi. Count on them to provide the poisons that kill and the potions that heal.
Other
The cast of a wandering jidaigeki encountered a similar setting in each han. There, the karō were the kuni-garō and the jōdai-garō. Tensions between them have provided plots for many stories.
Conventions
There are several dramatic conventions of jidaigeki:
The heroes often wear eye makeup, and the villains often have disarranged hair.
A contrived form of old-fashioned Japanese speech, using modern pronunciation and grammar with a high degree of formality and frequent archaisms.
In long-running TV series, like Mito Kōmon and Zenigata Heiji, the lead and supporting actors sometimes change. This is done without any rationale for the change of appearance. The new actor simply appears in the place of the old one and the stories continue. This is similar to the James Bond film series or superhero films, in contrast with e.g. the British television program Doctor Who.
In a sword fight, when a large number of villains attacks the main character, they never attack at once. The main character first launches into a lengthy preamble detailing the crimes the villains have committed, at the end of which the villains then initiate hostilities. The villains charge singly or in pairs; the rest surround the main character and wait their turn to be dispatched. Sword fights are the grand finale of the show and are conducted to specially crafted theme music for their duration.
On television, even fatal sword cuts draw little blood, and often do not even cut through clothing. Villains are chopped down with deadly, yet completely invisible, sword blows. Despite this, blood or wounding may be shown for arrow wounds or knife cuts.
In chambara films, the violence is generally considerably stylized, sometimes to such a degree that sword cuts cause geysers of blood from wounds. Dismemberment and decapitation are common as well.
Proverbs and catchphrases
Authors of jidaigeki work pithy sayings into the dialog. Here are a few:
Tonde hi ni iru natsu no mushi: Like bugs that fly into the fire in the summer (they will come to their destruction)
Shishi shinchū no mushi: A wolf in sheep's clothing (literally, a parasite in the lion's body)
Kaji to kenka wa Edo no hana: Fires and brawls are the flower of Edo
Edo happyaku yachō: "The eight hundred neighborhoods of Edo"
Tabi wa michizure: "On the road you need a companion"
The authors of series invent their own catchphrases that the protagonist says at the same point in nearly every episode. In Mito Kōmon, in which the eponymous character disguises himself as a commoner, in the final sword fight a sidekick invariably holds up an accessory bearing the shogunal crest and shouts: "Back! Can you not see this emblem?", revealing the identity of the hitherto unsuspected old man with a goatee beard. The villains then instantly surrender and beg forgiveness.
Likewise, Tōyama no Kin-san bares his tattooed shoulder and snarls: "I won't let you say you forgot this cherry-blossom blizzard!" After sentencing the criminals, he proclaims: "Case closed."
Examples
Films
Video games
The following are Japanese video games in the jidaigeki genre.
Downtown Special: Kunio-kun no Jidaigeki dayo Zen'in Shūgō—sequel to Downtown Nekketsu Monogatari (River City Ransom in America) set in feudal Japan.
Genji: Dawn of the Samurai
Hakuōki series
Kengo series
Live A Live in the "Twilight of Edo Japan" scenario
Ni-Oh series
Ninja Gaiden series "Ninja Ryukenden", "Legend of the Ninja Dragon Sword" in Japan
Nobunaga's Ambition series "Nobunaga no Yabō" in Japan
Onimusha series
Ryū ga Gotoku Kenzan!
Ryū ga Gotoku Ishin!
Samurai, a Sega arcade action game released in March 1980.
Samurai Shodown series
Samurai Warriors "Sengoku Musō" in Japan
Samurai Warriors 2 "Sengoku Musō 2" in Japan
Samurai Warriors 3 "Sengoku Musō 3" in Japan
Samurai Warriors 4 "Sengoku Musō 4" in Japan
Samurai Warriors 2: Empires series "Sengoku Musō 2: Empires" in Japan
Samurai Warriors: Chronicles "Sengoku Musō: Kuronikuru" in Japan
Samurai Warriors: Katana "Sengoku Musō: Katana" in Japan
Samurai Warriors: Spirit of Sanada "Sengoku Musō: Sanada Maru" in Japan
Sekiro: Shadows Die Twice
Sengoku Ace
Soul of the Samurai
Tenchu series
The Last Blade series
Warriors Orochi series
Way of the Samurai series
Although jidaigeki is essentially a Japanese genre, there are also Western games that adopt the same setting and conventions. Examples are Ghost of Tsushima, the Shogun: Total War series, and the Japanese campaigns of Age of Empires III.
Anime and manga
Azumi
Basilisk
Dororo
Fire Tripper
Gintama
Hakuouki Shinsengumi Kitan
Hyouge Mono
Intrigue in the Bakumatsu – Irohanihoheto
InuYasha
Kaze Hikaru
Lone Wolf and Cub
Mushishi
Ninja Resurrection
Ninja Scroll
Oi! Ryoma
Otogizoshi
Princess Mononoke
Rakudai Ninja Rantarō
Rurouni Kenshin
Samurai 7
Samurai Champloo
Samurai Executioner
Shigurui
Shōnen Onmyōji
The Yagyu Ninja Scrolls
Samurai Deeper Kyo
Sword of the Stranger
Vagabond
Yasuke
Live action television
Taiga drama series on NHK.
Prominent directors
Names are in Western order, with the surname after the given name.
Hideo Gosha
Kon Ichikawa
Hiroshi Inagaki
Akira Kurosawa
Masaki Kobayashi
Shozo Makino
Kenji Misumi
Kenji Mizoguchi
Kihachi Okamoto
Kimiyoshi Yasuda
Akira Inoue
Tomu Uchida
Eiichi Kudo
Tokuzō Tanaka
Koreyoshi Kurahara
Kazuo Ikehiro
Prominent actors
Tsumasaburō Bandō
Denjirō Ōkōchi
Chiyonosuke Azuma
Utaemon Ichikawa
Ryūtarō Ōtomo
Kanjūrō Arashi
Jūshirō Konoe
Ryūnosuke Tsukigata
Chiezō Kataoka
Ichikawa Raizō VIII
Hashizo Okawa
Yorozuya Kinnosuke
Toshiro Mifune
Shintaro Katsu
Tomisaburo Wakayama
Kōtarō Satomi
Asahi Kurizuka
Hiroki Matsukata
Masakazu Tamura
Kin'ya Kitaōji
Sonny Chiba
Hideki Takahashi
Ken Matsudaira
Influence
Star Wars creator George Lucas has admitted to being significantly inspired by the period works of Akira Kurosawa, and many thematic elements found in Star Wars bear the influence of chambara filmmaking. In an interview, Lucas specifically cited the fact that he became acquainted with the term jidaigeki while in Japan, and it is widely assumed that he took inspiration for the term Jedi from this.
References
External links
A Man, a Blade, an Empty Road: Postwar Samurai Film to 1970 by Allen White; this article discusses specific chanbara films, their distinction from regular jidai-geki, and the evolution of the genre.
Program for a jidaigeki film series sponsored by the Yale CEAS and the National Film Archive of Japan.
TOEI KYOTO STUDIO PARK
Film genres
Japanese entertainment terms
Japan in fiction
Recent human evolution
Recent human evolution refers to evolutionary adaptation, sexual and natural selection, and genetic drift within Homo sapiens populations, since their separation and dispersal in the Middle Paleolithic about 50,000 years ago. Contrary to popular belief, not only are humans still evolving, their evolution since the dawn of agriculture is faster than ever before. It has been proposed that human culture acts as a selective force in human evolution and has accelerated it; however, this is disputed. With a sufficiently large data set and modern research methods, scientists can study the changes in the frequency of an allele occurring in a tiny subset of the population over a single lifetime, the shortest meaningful time scale in evolution. Comparing a given gene with that of other species enables geneticists to determine whether it is rapidly evolving in humans alone. For example, while human DNA is on average 98% identical to chimp DNA, the so-called Human Accelerated Region 1 (HAR1), involved in the development of the brain, is only 85% similar.
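Sequence-similarity figures of the kind quoted above (98% genome-wide, 85% for HAR1) come from comparing aligned sequences position by position and counting matching letters. The short Python sketch below illustrates only that basic arithmetic: the two strings are invented toy sequences, not real human, chimpanzee, or HAR1 data, and genuine comparisons work on carefully aligned genomes with explicit handling of insertions and deletions.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions that match between two equal-length aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must already be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / len(seq_a)

# Toy example: two invented 20-letter sequences differing at 2 of 20 positions.
seq_1 = "ATGCCGTAACGTTAGCCGTA"
seq_2 = "ATGCCGTAACGTTAGCCGCC"
print(f"{percent_identity(seq_1, seq_2):.0f}% identical")  # prints "90% identical"
```

The same counting idea, applied to properly aligned real sequences, is what underlies published genome-wide and region-specific identity estimates.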
Following the peopling of Africa some 130,000 years ago, and the recent Out-of-Africa expansion some 70,000 to 50,000 years ago, some sub-populations of Homo sapiens have been geographically isolated for tens of thousands of years prior to the early modern Age of Discovery. Combined with archaic admixture, this has resulted in relatively significant genetic variation. Selection pressures were especially severe for populations affected by the Last Glacial Maximum (LGM) in Eurasia, and for sedentary farming populations since the Neolithic, or New Stone Age.
Single nucleotide polymorphisms (SNPs, pronounced 'snips') are mutations of a single genetic code "letter" in an allele that can spread across a population. In functional parts of the genome, SNPs can potentially modify virtually any conceivable trait, from height and eye color to susceptibility to diabetes and schizophrenia. Approximately 2% of the human genome codes for proteins and a slightly larger fraction is involved in gene regulation, but most of the rest of the genome has no known function. If the environment remains stable, a beneficial mutation will spread throughout the local population over many generations until it becomes a dominant trait. An extremely beneficial allele could become ubiquitous in a population in as little as a few centuries, whereas those that are less advantageous typically take millennia.
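The contrast between a few centuries and millennia can be illustrated with the standard deterministic recursion for an advantageous allele, in which a variant with selective advantage s changes frequency each generation as p' = p(1 + s) / (1 + ps). The Python sketch below is a toy calculation under stated assumptions (a starting frequency of 1%, a 99% cutoff, a 25-year generation time, and arbitrarily chosen values of s); it is not a model of any real human allele, and real populations are also shaped by drift, dominance, and changing environments.

```python
def generations_until_common(s: float, p_start: float = 0.01, p_target: float = 0.99) -> int:
    """Generations for an allele with selective advantage s to rise from p_start to
    p_target under the simple deterministic recursion p' = p * (1 + s) / (1 + p * s)."""
    p, generations = p_start, 0
    while p < p_target:
        p = p * (1 + s) / (1 + p * s)
        generations += 1
    return generations

GENERATION_YEARS = 25  # assumed average human generation time for this sketch
for s in (0.5, 0.1, 0.01):  # strong, moderate, and weak selective advantages
    g = generations_until_common(s)
    print(f"s = {s:>4}: about {g} generations, roughly {g * GENERATION_YEARS:,} years")
```

Under these assumptions, a strongly favoured allele (s = 0.5) approaches fixation in roughly two dozen generations, on the order of a few centuries, while a weakly favoured one (s = 0.01) needs on the order of a thousand generations, or tens of thousands of years.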
Human traits that emerged recently include the ability to free-dive for long periods of time, adaptations for living in high altitudes where oxygen concentrations are low, resistance to contagious diseases (such as malaria), light skin, blue eyes, lactase persistence (or the ability to digest milk after weaning), lower blood pressure and cholesterol levels, retention of the median artery, reduced prevalence of Alzheimer's disease, lower susceptibility to diabetes, genetic longevity, shrinking brain sizes, and changes in the timing of menarche and menopause.
Archaic admixture
Genetic evidence suggests that a species dubbed Homo heidelbergensis is the last common ancestor of Neanderthals, Denisovans, and Homo sapiens. This common ancestor lived between 600,000 and 750,000 years ago, likely in either Europe or Africa. Members of this species migrated throughout Europe, the Middle East, and Africa and became the Neanderthals in Western Asia and Europe while another group moved further east and evolved into the Denisovans, named after the Denisova Cave in Russia where the first known fossils of them were discovered. In Africa, members of this group eventually became anatomically modern humans. Migrations and geographical isolation notwithstanding, the three descendant groups of Homo heidelbergensis later met and interbred.
Archaeological research suggests that as prehistoric humans swept across Europe 45,000 years ago, Neanderthals went extinct. Even so, there is evidence of interbreeding between the two groups as humans expanded their presence in the continent. While prehistoric humans carried 3–6% Neanderthal DNA, modern humans have only about 2%. This seems to suggest selection against Neanderthal-derived traits. For example, the neighborhood of the gene FOXP2, affecting speech and language, shows no signs of Neanderthal inheritance whatsoever.
Introgression of genetic variants acquired by Neanderthal admixture has different distributions in Europeans and East Asians, pointing to differences in selective pressures. Though East Asians inherit more Neanderthal DNA than Europeans, East Asians, South Asians, Australo-Melanesians, Native Americans, and Europeans all share Neanderthal DNA, so hybridization likely occurred between Neanderthals and their common ancestors coming out of Africa. Their differences also suggest separate hybridization events for the ancestors of East Asians and other Eurasians.
Following the genome sequencing of three Vindija Neanderthals, a draft sequence of the Neanderthal genome was published and revealed that Neanderthals shared more alleles with Eurasian populations—such as French, Han Chinese, and Papua New Guinean—than with sub-Saharan African populations, such as Yoruba and San. According to the authors of the study, the observed excess of genetic similarity is best explained by recent gene flow from Neanderthals to modern humans after the migration out of Africa. But gene flow did not go one way. The fact that some of the ancestors of modern humans in Europe migrated back into Africa means that modern Africans also carry some genetic materials from Neanderthals. In particular, Africans share 7.2% Neanderthal DNA with Europeans but only 2% with East Asians.
Some climatic adaptations, such as high-altitude adaptation in humans, are thought to have been acquired by archaic admixture. An ethnic group known as the Sherpas from Nepal is believed to have inherited an allele called EPAS1, which allows them to breathe easily at high altitudes, from the Denisovans. A 2014 study reported that Neanderthal-derived variants found in East Asian populations showed clustering in functional groups related to immune and haematopoietic pathways, while European populations showed clustering in functional groups related to the lipid catabolic process. A 2017 study found correlation of Neanderthal admixture in modern European populations with traits such as skin tone, hair color, height, sleeping patterns, mood and smoking addiction. A 2020 study of Africans unveiled Neanderthal haplotypes, or alleles that tend to be inherited together, linked to immunity and ultraviolet sensitivity.
The gene microcephalin (MCPH1), involved in the development of the brain, likely originated from a Homo lineage separate from that of anatomically modern humans, but was introduced to them around 37,000 years ago, and has become much more common ever since, reaching around 70% of the human population at present. Neanderthals were suggested as one possible origin of this gene, but later studies did not find this gene in the Neanderthal genome, nor has it been found to be associated with cognitive ability in modern people.
The promotion of beneficial traits acquired from admixture is known as adaptive introgression.
One study concluded that only 1.5–7% of "regions" of the modern human genome are specific to modern humans. These regions have neither been altered by archaic hominin DNA due to admixture (only a small portion of archaic DNA is inherited per individual, but a large portion is inherited across populations overall) nor are they shared with Neanderthals or Denisovans in any of the genomes in the datasets used. The authors also found two bursts of changes specific to modern human genomes, involving genes related to brain development and function.
Upper Paleolithic, or the Late Stone Age (50,000 to 12,000 years ago)
Victorian naturalist Charles Darwin was the first to propose the out-of-Africa hypothesis for the peopling of the world, but the story of prehistoric human migration is now understood to be much more complex thanks to twenty-first-century advances in genomic sequencing. There were multiple waves of dispersal of anatomically modern humans out of Africa, with the most recent one dating back to 70,000 to 50,000 years ago. Earlier waves of human migrants might have gone extinct or returned to Africa. Moreover, a combination of gene flow from Eurasia back into Africa and higher rates of genetic drift among East Asians compared to Europeans led these human populations to diverge from one another at different times.
Around 65,000 to 50,000 years ago, a variety of new technologies, such as projectile weapons, fish hooks, porcelain, and sewing needles, made their appearance. Bird-bone flutes were invented 30,000 to 35,000 years ago, indicating the arrival of music. Artistic creativity also flowered, as can be seen with Venus figurines and cave paintings. Cave paintings of not just actual animals but also imaginary creatures that could reliably be attributed to Homo sapiens have been found in different parts of the world. Radioactive dating suggests that the oldest of the ones that have been found, as of 2019, are 44,000 years old. For researchers, these artworks and inventions represent a milestone in the evolution of human intelligence, the roots of story-telling, paving the way for spirituality and religion. Experts believe this sudden "great leap forward"—as anthropologist Jared Diamond calls it—was due to climate change. Around 60,000 years ago, during the middle of an ice age, it was extremely cold in the far north, but ice sheets sucked up much of the moisture in Africa, making the continent even drier and droughts much more common. The result was a genetic bottleneck, pushing Homo sapiens to the brink of extinction, and a mass exodus from Africa. Nevertheless, it remains uncertain (as of 2003) whether or not this was due to some favorable genetic mutations, for example in the FOXP2 gene, linked to language and speech. A combination of archaeological and genetic evidence suggests that humans migrated along Southern Asia and down to Australia 50,000 years ago, to the Middle East and then to Europe 35,000 years ago, and finally to the Americas via the Siberian Arctic 15,000 years ago.
DNA analyses conducted since 2007 revealed the acceleration of evolution with regard to defenses against disease, skin color, nose shapes, hair color and type, and body shape since about 40,000 years ago, continuing a trend of active selection since humans emigrated from Africa 100,000 years ago. Humans living in colder climates tend to be more heavily built compared to those in warmer climates because having a smaller surface area compared to volume makes it easier to retain heat. People from warmer climates tend to have thicker lips, which have large surface areas, enabling them to keep cool. With regard to nose shapes, humans residing in hot and dry places tend to have narrow and protruding noses in order to reduce loss of moisture. Humans living in hot and humid places tend to have flat and broad noses that moisturize inhaled air and retain moisture from exhaled air. Humans dwelling in cold and dry places tend to have small, narrow, and long noses in order to warm and moisturize inhaled air. As for hair types, humans from regions with colder climates tend to have straight hair so that the head and neck are kept warm. Straight hair also allows cool moisture to quickly fall off the head. On the other hand, tight and curly hair increases the exposed areas of the scalp, easing the evaporation of sweat and allowing heat to be radiated away while keeping itself off the neck and shoulders. Epicanthic eye folds are believed to be an adaptation protecting the eye from overexposure to ultraviolet radiation, and are presumed to be a particular trait in archaic humans from eastern and southeast Asia. A cold-adaptive explanation for the epicanthic fold is today seen as outdated by some, as epicanthic folds appear in some African populations. Dr. Frank Poirier, a physical anthropologist at Ohio State University, concluded that the epicanthic fold in fact may be an adaptation for tropical regions, and was already part of the natural diversity found among early modern humans.
Physiological or phenotypical changes have been traced to Upper Paleolithic mutations, such as the East Asian variant of the EDAR gene, dated to about 35,000 years ago in Southern or Central China. Traits affected by the mutation are sweat glands, teeth, hair thickness and breast tissue. While Africans and Europeans carry the ancestral version of the gene, most East Asians have the mutated version. By testing the gene on mice, Yana G. Kamberov and Pardis C. Sabeti and their colleagues at the Broad Institute found that the mutated version brings thicker hair shafts, more sweat glands, and less breast tissue. East Asian women are known for having comparatively small breasts and East Asians in general tend to have thick hair. The research team calculated that this gene originated in Southern China, which was warm and humid, meaning having more sweat glands would be advantageous to the hunter-gatherers who lived there. A subsequent study from 2021, based on ancient DNA samples, has suggested that the derived variant became dominant among "Ancient Northern East Asians" shortly after the Last Glacial Maximum in Northeast Asia, around 19,000 years ago. Ancient remains from Northern East Asia, such as the Tianyuan Man (40,000 years old) and the AR33K (33,000 years old) specimen, lacked the derived EDAR allele, while ancient East Asian remains after the LGM carry the derived EDAR allele. The frequency of the derived variant, known as 370A, is most highly elevated in North Asian and East Asian populations.
The most recent Ice Age peaked in intensity between 19,000 and 25,000 years ago and ended about 12,000 years ago. As the glaciers that once covered Scandinavia all the way down to Northern France retreated, humans began returning to Northern Europe from the Southwest, modern-day Spain. But about 14,000 years ago, humans from Southeastern Europe, especially Greece and Turkey, began migrating to the rest of the continent, displacing the first group of humans. Analysis of genomic data revealed that all Europeans since 37,000 years ago have descended from a single founding population that survived the Ice Age, with specimens found in various parts of the continent, such as Belgium. Although this human population was displaced 33,000 years ago, a genetically related group began spreading across Europe 19,000 years ago. Recent divergence of Eurasian lineages was sped up significantly during the Last Glacial Maximum (LGM), the Mesolithic and the Neolithic, due to increased selection pressures and founder effects associated with migration. Alleles predictive of light skin have been found in Neanderthals, but the alleles for light skin in Europeans and East Asians, KITLG and ASIP, are (as of 2012) thought to have not been acquired by archaic admixture but recent mutations since the LGM. Hair, eye, and skin pigmentation phenotypes associated with humans of European descent emerged during the LGM, from about 19,000 years ago. The associated TYRP1, SLC24A5, and SLC45A2 alleles emerge around 19,000 years ago, still during the LGM, most likely in the Caucasus. Within the last 20,000 years or so, lighter skin has evolved in East Asia, Europe, North America and Southern Africa. In general, people living in higher latitudes tend to have lighter skin. The HERC2 variation for blue eyes first appears around 14,000 years ago in Italy and the Caucasus.
Inuit adaptation to high-fat diet and cold climate has been traced to a mutation dated to the Last Glacial Maximum (20,000 years ago). Average cranial capacity among modern male human populations varies in the range of 1,200 to 1,450 cm³. Larger cranial volumes are associated with cooler climatic regions, with the largest averages being found in populations of Siberia and the Arctic. Humans living in Northern Asia and the Arctic have evolved the ability to develop thick layers of fat on their faces to keep warm. Moreover, the Inuit tend to have flat and broad faces, an adaptation that reduces the likelihood of frostbite. Both Neanderthals and Cro-Magnons had somewhat larger cranial volumes on average than modern Europeans, suggesting the relaxation of selection pressures for larger brain volume after the end of the LGM.
Australian Aboriginals living in the Central Desert, where the temperature can drop below freezing at night, have evolved the ability to reduce their core temperatures without shivering.
Holocene (12,000 years ago to present)
Neolithic or New Stone Age
Impacts of agriculture
The advent of agriculture has played a key role in the evolutionary history of humanity. Early farming communities benefited from new and comparatively stable sources of food, but were also exposed to new and initially devastating diseases such as tuberculosis, measles, and smallpox. Eventually, genetic resistance to such diseases evolved and humans living today are descendants of those who survived the agricultural revolution and reproduced. The pioneers of agriculture faced tooth cavities, protein deficiency and general malnutrition, resulting in shorter statures. Diseases are one of the strongest forces of evolution acting on Homo sapiens. As this species migrated throughout Africa and began colonizing new lands outside the continent around 100,000 years ago, they came into contact with and helped spread a variety of pathogens with deadly consequences. In addition, the dawn of agriculture led to the rise of major disease outbreaks. Malaria is the oldest known human contagion, traced to West Africa around 100,000 years ago, before humans began migrating out of the continent. Malarial infections surged around 10,000 years ago, raising the selective pressures upon the affected populations, leading to the evolution of resistance.
Examples for adaptations related to agriculture and animal domestication include East Asian types of ADH1B associated with rice domestication, and lactase persistence.
Migrations
As the ancestors of Europeans and East Asians migrated out of Africa, they were initially maladapted to their new environments and came under stronger selective pressures.
Lactose tolerance
Around 11,000 years ago, as agriculture was replacing hunting and gathering in the Middle East, people invented ways to reduce the concentrations of lactose in milk by fermenting it to make yogurt and cheese. People lost the ability to digest lactose as they matured and so could no longer consume fresh milk. Thousands of years later, a genetic mutation enabled people living in Europe at the time to continue producing lactase, an enzyme that digests lactose, throughout their lives, allowing them to drink milk after weaning and survive bad harvests.
Today, lactase persistence can be found in 90% or more of the populations in Northwestern and Northern Central Europe, and in pockets of Western and Southeastern Africa, Saudi Arabia, and South Asia. It is not as common in Southern Europe (40%) because Neolithic farmers had already settled there before the mutation existed. It is rarer in inland Southeast Asia and Southern Africa. While all Europeans with lactase persistence share a common ancestor for this ability, pockets of lactase persistence outside Europe are likely due to separate mutations. The European mutation, called the LP allele, is traced to modern-day Hungary, 7,500 years ago. In the twenty-first century, about 35% of the human population is capable of digesting lactose after the age of seven or eight. Prior to this mutation, dairy farming was already widespread in Europe.
A Finnish research team reported that the European mutation that allows for lactase persistence is not found among the milk-drinking and dairy-farming Africans, however. Sarah Tishkoff and her students confirmed this by analyzing DNA samples from Tanzania, Kenya, and Sudan, where lactase persistence evolved independently. The uniformity of the mutations surrounding the lactase gene suggests that lactase persistence spread rapidly throughout this part of Africa. According to Tishkoff's data, this mutation first appeared between 3,000 and 7,000 years ago. This mutation provides some protection against drought and enables people to drink milk without diarrhea, which causes dehydration.
Lactase persistence is a rare ability among mammals. Because it involves a single gene, it is a simple example of convergent evolution in humans. Other examples of convergent evolution, such as the light skin of Europeans and East Asians or the various means of resistance to malaria, are much more complicated.
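The speed with which a strongly favored allele such as the LP allele can spread is easy to illustrate with a standard one-locus selection model. In the Python sketch below, the starting frequency and the per-generation selective advantage are assumptions chosen only for illustration, not estimates taken from the studies above.

p = 0.001       # assumed starting frequency of the favored allele
s = 0.05        # assumed per-generation selective advantage (illustrative only)
generations = 0
while p < 0.90:
    p = p * (1 + s) / (1 + p * s)   # standard genic selection recursion
    generations += 1
print(generations, round(p, 3))
# With these assumed values the allele passes 90% frequency after about 187
# generations, on the order of 5,000 years at 25-30 years per generation,
# which shows how quickly a strong advantage can drive an allele to high frequency.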
Skin color
The light skin pigmentation characteristic of modern Europeans is estimated to have spread across Europe in a "selective sweep" during the Mesolithic (5,000 years ago). The signal of selection in favor of light skin among Europeans was one of the most pronounced, comparable to those for resistance to malaria or lactose tolerance. However, Dan Ju and Ian Mathieson caution in a study addressing 40,000 years of modern human history, "we can assess the extent to which they carried the same light pigmentation alleles that are present today," but explain that Early Upper Paleolithic hunter-gatherers "may have carried different alleles that we cannot now detect", and as a result "we cannot confidently make statements about the skin pigmentation of ancient populations."
Eumelanin, which is responsible for pigmentation in human skin, protects against ultraviolet radiation while also limiting vitamin D synthesis. Variations in skin color, due to the levels of melanin, are caused by at least 25 different genes, and variations evolved independently of each other to meet different environmental needs. Over the millennia, human skin colors have evolved to be well-suited to their local environments. Having too much melanin can lead to vitamin D deficiency and bone deformities while having too little makes the person more vulnerable to skin cancer. Indeed, Europeans have evolved lighter skin in order to combat vitamin D deficiency in regions with low levels of sunlight. Today, they and their descendants in places with intense sunlight such as Australia are highly vulnerable to sunburn and skin cancer. On the other hand, Inuit have a diet rich in vitamin D and consequently have not needed lighter skin.
Eye color
Blue eyes are an adaptation for living in regions where the amounts of light are limited because they allow more light to come in than brown eyes. They also seem to have undergone both sexual and frequency-dependent selection. A research program by geneticist Hans Eiberg and his team at the University of Copenhagen from the 1990s to 2000s investigating the origins of blue eyes revealed that a mutation in the gene OCA2 is responsible for this trait. According to them, all humans initially had brown eyes and the OCA2 mutation took place between 6,000 and 10,000 years ago. It dilutes the production of melanin, responsible for the pigmentation of human hair, eye, and skin color. The mutation does not completely switch off melanin production, however, as that would leave the individual with a condition known as albinism. Variations in eye color from brown to green can be explained via the variation in the amounts of melanin produced in the iris. While brown-eyed individuals share a large area in their DNA controlling melanin production, blue-eyed individuals have only a small region. By examining mitochondrial DNA of people from multiple countries, Eiberg and his team concluded blue-eyed individuals all share a common ancestor.
In 2018, an international team of researchers from Israel and the United States announced their genetic analysis of 6,500-year-old excavated human remains in Israel's Upper Galilee region revealed a number of traits not found in the humans who had previously inhabited the area, including blue eyes. They concluded that the region experienced a significant demographic shift 6,000 years ago due to migration from Anatolia and the Zagros mountains (in modern-day Turkey and Iran) and that this change contributed to the development of the Chalcolithic culture in the region.
Bronze Age to Medieval Era
Resistance to malaria is a well-known example of recent human evolution. This disease attacks humans early in life. Thus humans who are resistant enjoy a higher chance of surviving and reproducing. While humans have evolved multiple defenses against malaria, sickle cell anemia—a condition in which red blood cells are deformed into sickle shapes, thereby restricting blood flow—is perhaps the best known. Sickle cell anemia makes it more difficult for the malarial parasite to infect red blood cells. This mechanism of defense against malaria emerged independently in Africa and in Pakistan and India. Within 4,000 years it has spread to 10–15% of the populations of these places. Another mutation that enabled humans to resist malaria that is strongly favored by natural selection and has spread rapidly in Africa is the inability to synthesize the enzyme glucose-6-phosphate dehydrogenase, or G6PD.
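Why the sickle cell allele levels off at roughly 10-15% rather than sweeping to fixation can be illustrated with the textbook heterozygote-advantage model. The fitness costs in the Python sketch below are assumptions chosen for illustration, not measured values for these populations.

s = 0.10   # assumed fitness cost of malaria for non-carriers (AA)
t = 0.80   # assumed fitness cost of sickle cell anemia for homozygotes (SS)
q = 0.01   # starting frequency of the sickle cell allele S
for _ in range(300):
    p = 1 - q
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)   # mean fitness
    q = (p * q + q * q * (1 - t)) / w_bar                   # standard diploid recursion
print(round(q, 3), round(s / (s + t), 3))
# The frequency converges to the equilibrium s / (s + t), about 0.11 with these
# assumed costs, which is in the 10-15% range reported for malarial regions:
# heterozygotes are protected, but homozygotes pay the cost of anemia, so the
# allele is held at an intermediate frequency by balancing selection.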
A combination of poor sanitation and high population densities proved ideal for the spread of contagious diseases, which were deadly for the residents of ancient cities. Evolutionary thinking would suggest that people living in places with long-standing urbanization dating back millennia would have evolved resistance to certain diseases, such as tuberculosis and leprosy. Using DNA analysis and archeological findings, scientists from University College London and Royal Holloway studied samples from 17 sites in Europe, Asia, and Africa. They learned that, indeed, long-term exposure to pathogens has led to resistance spreading across urban populations. Urbanization is therefore a selective force that has influenced human evolution. The allele in question is named SLC11A1 1729+55del4. Scientists found that among the residents of places that have been settled for thousands of years, such as Susa in Iran, this allele is ubiquitous whereas in places with just a few centuries of urbanization, such as Yakutsk in Siberia, only 70–80% of the population have it.
Evolution to resist infection by pathogens also increased inflammatory disease risk in post-Neolithic Europeans over the last 10,000 years. A study of ancient DNA estimated the nature, strength, and time of onset of selection due to pathogens and also found that "the bulk of genetic adaptation occurred after the start of the Bronze Age, <4,500 years ago".
Adaptations have also been found in modern populations living in extreme climatic conditions such as the Arctic, as well as immunological adaptations such as resistance against prion-caused brain disease in populations practicing mortuary cannibalism, or the consumption of human corpses. Inuit have the ability to thrive on lipid-rich diets consisting largely of Arctic mammals. Human populations living in regions of high altitudes, such as the Tibetan Plateau, Ethiopia, and the Andes, benefit from a mutation that enhances the concentration of oxygen in their blood. This is achieved by having more capillaries, increasing their capacity for carrying oxygen. This mutation is believed to be around 3,000 years old.
A recent adaptation has been proposed for the Austronesian Sama-Bajau, also known as the Sea Gypsies or Sea Nomads, developed under selection pressures associated with subsisting on free-diving over the past thousand years or so. Because they are maritime hunter-gatherers, the ability to dive for long periods of time plays a crucial role in their survival. Due to the mammalian dive reflex, the spleen contracts when the mammal dives and releases oxygen-carrying red blood cells. Over time, individuals with larger spleens were more likely to survive the lengthy free-dives, and thus reproduce. By contrast, communities centered around farming show no signs of evolving to have larger spleens. Because the Sama-Bajau show no interest in abandoning this lifestyle, further adaptation is likely to continue.
Advances in the biology of genomes have enabled geneticists to investigate the course of human evolution within centuries. Jonathan Pritchard and a postdoctoral fellow, Yair Field, counted the singletons, or changes of single DNA bases, which are likely to be recent because they are rare and have not spread throughout the population. Since alleles bring neighboring DNA regions with them as they move around the genome, the number of singletons can be used to roughly estimate how quickly the allele has changed its frequency. This approach can unveil evolution within the last 2,000 years or a hundred human generations. Armed with this technique and data from the UK10K project, Pritchard and his team found that alleles for lactase persistence, blond hair, and blue eyes have spread rapidly among Britons within the last two millennia or so. Britain's cloudy skies may have played a role in that the genes for light hair could also cause light skin, reducing the chances of vitamin D deficiency. Sexual selection could also favor blond hair. The technique also enabled them to track the selection of polygenic traits—those affected by a multitude of genes, rather than just one—such as height, infant head circumferences, and female hip sizes (crucial for giving birth). They found that natural selection has been favoring increased height and larger head and female hip sizes among Britons. Moreover, lactase persistence showed signs of active selection during the same period. However, evidence for the selection of polygenic traits is weaker than those affected only by one gene.
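As a rough illustration of the idea behind counting singletons, the Python sketch below compares how many nearby singletons are carried by haplotypes with and without a focal allele. It is a toy example, not the method published by Field and Pritchard: the haplotype matrix, the carrier count, and the window size are all made up.

import numpy as np

rng = np.random.default_rng(0)
n_hap, n_sites = 200, 5000
haplotypes = (rng.random((n_hap, n_sites)) < 0.01).astype(int)  # hypothetical 0/1 haplotype matrix
focal_site = 2500
haplotypes[:, focal_site] = 0
haplotypes[:60, focal_site] = 1        # pretend 30% of haplotypes carry the focal allele

counts = haplotypes.sum(axis=0)
singletons = np.flatnonzero(counts == 1)                    # sites seen on exactly one haplotype
nearby = singletons[np.abs(singletons - focal_site) < 500]  # singletons in a window around the focal site

carriers = haplotypes[:, focal_site] == 1
density_carriers = haplotypes[carriers][:, nearby].sum(axis=1).mean()
density_others = haplotypes[~carriers][:, nearby].sum(axis=1).mean()
print(round(float(density_carriers), 2), round(float(density_others), 2))
# In real data, haplotypes carrying an allele that rose in frequency very recently
# have had less time to accumulate new mutations, so they tend to carry fewer
# nearby singletons; with this random toy matrix the two densities come out similar,
# and the point is only to show what is being counted.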
A 2012 paper studied the DNA sequence of around 6,500 Americans of European and African descent and confirmed earlier work indicating that the majority of changes to a single letter in the sequence (single nucleotide variants) were accumulated within the last 5,000-10,000 years. Almost three quarters arose in the last 5,000 years or so. About 14% of the variants are potentially harmful, and among those, 86% were 5,000 years old or younger. The researchers also found that European Americans had accumulated a much larger number of mutations than African Americans. This is likely a consequence of their ancestors' migration out of Africa, which resulted in a genetic bottleneck; there were few mates available. Despite the subsequent exponential growth in population, natural selection has not had enough time to eradicate the harmful mutations. While humans today carry far more mutations than their ancestors did 5,000 years ago, they are not necessarily more vulnerable to illnesses because these might be caused by multiple mutations. It does, however, confirm earlier research suggesting that common diseases are not caused by common gene variants. In any case, the fact that the human gene pool has accumulated so many mutations over such a short period of time—in evolutionary terms—and that the human population has exploded in that time mean that humanity is more evolvable than ever before. Natural selection might eventually catch up with the variations in the gene pool, as theoretical models suggest that evolutionary pressures increase as a function of population size.
Early Modern Period to present
A study published in 2021 states that the populations of the Cape Verde islands off the coast of West Africa have speedily evolved resistance to malaria within roughly the last 20 generations, since the start of human habitation there. As expected, the residents of the Island of Santiago, where malaria is most prevalent, show the highest prevalence of resistance. This is one of the most rapid cases of change to the human genome measured.
Geneticist Steve Jones told the BBC that during the sixteenth century, only a third of English babies survived until the age of 21, compared to 99% in the twenty-first century. Medical advances, especially those made in the twentieth century, made this change possible. Yet while people from the developed world today are living longer and healthier lives, many are choosing to have just a few or no children at all, meaning evolutionary forces continue to act on the human gene pool, just in a different way.
Natural selection affects only 8% of the human genome, meaning mutations in the remaining parts of the genome can change their frequency by pure chance through genetic drift. If natural selective pressures are reduced, then more mutations survive, which could increase their frequency and the rate of evolution. For humans, a large source of heritable mutations is sperm; a man accumulates more and more mutations in his sperm as he ages. Hence, men delaying reproduction can affect human evolution.
A 2012 study led by Augustin Kong suggests that the number of de novo (new) mutations increases by about two per year of delayed reproduction by the father and that the total number of paternal mutations doubles every 16.5 years.
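The arithmetic implied by those two figures is straightforward to sketch. In the Python example below, the baseline count and reference age are assumptions picked only for illustration, not values reported by the study; the two models simply restate the roughly two mutations per year and the 16.5-year doubling time.

def expected_paternal_mutations(age, base_age=20.0, base_count=25.0):
    # base_count at base_age is an assumed, purely illustrative baseline.
    linear = base_count + 2.0 * (age - base_age)               # ~2 extra mutations per year
    doubling = base_count * 2.0 ** ((age - base_age) / 16.5)   # total doubles every 16.5 years
    return linear, doubling

for age in (20, 30, 40, 50):
    linear, doubling = expected_paternal_mutations(age)
    print(age, round(linear, 1), round(doubling, 1))
# Under either model, a 40-year-old father passes on more than double the
# assumed count at age 20.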
For a long time, medicine has reduced the fatality of genetic defects and contagious diseases, allowing more and more humans to survive and reproduce, but it has also enabled maladaptive traits that would otherwise be culled to accumulate in the gene pool. This is not a problem as long as access to modern healthcare is maintained. But natural selective pressures will mount considerably if that is taken away. Nevertheless, dependence on medicine rather than genetic adaptations will likely be the driving force behind humanity's fight against diseases for the foreseeable future. Moreover, while the introduction of antibiotics initially reduced the mortality rates due to infectious diseases by significant amounts, abuse has led to the rise of antibiotic-resistant strains of bacteria, making many illnesses major causes of death once again.
Human jaws and teeth have been shrinking in proportion with the decrease in body size in the last 30,000 years as a result of new diets and technology. There are many individuals today who do not have enough space in their mouths for their third molars (or wisdom teeth) due to reduced jaw sizes. In the twentieth century, the trend toward smaller teeth appeared to have been slightly reversed due to the introduction of fluoride, which thickens dental enamel, thereby enlarging the teeth.
Recent research suggests that menopause is evolving to occur later. Other reported trends appear to include lengthening of the human reproductive period and reduction in cholesterol levels, blood glucose and blood pressure in some populations.
Population geneticist Emmanuel Milot and his team studied recent human evolution in an isolated Canadian island using 140 years of church records. They found that selection favored younger age at first birth among women. In particular, the average age at first birth of women from Coudres Island (Île aux Coudres), northeast of Québec City, decreased by four years between 1800 and 1930. Women who started having children sooner generally ended up with more children in total who survive until adulthood. In other words, for these French-Canadian women, reproductive success was associated with lower age at first childbirth. Maternal age at first birth is a highly heritable trait.
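The reported four-year decline can be related to the trait's heritability through the standard breeder's equation, R = h² × S. In the Python sketch below, the heritability value and the generation time are assumptions chosen only to illustrate the arithmetic, not estimates from the Île aux Coudres records.

h2 = 0.35                           # assumed narrow-sense heritability of age at first birth
observed_change_years = 4.0         # reported decline between 1800 and 1930
generations = (1930 - 1800) / 26.0  # assumed generation time of ~26 years, so ~5 generations

response_per_generation = observed_change_years / generations   # R in the breeder's equation
selection_differential = response_per_generation / h2           # S implied by R = h2 * S
print(round(response_per_generation, 2), round(selection_differential, 2))
# With these assumptions, the response is about 0.8 years per generation, implying
# that mothers who reproduced successfully were on average roughly 2.3 years younger
# at first birth than the population mean of their generation.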
Human evolution continues during the modern era, including among industrialized nations. Things like access to contraception and the freedom from predators do not stop natural selection. Among developed countries, where life expectancy is high and infant mortality rates are low, selective pressures are the strongest on traits that influence the number of children a human has. It is speculated that alleles influencing sexual behavior would be subject to strong selection, though the details of how genes can affect said behavior remain unclear.
Historically, as a by-product of the ability to walk upright, humans evolved to have narrower hips and birth canals and to have larger heads. Compared to other close relatives such as chimpanzees, childbirth is a highly challenging and potentially fatal experience for humans. Thus began an evolutionary tug-of-war (see Obstetrical dilemma). For babies, having larger heads proved beneficial as long as their mothers' hips were wide enough. If not, both mother and child typically died. This is an example of stabilizing selection, or the removal of extreme traits. In this case, heads that were too large or hips that were too small were selected against. This evolutionary tug-of-war attained an equilibrium, making these traits remain more or less constant over time while allowing for genetic variation to flourish, thus paving the way for rapid evolution should selective forces shift their direction.
All this changed in the twentieth century as Cesarean sections (or C-sections) became safer and more common in some parts of the world. Larger head sizes continue to be favored while selective pressures against smaller hip sizes have diminished. Projecting forward, this means that human heads would continue to grow while hip sizes would not. As a result of increasing fetopelvic disproportion, C-sections would become more and more common in a positive feedback loop, though not necessarily to the extent that natural childbirth would become obsolete.
Paleoanthropologist Briana Pobiner of the Smithsonian Institution noted that cultural factors could play a role in the widely different rates of C-sections across the developed and developing worlds. Daghni Rajasingam of the Royal College of Obstetricians observed that the increasing rates of diabetes and obesity among women of reproductive age also boost the demand for C-sections. Biologist Philipp Mitteroecker from the University of Vienna and his team estimated that about six percent of all births worldwide were obstructed and required medical intervention. In the United Kingdom, one quarter of all births involved a C-section while in the United States, the number was one in three. Mitteroecker and colleagues discovered that the rate of C-sections has gone up 10% to 20% since the mid-twentieth century. They argued that because the availability of safe Cesarean sections significantly reduced maternal and infant mortality rates in the developed world, they have induced an evolutionary change. However, "It's not easy to foresee what this will mean for the future of humans and birth," Mitteroecker told The Independent. This is because the increase in baby sizes is limited by the mother's metabolic capacity and by modern medicine, which makes it more likely that neonates who are born prematurely or are underweight will survive.
Researchers participating in the Framingham Heart Study, which began in 1948 and was intended to investigate the cause of heart disease among women and their descendants in Framingham, Massachusetts, found evidence for selective pressures against high blood pressure due to the modern Western diet, which contains high amounts of salt, known for raising blood pressure. They also found evidence for selection against hypercholesterolemia, or high levels of cholesterol in the blood. Evolutionary geneticist Stephen Stearns and his colleagues reported signs that women were gradually becoming shorter and heavier. Stearns argued that human culture and changes humans have made to their natural environments are driving human evolution rather than putting the process to a halt. The data indicates that the women were not eating more; rather, the ones who were heavier tended to have more children. Stearns and his team also discovered that the subjects of the study tended to reach menopause later; they estimated that if the environment remains the same, the average age at menopause will increase by about a year in 200 years, or about ten generations. All these traits have medium to high heritability. Given the starting date of the study, the spread of these adaptations can be observed in just a few generations.
By analyzing genomic data of 60,000 individuals of Caucasian descent from Kaiser Permanente in Northern California, and of 150,000 people in the UK Biobank, evolutionary geneticist Joseph Pickrell and evolutionary biologist Molly Przeworski were able to identify signs of biological evolution among living human generations. For the purposes of studying evolution, one lifetime is the shortest possible time scale. An allele associated with difficulty withdrawing from tobacco smoking dropped in frequency among the British but not among the Northern Californians. This suggests that heavy smokers—who were common in Britain during the 1950s but not in Northern California—were selected against. A set of alleles linked to later menarche was more common among women who lived for longer. An allele called ApoE4, linked to Alzheimer's disease, fell in frequency as carriers tended to not live for very long. In fact, these were the only traits Pickrell and Przeworski found that reduced life expectancy, which suggests that other harmful traits probably have already been eradicated. Only among older people are the effects of Alzheimer's disease and smoking visible. Moreover, smoking is a relatively recent trend. It is not entirely clear why such traits bring evolutionary disadvantages, however, since older people have already had children. Scientists proposed that either they also bring about harmful effects in youth or that they reduce an individual's inclusive fitness, or the tendency of organisms that share the same genes to help each other. Thus, mutations that make it difficult for grandparents to help raise their grandchildren are unlikely to propagate throughout the population. Pickrell and Przeworski also investigated 42 traits determined by multiple alleles rather than just one, such as the timing of puberty. They found that later puberty and older age of first birth were correlated with higher life expectancy.
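The core of this cohort comparison is simply asking whether an allele becomes rarer among older people than its frequency at birth would predict. The Python sketch below simulates that pattern with entirely made-up numbers; it is an illustration of the logic, not the Pickrell and Przeworski analysis.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
carrier = rng.random(n) < 0.15                       # assumed 15% carrier frequency at birth
age = rng.integers(40, 100, size=n)                  # ages of hypothetical sampled adults
penalty = np.where(carrier & (age > 70), 0.3, 0.0)   # carriers are less likely to survive past 70
alive = rng.random(n) > penalty                      # keep survivors only

for lo, hi in [(40, 60), (60, 80), (80, 100)]:
    in_bin = alive & (age >= lo) & (age < hi)
    print(lo, hi, round(float(carrier[in_bin].mean()), 3))
# The carrier frequency stays near 15% in the youngest bin but falls in the oldest
# bins, mimicking the age pattern reported for ApoE4 and the smoking-related allele.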
Larger sample sizes allow for the study of rarer mutations. Pickrell and Przeworski told The Atlantic that a sample of half a million individuals would enable them to study mutations that occur among only 2% of the population, which would provide finer details of recent human evolution. While studies of short time scales such as these are vulnerable to random statistical fluctuations, they can improve understanding of the factors that affect survival and reproduction among contemporary human populations.
Evolutionary geneticist Jaleal Sanjak and his team analyzed genetic and medical information from more than 200,000 women over the age of 45 and 150,000 men over the age of 50—people who have passed their reproductive years—from the UK Biobank and identified 13 traits among women and ten among men that were linked to having children at a younger age, having a higher body-mass index, fewer years of education, and lower levels of fluid intelligence, or the capacity for logical reasoning and problem solving. Sanjak noted, however, that it was not known whether having children actually made women heavier or being heavier made it easier to reproduce. Because taller men and shorter women tended to have more children and because the genes associated with height affect men and women equally, the average height of the population will likely remain the same. Among women who had children later, those with higher levels of education had more children.
Evolutionary biologist Hakhamanesh Mostafavi led a 2017 study that analyzed data of 215,000 individuals from just a few generations in the United Kingdom and the United States and found a number of genetic changes that affect longevity. The ApoE allele linked to Alzheimer's disease was rare among women aged 70 and over, while the frequency of the CHRNA3 gene associated with smoking addiction fell among men from middle age onward. Because these age-related frequency differences are not by themselves evidence of evolution (natural selection acts on successful reproduction rather than longevity), scientists have proposed a number of explanations. Men who live longer tend to have more children. Men and women who survive until old age can help take care of both their children and grandchildren, which benefits their descendants down the generations. This explanation is known as the grandmother hypothesis. It is also possible that Alzheimer's disease and smoking addiction are harmful earlier in life as well, but the effects are more subtle and larger sample sizes are required in order to study them. Mostafavi and his team also found that mutations causing health problems such as asthma, high body-mass index, and high cholesterol levels were more common among those with shorter lifespans, while mutations leading to delayed puberty and reproduction were more common among long-living individuals. According to geneticist Jonathan Pritchard, while the link between fertility and longevity was identified in previous studies, those did not entirely rule out the effects of educational and financial status—people who rank high in both tend to have children later in life; this seems to suggest the existence of an evolutionary trade-off between longevity and fertility.
In South Africa, where large numbers of people are infected with HIV, some have genes that help them combat this virus, making it more likely that they would survive and pass this trait onto their children. If the virus persists, humans living in this part of the world could become resistant to it in as little as hundreds of years. However, because HIV evolves more quickly than humans, it will more likely be dealt with technologically rather than genetically.
A 2017 study by researchers from Northwestern University unveiled a mutation among the Old Order Amish living in Berne, Indiana, that suppressed their chances of having diabetes and extends their life expectancy by about ten years on average. That mutation occurred in the gene called Serpine1, which codes for the production of the protein PAI-1 (plasminogen activator inhibitor), which regulates blood clotting and plays a role in the aging process. About 24% of the people sampled carried this mutation and had a life expectancy of 85, higher than the community average of 75. Researchers also found the telomeres—non-functional ends of human chromosomes—of those with the mutation to be longer than those without. Because telomeres shorten as the person ages, those with longer telomeres tend to live longer. At present, the Amish live in 22 U.S. states plus the Canadian province of Ontario. They live simple lifestyles that date back centuries and generally insulate themselves from modern North American society. They are mostly indifferent towards modern medicine, but scientists do have a healthy relationship with the Amish community in Berne. Their detailed genealogical records make them ideal subjects for research.
In 2020, Teghan Lucas, Maciej Henneberg, and Jaliya Kumaratilake presented evidence that a growing share of the human population retained the median artery in their forearms. This structure forms during fetal development but normally regresses once two other arteries, the radial and ulnar arteries, develop. The median artery allows for more blood flow and could be used as a replacement in certain surgeries. Their statistical analysis suggested that the retention of the median artery was under extremely strong selection within the last 250 years or so. People have been studying this structure and its prevalence since the eighteenth century.
Multidisciplinary research suggests that ongoing evolution could help explain the rise of certain medical conditions such as autism and autoimmune disorders. Autism and schizophrenia may be due to genes inherited from the mother and the father which are over-expressed and which fight a tug-of-war in the child's body. Allergies, asthma, and autoimmune disorders appear linked to higher standards of sanitation, which prevent the immune systems of modern humans from being exposed to various parasites and pathogens the way their ancestors' were, making them hypersensitive and more likely to overreact. The human body is not built from a professionally engineered blueprint but is a system shaped over long periods of time by evolution, with all kinds of trade-offs and imperfections. Understanding the evolution of the human body can help medical doctors better understand and treat various disorders. Research in evolutionary medicine suggests that diseases are prevalent because natural selection favors reproduction over health and longevity. In addition, biological evolution is slower than cultural evolution and humans evolve more slowly than pathogens.
Whereas in the ancestral past, humans lived in geographically isolated communities where inbreeding was rather common, modern transportation technologies have made it much easier for people to travel great distances and facilitated further genetic mixing, giving rise to additional variations in the human gene pool. Modern mobility also enables the spread of diseases worldwide, which can have an effect on human evolution. Furthermore, climate change may trigger the mass migration of not just humans but also diseases affecting humans. Besides the selection and flow of genes and alleles, another mechanism of biological evolution is epigenetics, or changes not to the DNA sequence itself, but rather the way it is expressed. Scientists already know that chronic illness and stress can trigger epigenetic changes.
See also
Blood type distribution by country
Cold and heat adaptations in humans
Evolutionary approaches to depression
Genetic genealogy
Heritability of IQ
Human mating strategies
Missing heritability problem
Race and genetics
Sexual selection in humans
Notes
References
External links
Randolph M. Nesse, Carl T. Bergstrom, Peter T. Ellison, Jeffrey S. Flier, Peter Gluckman, Diddahally R. Govindaraju, Dietrich Niethammer, Gilbert S. Omenn, Robert L. Perlman, Mark D. Schwartz, Mark G. Thomas, Stephen C. Stearns, David Valle. Making evolutionary biology a basic science for medicine. Proceedings of the National Academy of Sciences. Jan 2010, 107 (suppl 1) 1800–1807.
Scott Solomon. The Future of Human Evolution | What Darwin Didn't Know. The Great Courses Plus. February 24, 2019. (Video lecture, 32:19.)
Laurence Hurst. Is Human Evolution Speeding Up or Slowing Down? TED-Ed. September 2020. (Video lecture, 5:25)
How Humans are Shaping Our Own Evolution, National Geographic, D. T. Max, 2017
Further reading
Biological anthropology
Holocene
Human evolution
Modern human genetic history
Upper Paleolithic
| 0.761249 | 0.995531 | 0.757847 |
Marxist feminism
|
Marxist feminism is a philosophical variant of feminism that incorporates and extends Marxist theory. Marxist feminism analyzes the ways in which women are exploited through capitalism and the individual ownership of private property. According to Marxist feminists, women's liberation can only be achieved by dismantling the capitalist systems in which they contend much of women's labor is uncompensated. Marxist feminists extend traditional Marxist analysis by applying it to unpaid domestic labor and sex relations.
Because of its foundation in historical materialism, Marxist feminism is similar to socialist feminism and, to a greater degree, materialist feminism. The latter two place greater emphasis on what they consider the "reductionist limitations" of Marxist theory but, as Martha E. Gimenez notes in her exploration of the differences between Marxist and materialist feminism, "clear lines of theoretical demarcation between and within these two umbrella terms are somewhat difficult to establish."
Theoretical background in Marxism
Marxism traces the development of oppression and class division in the evolution of human society through the development and organization of wealth and production, and concludes that the evolution of oppressive societal structures parallels the evolution of oppressive family structures, i.e., the normalization of oppressing the female sex marks or coincides with the birth of oppressive society in general.
In The Origin of the Family, Private Property, and the State (1884), Friedrich Engels writes about the earliest origins of the family structure, social hierarchy, and the concept of wealth, drawing from both ancient and contemporary study. He concludes that women originally had a higher social status and equal consideration in labor, and particularly, only women were sure to share a family name. As the earliest men did not even share the family name, Engels says, they did not know for sure who their children were, nor could they benefit from inheritance.
When agriculture first became abundant and the abundance was considered male wealth, as it was sourced from the male work environment away from the home, a deeper wish for male lineage and inheritance was founded. To achieve that wish, women were not only granted their long-sought monogamy but forced into it as part of domestic servitude, while males pursued a hushed culture of "hetaerism". Engels describes this situation as coinciding with the beginnings of forced servitude as a dominant feature of society, leading eventually to a European culture of class oppression, where the children of the poor were expected to be servants of the rich.
In this book Engels rewrites a quotation from his and Marx's 1846 work, "The first division of labor is that between man and woman for the propagation of children," to say, "The first class opposition that appears in history coincides with the development of the antagonism between man and woman in monogamous marriage, and the first class oppression coincides with that of the female sex by the male."
Gender oppression is reproduced culturally and maintained through institutionalized inequality. By privileging men at the expense of women and refusing to acknowledge traditional domestic labor as equally valuable, society socializes the working-class man into an oppressive structure which marginalizes the working-class woman.
Productive, unproductive, and reproductive labor
Marx categorized labor into two categories: productive and unproductive.
Productive labor is labor that creates surplus value, e.g. production of raw materials and manufacturing products.
Unproductive labor does not create surplus value and may in fact be subsidized by it. This can include supervisory duties, bookkeeping, marketing, etc.
Early Marxist and socialist feminists began creating working women's organizations within broader worker movements in the early 1910s. These organizers considered the needs of working women to be different to those of the feminist movements that had been developed by the bourgeoisie. Separating gender from class as a means for liberation was unthinkable for these women, and they found various levels of success within communist and socialist parties for their ideas. Marxist feminists such as Mary Inman established networks of like-minded members within these organizations that were able to lobby for women's oppression to be considered a key policy issue by the 1940s.
Marxist feminist authors in the 1970s, such as Margaret Benston and Peggy Morton, relied heavily on analysis of productive and unproductive labor in an attempt to shift the perception of the time that consumption was the purpose of a family, presenting arguments for a state-paid wage to homemakers, and a cultural perception of the family as a productive entity. In capitalism, the work of maintaining a family has little material value, as it produces no marketable products. In Marxism, the maintenance of a family is productive, as it has a service value, and is used in the same sense as a commodity.
Wages for Housework
Focusing on exclusion from productive labor as the most important source of female oppression, some Marxist feminists advocated for the inclusion of domestic work within the waged capitalist economy. The idea of compensating reproductive labor was present in the writing of socialists such as Charlotte Perkins Gilman (1898) who argued that women's oppression stemmed from being forced into the private sphere. Gilman argued that conditions for women would improve when their work was located, recognized, and valued in the public sphere.
Perhaps the most influential effort to compensate reproductive labor was the International Wages for Housework Campaign, an organization launched in Italy in 1972 by members of the International Feminist Collective. Many of these women, including Selma James, Mariarosa Dalla Costa, Brigitte Galtier, and Silvia Federici published a range of sources to promote their message in academic and public domains. Despite beginning as a small group of women in Italy, the Wages for Housework Campaign was successful in mobilizing on an international level. A Wages for Housework group was founded in Brooklyn, New York, with the help of Federici. As Heidi Hartmann acknowledges (1981), the efforts of these movements, though ultimately unsuccessful, generated important discourse regarding the value of housework and its relation to the economy.
Domestic Slavery
Many Marxist feminist scholars analyzing modes of oppression at the site of production note the effect that housework has on women in a capitalist system. In Angela Davis' Women, Race and Class, the concept of housework is utilized to deconstruct the capitalist construct of gendered labor within the home and to show the ways in which women are exploited through "domestic slavery". To address this, Davis concludes that the "socialisation of housework – including meal preparation and child care – presupposes an end to the profit-motive's reign over the economy."
Attempts to address the exploitation of domestic labor were regularly met with resistance from critics who argued that this type of gendered housework should be considered a social good. In this manner, Marxist feminists argue that unquestioned domestic slavery upholds the structural inequities faced by women in all capitalist economies.
Other Marxist feminists have examined the concept of domestic work for women internationally and the role it plays in buttressing global patriarchy. In Paresh Chattopadhyay's response to Custer's Capital Accumulation and Women's Labor in Asian Economies, Chattopadhyay notes the ways in which Custer analyzes "women's labor in the garments industry in West Bengal and Bangladesh as well as in Bangladesh's agricultural sector, labor management methods of the Japanese industrial bourgeoisie and, finally, the mode of employment of the women laborers in Japanese industry" in demonstrating the ways in which the domestic sphere exhibits similar gender-based exploitation of difference. In both works, the gendered division of labor, specifically within the domestic sphere, is shown to illustrate the methods by which the capitalist system exploits women globally.
Responsibility of reproductive labor
Another solution proposed by Marxist feminists is to liberate women from their forced connection to reproductive labor. In her critique of traditional Marxist feminist movements such as the Wages for Housework Campaign, Heidi Hartmann (1981) argues that these efforts "take as their question the relationship of women to the economic system, rather than that of women to men, apparently assuming the latter will be explained in their discussion of the former." Hartmann believes that traditional discourse has ignored the importance of women's oppression as women, and instead focused on women's oppression as members of the capitalist system. Similarly, Gayle Rubin, who has written on a range of subjects including sadomasochism, prostitution, pornography, and lesbian literature, first rose to prominence through her 1975 essay "The Traffic in Women: Notes on the 'Political Economy' of Sex", in which she coins the phrase "sex/gender system" and criticizes Marxism for what she claims is its incomplete analysis of sexism under capitalism.
Through these works, Marxist feminists like Hartmann and Rubin framed the oppression of women as a social phenomenon that occurred when hierarchies based on perceived difference were enforced. This has been challenged within Marxist feminist circles as overcorrecting Marxism's issues with sexism by divorcing the social oppression of women from their economic oppression. In response to Rubin's writings, theorist Brooke Meredith Beloso argued that Marxist feminist critique "must challenge the political economy that has taken and continues to take advantage of anything it can, including feminism, in order to take advantage of millions."
Many Marxist feminists have shifted their focus to the ways in which women are now potentially in worse conditions as a result of gaining access to productive labor. Nancy Folbre proposes that feminist movements begin to focus on women's subordinate status to men both in the reproductive (private) sphere, as well as in the workplace (public sphere). In an interview in 2013, Silvia Federici urges feminist movements to consider the fact that many women are now forced into productive and reproductive labor, resulting in a double day. Federici argues that the emancipation of women cannot occur until they are free from the burden of unwaged labor, which she proposes will involve institutional changes such as closing the wage gap and implementing child care programs in the workplace. Federici's suggestions are echoed in a similar interview with Selma James (2012) and have even been touched on in recent presidential elections.
Affective and emotional labor
Scholars and sociologists such as Michael Hardt, Antonio Negri, Arlie Russell Hochschild and Shiloh Whitney discuss a new form of labor that transcends the traditional spheres of labor and which does not create product, or is byproductive. Affective labor focuses on the blurred lines between personal life and economic life. Whitney states, "The daily struggle of unemployed persons and the domestic toil of housewives no less than the waged worker are thus part of the production and reproduction of social life, and of the biopolitical growth of capital that valorizes information and subjectivities."
The concept of emotional labor, particularly the emotional labor that is present and required in pink collar jobs, was introduced by Arlie Russell Hochschild in her book The Managed Heart: Commercialization of Human Feeling (1983) in which she considers the affective labor of the profession as flight attendants smile, exchange pleasantries and banter with customers. Marxist feminists identify this as part of the social reproduction of labor, which reinforces gender and racial hierarchies.
Equal pay for equal labour
In 1977 the British feminist sociologist Veronica Beechey published 'Some Notes on Female Wage Labour', which argued that women should be understood as an unrecognised 'reserve army of labour'. In response, Floya Anthias published 'Woman and the Reserve Army of Labour: A Critique of Veronica Beechey', to query Beechey's arguments, while also recognising that Beechey's was "the most sophisticated and influential attempt to analyse women's wage labour by using or reconstituting the categories of Marx's Capital". In 1987 Verso published Beechey's collected essays on women's participation in labour as the book Unequal Work.
Intersectionality and Marxist feminism
The emergence of intersectionality as a widely popular theory of current feminism saw different responses from Marxist feminists. Traditional Marxist feminists remain critical of its reliance on bourgeois identity politics, arguing that intersectionality limits conceptions of class and power by overemphasizing the individual and not the collective proletariat experience. In this view, differing identities are to be collectively overcome in order to challenge capitalist structures.
Marxist feminists consider intersectionality as a lens to view the interaction of different aspects of identity as a result of structured, systematic oppression. Intersectional Marxist feminism challenges the separation of class and social identity as being an incomplete critique of capitalism, that reproduces bourgeois hierarchy. While class is considered the root cause of systemic oppression in this model, Marxist feminists may use an intersectional lens to understand how class is socially produced on a global scale.
Accomplishments and activism
Marxist feminists' ability to mobilize to promote social change has enabled them to engage in important activism. As activists, Marxist feminists insist "on developing politics that put women's oppression and liberation, class politics, anti-imperialism, antiracism, and issues of gender identity and sexuality together at the heart of the agenda." Though their advocacy often receives criticism, Marxist feminists challenge capitalism in ways that facilitate new discourse and shed light on the status of women. These women throughout history have used a range of approaches in fighting hegemonic capitalism, which reflect their different views on the optimal method of achieving liberation for women.
A few women who contributed to the development of Marxist feminism as a theory were Chizuko Ueno, Anuradha Ghandy, Claudia Jones, and Angela Davis. Chizuko Ueno is well known for being one of the first women to introduce Marxist feminism in Japan, as one of the primary developers of feminist theories across the country. Together with other renowned Marxist feminists, these women influenced movements in nations such as Ukraine, India, Russia, the United States, and Trinidad and Tobago.
Marxist feminism has also been influential on feminist movements that have grown out of Latin American nations. The 2010s feminist movement in Argentina used Marxist feminism to address the relationship between various social and economic factors that contributed to gender violence in the country. Argentinian feminist theorist and activist Verónica Gago wrote in her book, Feminist International: How to Change Everything, about the use of strikes to address femicide, abortion access, and gender-based economic hardship in Argentina through feminist movements such as Ni una menos.
Marxist feminist critiques of other branches of feminism
Clara Zetkin and Alexandra Kollontai were opposed to forms of feminism that reinforce class status. They did not see a true possibility to unite across economic inequality because they argued that it would be extremely difficult for an upper-class woman to truly understand the struggles of the working class. For instance, Kollontai wrote in 1909: "For what reason, then, should the woman worker seek a union with the bourgeois feminists? Who, in actual fact, would stand to gain in the event of such an alliance? Certainly not the woman worker." Kollontai avoided associating herself with the term "feminism" as she deemed it too closely related to the bourgeois feminism that shut out the capability of other classes to benefit from the term.
Kollontai was a prominent leader in the Bolshevik party in Russia, and she defended her view that capitalism had placed women within its system in an oppressive and degrading position. She recognized and emphasized the difference between proletarian and bourgeois women in society, while holding that all women under a capitalist economy were subject to oppression. One of the reasons Kollontai strictly opposed an alliance between bourgeois women and proletarian or working-class women was that the bourgeoisie was still inherently using working-class women to its advantage, thereby prolonging the unjust treatment of women in a capitalist society. She theorized that a well-balanced economic utopia depended on gender equality, but she never identified as a feminist, though she greatly influenced the feminist movement within and throughout socialism. Kollontai took a harsh stance toward the feminist movement and believed feminists to be naïve in addressing gender alone as the reason for inequality under capitalist rule. She believed that the true source of inequality was the division of classes, which directly produced gender struggles, just as it produced harsh divisions among men within the class structure. Kollontai drew on the theories and historical implications of Marxism as a background for her ideology, in which the most profound obstacle for society to address was gender inequality, something she held could never be eradicated under a capitalist society.
Because capitalism exists for private profit, Kollontai argued that the oppression of women could not and would not be abolished under a capitalist society, owing to the ways in which women's "free labor" was utilized. She also criticized the feminist movement for neglecting to emphasize how working-class women, while trying to care and provide for a family and being paid less than men, were still expected to cater to and provide for bourgeois or upper-class women, who continued to oppress working-class women by exploiting their stereotypically feminine work. Kollontai also faced harsh scrutiny as a woman leader in the male-dominated politics of the Bolshevik movement. In keeping with her unusual position for her time, she kept diaries of her plans and ideas for moving towards a more "modern" society in which socialism would help uproot capitalism and the oppression faced by different groups of gender and class. Kollontai was herself an example of a woman still oppressed by the times: she was sidelined from her own ideas and achievements simply because she was a woman, in an era when a woman in a powerful position was frowned upon and "great women" were only allowed a place alongside "great men" in history. Kollontai's most pertinent contribution to feminist socialism was her stance on reproductive rights and her view that women should be allowed the same luxuries as men in finding love, being able not only to be stable and supported but also to earn their own money and stand securely on their own two feet. She focused her attention on opening society to women's liberation from capitalist and bourgeois control, and on emphasizing the suffrage of working-class women.
Critics like Kollontai believed liberal feminism would undermine the efforts of Marxism to improve conditions for the working class. Marxists supported the more radical political program of liberating women through socialist revolution, with a special emphasis on work among women and on materially changing their conditions after the revolution. Additional liberation methods supported by Marxist feminists include the radical "Utopian Demands" coined by Maria Mies. This indication of the scope of revolution required to promote change holds that demanding anything less than complete reform will produce inadequate solutions to long-term issues.
Notable Marxist feminists
See also
References
Further reading
Cited in:
Federici, S. B. (2014). Caliban and the witch (2., rev. ed). New York, NY: Autonomedia.
Marxist & Materialist Feminism - The Feminist eZine. (n.d.). Retrieved October 3, 2019, from http://www.feministezine.com/feminist/philosophy/Marxist-Materialist-Feminism.html
Hennessy, R., & Ingraham, C. (1997). Materialist Feminism: A reader in Class, Difference, and Women's lives. Routledge.
Gago, V. (2020). Feminist international: How to change everything. Verso Books.
External links
Marxism, Liberalism, And Feminism (Leftist Legal Thought) New Delhi, Serials (2010) by Dr.Jur. Eric Engle LL.M.
Proletarian Feminism
Silvia Federici, recorded live at Fusion Arts, NYC. 11.30.04
Marxist Feminism
Feminism of the Anti-Capitalist Left by Lidia Cirillo
Feminism
Feminist theory
Style (visual arts)
In the visual arts, style is a "...distinctive manner which permits the grouping of works into related categories" or "...any distinctive, and therefore recognizable, way in which an act is performed or an artifact made or ought to be performed and made". Style refers to the visual appearance of a work of art that relates it to other works with similar aesthetic roots, by the same artist, or from the same period, training, location, "school", art movement or archaeological culture: "The notion of style has long been the art historian's principal mode of classifying works of art".
Style can be divided into the general style of a period, country or cultural group, group of artists or art movement, and the individual style of the artist within that group style. Divisions within both types of styles are often made, such as between "early", "middle" or "late". In some artists, such as Picasso for example, these divisions may be marked and easy to see; in others, they are more subtle. Style is seen as usually dynamic, in most periods always changing by a gradual process, though the speed of this varies greatly, from the very slow development in style typical of prehistoric art or Ancient Egyptian art to the rapid changes in Modern art styles. Style often develops in a series of jumps, with relatively sudden changes followed by periods of slower development.
After dominating academic discussion in art history in the 19th and early 20th centuries, so-called "style art history" has come under increasing attack in recent decades, and many art historians now prefer to avoid stylistic classifications where they can.
Overview
Any piece of art is in theory capable of being analysed in terms of style; neither periods nor artists can avoid having a style, except by complete incompetence, and conversely natural objects or sights cannot be said to have a style, as style only results from choices made by a maker. Whether the artist makes a conscious choice of style, or can identify his own style, hardly matters. Artists in recent developed societies tend to be highly conscious of their own style, arguably over-conscious, whereas for earlier artists stylistic choices were probably "largely unselfconscious".
Most stylistic periods are identified and defined later by art historians, but artists may choose to define and name their own style. The names of most older styles are the invention of art historians and would not have been understood by the practitioners of those styles. Some originated as terms of derision, including Gothic, Baroque, and Rococo. Cubism on the other hand was a conscious identification made by a few artists; the word itself seems to have originated with critics rather than painters, but was rapidly accepted by the artists.
Western art, like that of some other cultures, most notably Chinese art, has a marked tendency to revive at intervals "classic" styles from the past. In critical analysis of the visual arts, the style of a work of art is typically treated as distinct from its iconography, which covers the subject and the content of the work, though for Jas Elsner this distinction is "not, of course, true in any actual example; but it has proved rhetorically extremely useful".
History of the concept
Classical art criticism and the relatively few medieval writings on aesthetics did not greatly develop a concept of style in art, or analysis of it, and though Renaissance and Baroque writers on art are greatly concerned with what we would call style, they did not develop a coherent theory of it, at least outside architecture:
Artistic styles shift with cultural conditions; a self-evident truth to any modern art historian, but an extraordinary idea in this period [Early Renaissance and earlier]. Nor is it clear that any such idea was articulated in antiquity... Pliny was attentive to changes in ways of art-making, but he presented such changes as driven by technology and wealth. Vasari, too, attributes the strangeness and, in his view the deficiencies, of earlier art to lack of technological know-how and cultural sophistication.
Giorgio Vasari set out a hugely influential but much-questioned account of the development of style in Italian painting (mainly) from Giotto to his own Mannerist period. He stressed the development of a Florentine style based on disegno or line-based drawing, rather than Venetian colour. With other Renaissance theorists like Leon Battista Alberti he continued classical debates over the best balance in art between the realistic depiction of nature and idealization of it; this debate was to continue until the 19th century and the advent of Modernism.
The theorist of Neoclassicism, Johann Joachim Winckelmann, analysed the stylistic changes in Greek classical art in 1764, comparing them closely to the changes in Renaissance art, and "Georg Hegel codified the notion that each historical period will have a typical style", casting a very long shadow over the study of style. Hegel is often credited with inventing the German word Zeitgeist, but he never actually used the word, although in Lectures on the Philosophy of History, he uses the phrase der Geist seiner Zeit (the spirit of his time), writing that "no man can surpass his own time, for the spirit of his time is also his own spirit."
Constructing schemes of the period styles of historic art and architecture was a major concern of 19th century scholars in the new and initially mostly German-speaking field of art history, with important writers on the broad theory of style including Carl Friedrich von Rumohr, Gottfried Semper, and Alois Riegl in his Stilfragen of 1893, with Heinrich Wölfflin and Paul Frankl continuing the debate in the 20th century. Paul Jacobsthal and Josef Strzygowski are among the art historians who followed Riegl in proposing grand schemes tracing the transmission of elements of styles across great ranges in time and space. This type of art history is also known as formalism, or the study of forms or shapes in art.
Semper, Wölfflin, and Frankl, and later Ackerman, had backgrounds in the history of architecture, and like many other terms for period styles, "Romanesque" and "Gothic" were initially coined to describe architectural styles, where major changes between styles can be clearer and easier to define, not least because style in architecture is easier to replicate by following a set of rules than style in figurative art such as painting. Terms that originated to describe architectural periods were often subsequently applied to other areas of the visual arts, and then more widely still to music, literature and the general culture.
In architecture stylistic change often follows, and is made possible by, the discovery of new techniques or materials, from the Gothic rib vault to modern metal and reinforced concrete construction. A major area of debate in both art history and archaeology has been the extent to which stylistic change in other fields like painting or pottery is also a response to new technical possibilities, or has its own impetus to develop (the kunstwollen of Riegl), or changes in response to social and economic factors affecting patronage and the conditions of the artist, as current thinking tends to emphasize, using less rigid versions of Marxist art history.
Although style was well-established as a central component of art historical analysis, seeing it as the over-riding factor in art history had fallen out of fashion by World War II, as other ways of looking at art were developing, as well as a reaction against the emphasis on style; for Svetlana Alpers, "the normal invocation of style in art history is a depressing affair indeed". According to James Elkins "In the later 20th century criticisms of style were aimed at further reducing the Hegelian elements of the concept while retaining it in a form that could be more easily controlled". Meyer Schapiro, James Ackerman, Ernst Gombrich and George Kubler (The Shape of Time: Remarks on the History of Things, 1962) have made notable contributions to the debate, which has also drawn on wider developments in critical theory. In 2010 Jas Elsner put it more strongly: "For nearly the whole of the 20th century, style art history has been the indisputable king of the discipline, but since the revolutions of the seventies and eighties the king has been dead", though his article explores ways in which "style art history" remains alive, and his comment would hardly be applicable to archaeology.
The use of terms such as Counter-Maniera appears to be in decline, as impatience with such "style labels" grows among art historians. In 2000 Marcia B. Hall, a leading art historian of 16th-century Italian painting and mentee of Sydney Joseph Freedberg (1914–1997), who invented the term, was criticised by a reviewer of her After Raphael: Painting in Central Italy in the Sixteenth Century for her "fundamental flaw" in continuing to use this and other terms, despite an apologetic "Note on style labels" at the beginning of the book and a promise to keep their use to a minimum.
A rare recent attempt to create a theory to explain the process driving changes in artistic style, rather than just theories of how to describe and categorize them, is by the behavioural psychologist Colin Martindale, who has proposed an evolutionary theory based on Darwinian principles. However this cannot be said to have gained much support among art historians.
Individual style
Traditional art history has also placed great emphasis on the individual style, sometimes called the signature style, of an artist: "the notion of personal style—that individuality can be uniquely expressed not only in the way an artist draws, but also in the stylistic quirks of an author's writing (for instance)— is perhaps an axiom of Western notions of identity". The identification of individual styles is especially important in the attribution of works to artists, which is a dominant factor in their valuation for the art market, above all for works in the Western tradition since the Renaissance. The identification of individual style in works is "essentially assigned to a group of specialists in the field known as connoisseurs", a group who centre in the art trade and museums, often with tensions between them and the community of academic art historians.
The exercise of connoisseurship is largely a matter of subjective impressions that are hard to analyse, but also a matter of knowing details of technique and the "hand" of different artists. Giovanni Morelli (1816–1891) pioneered the systematic scrutiny of diagnostic minor details that revealed artists' scarcely conscious shorthand and conventions for portraying, for example, ears or hands, in Western old master paintings. His techniques were adopted by Bernard Berenson and others, and have been applied to sculpture and many other types of art, for example by Sir John Beazley to Attic vase painting. Personal techniques can be important in analysing individual style. Though artists' training before Modernism was essentially imitative, relying on taught technical methods, whether learnt as an apprentice in a workshop or later as a student in an academy, there was always room for personal variation. The idea of technical "secrets" closely guarded by the master who developed them is a long-standing topos in art history, from Vasari's probably mythical account of Jan van Eyck to the secretive habits of Georges Seurat.
However the idea of personal style is certainly not limited to the Western tradition. In Chinese art it is just as deeply held, but traditionally regarded as a factor in the appreciation of some types of art, above all calligraphy and literati painting, but not others, such as Chinese porcelain; a distinction also often seen in the so-called decorative arts in the West. Chinese painting also allowed for the expression of political and social views by the artist a good deal earlier than is normally detected in the West. Calligraphy, also regarded as a fine art in the Islamic world and East Asia, brings a new area within the ambit of personal style; the ideal of Western calligraphy tends to be to suppress individual style, while graphology, which relies upon it, regards itself as a science.
The painter Edward Edwards said in his Anecdotes of Painters (1808): "Mr. Gainsborough's manner of penciling was so peculiar to himself, that his work needed no signature". Examples of strongly individual styles include the Cubist art of Pablo Picasso, the Pop Art style of Andy Warhol, the Post-Impressionist style of Vincent van Gogh, and the drip painting of Jackson Pollock.
Manner
"Manner" is a related term, often used for what is in effect a sub-division of a style, perhaps focused on particular points of style or technique. While many elements of period style can be reduced to characteristic forms or shapes, that can adequately be represented in simple line-drawn diagrams, "manner" is more often used to mean the overall style and atmosphere of a work, especially complex works such as paintings, that cannot so easily be subject to precise analysis. It is a somewhat outdated term in academic art history, avoided because it is imprecise. When used it is often in the context of imitations of the individual style of an artist, and it is one of the hierarchy of discreet or diplomatic terms used in the art trade for the relationship between a work for sale and that of a well-known artist, with "Manner of Rembrandt" suggesting a distanced relationship between the style of the work and Rembrandt's own style. The "Explanation of Cataloguing Practice" of the auctioneers Christie's' explains that "Manner of..." in their auction catalogues means "In our opinion a work executed in the artist's style but of a later date". Mannerism, derived from the Italian maniera ("manner") is a specific phase of the general Renaissance style, but "manner" can be used very widely.
Style in archaeology
In archaeology, despite modern techniques like radiocarbon dating, period or cultural style remains a crucial tool in the identification and dating not only of works of art but all classes of archaeological artefact, including purely functional ones (ignoring the question of whether purely functional artefacts exist). The identification of individual styles of artists or artisans has also been proposed in some cases even for remote periods such as the Ice Age art of the European Upper Paleolithic.
As in art history, formal analysis of the morphology (shape) of individual artefacts is the starting point. This is used to construct typologies for different types of artefacts, and by the technique of seriation a relative dating based on style for a site or group of sites is achieved where scientific absolute dating techniques cannot be used, in particular where only stone, ceramic or metal artefacts or remains are available, which is often the case. Sherds of pottery are often very numerous in sites from many cultures and periods, and even small pieces may be confidently dated by their style. In contrast to recent trends in academic art history, the succession of schools of archaeological theory in the last century, from culture-historical archaeology to processual archaeology and finally the rise of post-processual archaeology in recent decades has not significantly reduced the importance of the study of style in archaeology, as a basis for classifying objects before further interpretation.
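The logic of frequency seriation can be sketched in a few lines of code. The following Python snippet is only an illustrative toy, not an archaeological tool: the assemblage names and sherd counts are invented, and it simply searches for the ordering of assemblages in which each style's frequency rises and then falls, the "battleship curve" pattern that seriation assumes. Which end of the resulting sequence is earliest must still be decided from independent evidence.

    from itertools import permutations

    # Hypothetical counts of three pottery styles in four excavated assemblages.
    # Both the assemblage names and the counts are invented for illustration.
    assemblages = {
        "pit_A": {"cord_marked": 12, "painted": 3,  "glazed": 0},
        "pit_B": {"cord_marked": 2,  "painted": 10, "glazed": 5},
        "pit_C": {"cord_marked": 7,  "painted": 8,  "glazed": 1},
        "pit_D": {"cord_marked": 0,  "painted": 4,  "glazed": 11},
    }
    styles = ["cord_marked", "painted", "glazed"]

    def unimodality_penalty(order):
        """Measure how far each style's frequencies depart from a single
        rise-and-fall ("battleship") curve along the proposed ordering."""
        penalty = 0
        for style in styles:
            seq = [assemblages[name][style] for name in order]
            peak = seq.index(max(seq))
            # Before the peak counts should not drop; after it, not rise.
            for i in range(1, peak + 1):
                penalty += max(0, seq[i - 1] - seq[i])
            for i in range(peak + 1, len(seq)):
                penalty += max(0, seq[i] - seq[i - 1])
        return penalty

    # Brute force is fine for a handful of assemblages; real datasets need
    # heuristic or matrix-ordering methods.
    best = min(permutations(assemblages), key=unimodality_penalty)
    print("Proposed relative sequence:", " -> ".join(best))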
Stylization
Stylization and stylized (or stylisation and stylised in (non-Oxford) British English, respectively) have a more specific meaning, referring to visual depictions that use simplified ways of representing objects or scenes that do not attempt a full, precise and accurate representation of their visual appearance (mimesis or "realistic"), preferring an attractive or expressive overall depiction. More technically, it has been defined as "the decorative generalization of figures and objects by means of various conventional techniques, including the simplification of line, form, and relationships of space and color", and observed that "[s]tylized art reduces visual perception to constructs of pattern in line, surface elaboration and flattened space".
Ancient, traditional, and modern art, as well as popular forms such as cartoons or animation, very often use stylized representations: The Simpsons, for example, uses highly stylized depictions, as does traditional African art. Picasso's paintings show a movement over his career to a more stylized representation of the human figure within the painter's style, and the Uffington White Horse is an example of a highly stylized prehistoric depiction of a horse. Motifs in the decorative arts such as the palmette or arabesque are often highly stylized versions of the parts of plants.
Even in art that is in general attempting mimesis or "realism", a degree of stylization is very often found in details, and especially figures or other features at a small scale, such as people or trees etc. in the distant background even of a large work. But this is not stylization intended to be noticed by the viewer, except on close examination. Drawings, modelli, and other sketches not intended as finished works for sale will also very often stylize.
"Stylized" may mean the adoption of any style in any context, and in American English is often used for the typographic style of names, as in "AT&T is also stylized as ATT and at&t": this is a specific usage that seems to have escaped dictionaries, although it is a small extension of existing other senses of the word.
Computer identification and recreation
In a 2012 experiment at Lawrence Technological University in Michigan, a computer analysed approximately 1,000 paintings from 34 well-known artists using a specially developed algorithm and placed them in similar style categories to human art historians. The analysis involved the sampling of more than 4,000 visual features per work of art.
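The published description does not specify the algorithm in detail, so the following Python sketch is only a schematic illustration of the general approach (numeric visual features plus unsupervised grouping), not the Lawrence Technological University method itself; the file names are hypothetical, and a real system would extract thousands of richer features (texture, edge statistics, fractal measures, and so on) rather than a simple colour histogram.

    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def colour_histogram(path, bins=8):
        """A crude visual 'signature': a joint RGB histogram of the image."""
        img = np.asarray(Image.open(path).convert("RGB").resize((256, 256)))
        hist, _ = np.histogramdd(img.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=[(0, 256)] * 3)
        return hist.flatten() / hist.sum()

    # Hypothetical file paths, used purely for illustration.
    paths = ["monet_01.jpg", "monet_02.jpg", "pollock_01.jpg", "pollock_02.jpg"]
    features = np.stack([colour_histogram(p) for p in paths])

    # Group the works by similarity of their feature vectors.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    for path, label in zip(paths, labels):
        print(f"{path}: style cluster {label}")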
Apps such as Deep Art Effects can turn photos into art-like images claimed to be in the style of painters such as Van Gogh. With the development of sophisticated text-to-image AI art software, using specifiable art styles has become a widespread tool in the 2020s.
See also
Artistic rendering
Composition (visual arts)
Mise en scène
Posthumanist art
Notes
References
"Alpers in Lang": Alpers, Svetlana, "Style is What You Make It", in The Concept of Style, ed. Berel Lang, (Ithaca: Cornell University Press, 1987), 137–162, google books.
Bahn, Paul G. and Vertut, Jean, Journey Through the Ice Age, University of California Press, 1997, , 9780520213067, google books
Blunt, Anthony, Artistic Theory in Italy, 1450–1600, 1940 (refs to 1985 edn), OUP,
Crane, Susan A. ed, Museums and Memory, Cultural Sitings, 2000, Stanford University Press, , 9780804735643, google books
Elkins, James, "Style" in Grove Art Online, Oxford Art Online, Oxford University Press, accessed March 6, 2013, subscriber link
Elsner, Jas, "Style" in Critical Terms for Art History, Nelson, Robert S. and Shiff, Richard, 2nd Edn. 2010, University of Chicago Press, , 9780226571690, google books
Gombrich, E. "Style" (1968), orig. International Encyclopedia of the Social Sciences, ed. D. L. Sills, xv (New York, 1968), reprinted in Preziosi, D. (ed.) The Art of Art History: A Critical Anthology (see below), whose page numbers are used.
Gotlieb, Marc, "The Painter's Secret: Invention and Rivalry from Vasari to Balzac", The Art Bulletin, Vol. 84, No. 3 (Sep., 2002), pp. 469–490, JSTOR
Grosvenor, Bendor, "On connoisseurship", article in Fine Art Connoisseur, 2011?, now on "art History News" website
Honour, Hugh & John Fleming. A World History of Art. 7th edition. London: Laurence King Publishing, 2009,
"Kubler in Lang": Kubler, George, Towards a Reductive Theory of Style, in Lang
Lang, Berel (ed.), The Concept of Style, 1987, Ithaca: Cornell University Press, , 9780801494390, google books; includes essays by Alpers and Kubler
Murphy, Caroline P., Review of: After Raphael: Painting in Central Italy in the Sixteenth Century by Marcia B. Hall, The Catholic Historical Review, Vol. 86, No. 2 (Apr., 2000), pp. 323–324, Catholic University of America Press, JSTOR
Nagel, Alexander, and Wood, Christopher S., Anachronic Renaissance, 2020, Zone Books, MIT Press, , google books
Preziosi, D. (ed.) The Art of Art History: A Critical Anthology, Oxford: Oxford University Press, 1998,
Rawson, Jessica, Chinese Ornament: The lotus and the dragon, 1984, British Museum Publications,
Further reading
Conkey, Margaret W., Hastorf, Christine Anne (eds.), The Uses of Style in Archaeology, 1990, Cambridge: Cambridge University Press; review by Clemency Chase Coggins in Journal of Field Archaeology (1992), from JSTOR
Davis, W. Replications: Archaeology, Art History, Psychoanalysis. Pennsylvania: Pennsylvania State University Press, 1996. (Chapter on "Style and History in Art History", pp. 171–198.)
Panofsky, Erwin. Three Essays on Style. Cambridge, Mass. The MIT Press, 1995.
Schapiro, Meyer, "Style", in Theory and Philosophy of Art: Style, Artist, and Society, New York: Georg Braziller, 1995), 51–102
Sher, Yakov A.; "On the Sources of the Scythic Animal Style", Arctic Anthropology, Vol. 25, No. 2 (1988), pp. 47–60; University of Wisconsin Press, JSTOR; pp. 50–51 discuss the difficulty of capturing style in words.
Siefkes, Martin, Arielli, Emanuele, The Aesthetics and Multimodality of Style, 2018, New York, Peter Lang,
Watson, William, Style in the Arts of China, 1974, Penguin,
Wölfflin, Heinrich, Principles of Art History. The Problem of the Development of Style in Later Art, Translated from 7th German Edition (1929) into English by M D Hottinger, Dover Publications New York, 1950 and many reprints
See also the lists at Elsner, 108–109 and Elkins
Painting
Concepts in aesthetics
Art history
Visual arts theory
Post-Classic stage
In the classification of the archaeology of the Americas, the Post-Classic stage is a term applied to some pre-Columbian cultures, typically ending with local contact with Europeans. This stage is the fifth of five archaeological stages posited by Gordon Willey and Philip Phillips' 1958 book Method and Theory in American Archaeology.
The Lithic stage
The Archaic stage
The Formative stage
The Classic stage
The Post-Classic stage
Cultures of the Post-Classic Stage are defined distinctly by possessing developed metallurgy. Social organization is supposed to involve complex urbanism and militarism. Ideologically, Post-Classic cultures are described as showing a tendency towards the secularization of society.
Post-classic Mesoamerica runs from about 900 to 1519 AD, and includes the following cultures: Aztec, Tarascans, Mixtec, Totonac, Pipil, Itzá, Kowoj, K'iche', Kaqchikel, Poqomam, Mam.
In the North American chronology, the "Post-Classic Stage" followed the Classic stage in certain areas, and typically dates from around AD 1200 to modern times.
See also
Aztec Empire
Inca Empire
References
900 establishments
1519 disestablishments
1958 introductions
1950s neologisms
History of Indigenous peoples of North America
Gender and politics
Gender and politics, also called gender in politics, is a field of study in political science and gender studies that aims to understand the relationship between peoples' genders and phenomena in politics. Researchers of gender and politics study how peoples' political participation and experiences interact with their gender identity, and how ideas of gender shape political institutions and decision-making. Women's political participation in the context of patriarchal political systems is a particular focus of study. Gender and politics is an interdisciplinary field, drawing not just from political science and gender studies but also related fields such as feminist political thought, and peoples' gendered treatment is commonly seen as intersectionally linked to their entire social identity.
History
The history of gender and politics follows a complex path shaped by political structures, personal attributes, gendered social norms, and the wider societal context. It plays out in a multitude of settings, from the halls of power in parliamentary and presidential democracies to far less predictable political turmoil. Research reveals that parliamentary and semi-presidential systems give women more room to attain top political positions than presidential systems do. Inside these centres of power, women either chair policy agendas and political parties or become general voices for women's rights. The path to power, however, requires confronting many obstacles: women contend with gender bias, culturally ingrained sexism, and a formidable fight against corruption. In regions such as Asia and Latin America in particular, family ties and connections often pave the way for female leaders, raising questions of political legacy rather than meritocracy. In the United States, patriarchal political cultures have long been sexist and discriminatory toward women, which widens the gender gap in participation. Although women's formal participation has advanced, much remains to be done, and society still tends to confine women to private spheres, limiting their involvement in public affairs. Research themes range from the social desirability effects of female leadership to the impact of gender quotas and candidate training programs, and studies have examined the contributions of party culture, recruitment paradigms, and prejudicial sentiments toward women. Such studies show how strongly individual characteristics, political institutions, and social structures shape political processes. The multifaceted demands placed on women, as healthcare providers and as principled leaders, are an indicator of both their achievements and the systemic hindrances they have faced throughout their careers.
Scope and context
Overview
The study of gender and politics is concerned with how peoples' gender structures their participation in and experience of political events, and how political institutions are encoded with gendered ideas. This study exists in the context that, historically and across countries, gender has been a core determinant of how resources are distributed, how policies are set, and who participates in political decision-making. Because of the breadth covered by the subfield, it spans numerous areas of study in politics such as international relations, comparative politics, political philosophy, and public policy, and it draws from and builds on ideas in feminist political theory like intersectionality and modern conceptions of gender. The study of gender and politics overlaps with the study of how other components of peoples' social identities interact with their political participation and experiences, with researchers particularly emphasizing that the interaction of gender and politics is intersectional and dependent on factors like peoples' race, class, and gender expression.
The study of gender and politics may also be referred to as "gender in politics", and is closely related to the study of "women and politics" or "women in politics", which may also be used synecdochically to refer to the connection between gender and politics.
A study by Amy Friesenhahn, At the Intersection of Gender and Party: Legislative Freedom, shows how intricate the study of gender and politics can be. The study delves into the dynamics of gender and politics, focusing on the behavior of politicians in relation to women-friendly districts. The findings shed light on how district characteristics influence legislative freedom and party defections on women's issue roll-call votes among both Democratic and Republican members of Congress (MCs). The study reveals that Democratic women MCs representing moderately women-friendly districts are more likely to defect from their party on women's issue roll-call votes than their partisan counterparts who are men. Among Republican MCs, by contrast, there are no apparent gender differences in party defection on women's issue roll-call votes; however, the women-friendly district effect increases the likelihood of Republicans defecting from their party on women's issues. The research also highlights the importance of district-level demographic characteristics as a conditional explanatory factor for Republican MCs exercising legislative freedom on women's issues, suggesting that as districts become more women-friendly, Republican MCs, even men, will likely serve as substantive representatives of women's interests. The study acknowledges certain limitations, such as the inability of existing measures to capture the likely success of women candidates of color, and it encourages future research to delve deeper into intersectionality and the effects of descriptive representation in terms of race and gender, combined with district characteristics, on women's issue roll-call voting behavior. The study contributes to the ongoing discourse on gender and politics, providing insights into how district characteristics intersect with gender to influence political behavior, and it underscores the nuanced nature of political representation and the need to further explore intersectional dynamics within political contexts.
Women and politics
A central concern in the study of gender and politics is the patriarchal exclusion of women from politics, which is a common but not universal theme historically and across cultures. As the involvement of women in public affairs increased across many societies during the 20th and 21st centuries, academic attention was also increasingly focused on the changing role of women in politics. For example, a common topic in the study of gender and politics is the participation of women as politicians, voters, and activists in a particular country. Since that participation exists in some political context, many scholars of gender and politics also study the political mechanisms that either enable or suppress women's participation in politics; women's social participation may increase or decrease as a result of political institutions, government policies, or social events. Another common topic of study is the impact on women of particular social policies, such as debates over women's rights, reproductive rights, women in government quotas, and policies on violence against women.
Gender and politics researchers have also analyzed the position of women in the discipline of political science, which has mirrored the broader societal trend of increasing inclusion and participation of women beginning in the second half of the 20th century.
Works and institutions
Gender and politics is the focus of the journals Politics & Gender and the European Journal of Politics and Gender. Gender and politics is also the title of a book series, Gender and Politics, which launched in 2012 and published dozens of volumes over the next several years.
There are a number of institutes and centers devoted to the study of gender and politics. The Center for American Women and Politics in the Eagleton Institute of Politics at Rutgers University is dedicated to the study of women's political participation in the United States. Other examples include the Women & Politics Institute at American University, which seeks "to close the gender gap in political leadership" by providing relevant academic training to young women, and the Center for Women in Politics and Public Policy at the University of Massachusetts Boston which has a similar mandate.
See also
Identity politics
Gender essentialism
Gender empowerment
Gender Empowerment Measure (GEM)
Sociology of gender
Anti-gender movement
Patriarchy
Sexism in American political elections
References
Subfields of political science
Gender studies
Gender and society
Feminism and society
Control (social and political)
Social privilege
6th century
The 6th century is the period from 501 through 600 in line with the Julian calendar.
In the West, the century marks the end of Classical Antiquity and the beginning of the Middle Ages. The collapse of the Western Roman Empire late in the previous century left Europe fractured into many small Germanic kingdoms competing fiercely for land and wealth. From the upheaval the Franks rose to prominence and carved out a sizeable domain covering much of modern France and Germany. Meanwhile, the surviving Eastern Roman Empire began to expand under Emperor Justinian, who recaptured North Africa from the Vandals and attempted fully to recover Italy as well, in the hope of reinstating Roman control over the lands once ruled by the Western Roman Empire.
Owing in part to the collapse of the Roman Empire, along with its literature and civilization, the sixth century is generally considered the period of the Dark Ages about which the least is known.
In its second golden age, the Sassanid Empire reached the peak of its power under Khosrau I in the 6th century. The classical Gupta Empire of Northern India, largely overrun by the Huna, ended in the mid-6th century. In Japan, the Kofun period gave way to the Asuka period. After being divided for more than 150 years among the Northern and Southern dynasties, China was reunited under the Sui dynasty toward the end of the 6th century. The Three Kingdoms of Korea persisted throughout the century. The Göktürks became a major power in Central Asia after defeating the Rouran.
In the Americas, Teotihuacan began to decline in the 6th century after having reached its zenith between AD 150 and 450. In Central America, the Maya civilization was in its Classic period.
Events
Early 6th century – Ah Suytok Tutul Xiu founds Uxmal.
Early 6th century – Archangel ivory, panel of a diptych probably from the court workshop at Constantinople, is made. It is now kept at The British Museum, London.
Early 6th century – Vienna Genesis, from "Book of Genesis", probably made in Syria or Palestine, is made. It is now kept at Österreichische Nationalbibliothek, Vienna.
By 6th century – Shilpa Shastras is written.
Early 6th century – The first academy of the East, the Academy of Gundeshapur, is founded in Iran by Khosrau I of Persia.
Early 6th century – Irish colonists and invaders, the Scots, began migrating to Caledonia (later known as Scotland). Migration from south-west Britain to Brittany.
Early 6th century – Glendalough monastery in Wicklow, Ireland, is founded by St. Kevin. Many similar foundations in Ireland and Wales.
Early 6th century – Zen Buddhism enters Vietnam from China.
Early 6th century – Haniwa, from Kyoto, is made during the Kofun period.
Early 6th century – Basilica of Sant'Apollinare in Classe's apse's mosaic is completed.
502: Chinese annals mention the existence of the Buddhist kingdom Kanto Lim in South Sumatra, presumably in the neighborhood of present-day Palembang.
507: The Franks commanded by Clovis I wrest Aquitania from Alaric II's Visigoths at the Battle of Vouillé.
518: Eastern Roman Emperor Anastasius I dies and is succeeded by Justin I.
522: Romans obtain silkworm eggs and begin silkworm cultivation
c. 524: Boethius writes his On the Consolation of Philosophy.
525: Having settled in Rome c. 500, Scythian monk Dionysius Exiguus invents the Anno Domini era calendar based on the estimated birth year of Jesus Christ.
527: Justinian I succeeds Justin I as Emperor of the Eastern Roman Empire.
529: Saint Benedict of Nursia founds the monastery of Monte Cassino in Italy.
532: Nika riots in Constantinople; the cathedral is destroyed. They are put down a week later by Belisarius and Mundus; up to 30,000 people are killed in the Hippodrome.
535: Postulated volcanic eruption in the tropics which causes several years of abnormally cold weather, resulting in mass famine in the Northern Hemisphere. (See Extreme weather events of 535–536.)
537: Battle of Camlann, final battle of legendary King Arthur.
541–542: First pandemic of bubonic plague (Plague of Justinian) hits Constantinople and the rest of Byzantine Empire.
543/544: One of Justinian's edicts leads to the Three-Chapter Controversy.
545: Nubian Kingdom of Nobatia converts to Christianity.
Mid-6th century – Cassiodorus founds a cenobitic monastery and scriptorium at Vivarium in Italy
Mid-6th century – Buddhist Jataka stories are translated into Persian by order of the Zoroastrian king Khosrau.
Mid-6th century – Cave-Temple of Shiva at Elephanta Caves, Maharashtra, India, is built. Post-Gupta period.
Mid-6th century – Eternal Shiva, rock-cut relief in the Cave-Temple of Shiva at Elephanta Caves, is made
Mid-6th century – The Jogeshwari Caves, excavated during the 6th century AD, are among the finest specimens of Brahmanical rock-cut architecture and bear similarities to the Elephanta Caves (Cave No. 1) and Dhumar Lena (Cave No. 29) at Ellora Caves.
Second half of 6th century – Virgin and Child with Saints and Angels, icon, is made. It is now kept at Saint Catherine's Monastery, Egypt.
550: Kingdom of Funan dies out.
551: Bumin Khagan founded the Turkic Khaganate in Central Asia
552: Buddhism introduced to Japan from Baekje during the Asuka period.
553: Second Council of Constantinople
554: Eviction of the Ostrogoths from Rome, and the re-unification of all Italy under Roman rule.
561 to 592: Buddhist monk Jnanagupta translates 39 sutras from Sanskrit to Chinese.
563: The monastery on Iona is founded by St. Columba.
566: Birth of Lǐ Yuān, founder of the Tang dynasty and Emperor of China under the name of Gaozu (618-626)
568: Lombards invade Italy and establish a federation of dukedoms under a king.
569: Nubian kingdom of Alodia converts to Christianity.
569: Nubian kingdom of Makuria converts to Christianity.
570: Birth of the last Islamic Prophet Muhammad.
574: The Roman Empire is invaded by various Slavs, who plunder the Balkans.
577: The Chen dynasty, the last of China's Southern dynasties, invents matches.
578: The world's oldest ongoing company, Kongō Gumi, is founded in Osaka, Japan.
579–590: Reign of Persian Shah Hormizd IV.
582–602: Reign of Roman Emperor Maurice.
585: Suebian Kingdom conquered by Visigoths in Spain.
587: Reccared, king of the Visigoths in Spain, converts to Catholicism.
588: Shivadeva ascends the throne of the Lichchhavi dynasty in Nepal.
589: Third Council of Toledo adds the "filioque" clause to the Nicene Creed in Spain.
589: China reunified under the Sui dynasty (589 – 618).
590: Gregory the Great succeeds Pope Pelagius II (who dies of plague) as the 64th pope.
594: Beginning of the Bengali Calendar or (বঙ্গাব্দ Bônggabdô or Banggabda).
595: Pope Gregory sends Roman monks led by Augustine to England.
Inventions, discoveries, introductions
Dionysius Exiguus creates the Anno Domini system, inspired by the birth of Jesus, in 525. This is the system upon which the Gregorian calendar and Common Era systems are based.
The technology of cutting and polishing diamonds was invented in India; the Ratna Pariksha, a text dated to the 6th century, discusses diamond cutting.
Backgammon (nard) invented in Persia by Burzoe.
Chess, as chaturanga, entered Persia from India and was modified to shatranj.
Breast-strap horse harness in use in Frankish kingdom.
Byzantine Empire acquires silk technology from China.
Chen dynasty from China invents matches in 577.
Silk is a protected palace industry in the Byzantine Empire.
Vaghbata writes Indian medical books.
In 589 AD, the Chinese scholar-official Yan Zhitui makes the first reference to the use of toilet paper in history.
Significant to the history of agriculture, the Chinese author Jia Sixia wrote the treatise Qi Min Yao Shu in 535, and although it quotes 160 previous Chinese agronomy books, it is the oldest extant Chinese agricultural treatise. In over one hundred thousand written Chinese characters, the book covered land preparation, seeding, cultivation, orchard management, forestry, animal husbandry, trade, and culinary uses for crops.
Notes
References
1st millennium
6th century
Hereditary monarchy
A hereditary monarchy is a form of government and succession of power in which the throne passes from one member of a ruling family to another member of the same family. A series of rulers from the same family would constitute a dynasty. It is historically the most common type of monarchy and remains the dominant form in extant monarchies. It has the advantages of continuity in the concentration of power and wealth, and predictability about who can be expected to control the means of governance and patronage. Provided that a monarch is competent, not oppressive, and maintains an appropriate dignity, it might also offer the stabilizing factors of popular affection for and loyalty to a ruling family. The adjudication of what counts as oppressive, dignified and popular tends to remain in the purview of the monarch. A major disadvantage of hereditary monarchy arises when the heir apparent may be physically or temperamentally unfit to rule. Other disadvantages include the inability of a people to choose their head of state, the ossified distribution of wealth and power across a broad spectrum of society, and the continuation of outmoded religious and social-economic structures mainly for the benefit of monarchs, their families, and supporters.
In most extant hereditary monarchies, the typical order of succession uses some form of primogeniture, but there exist other methods such as seniority and tanistry (in which an heir-apparent is nominated from among qualified candidates). Research shows that hereditary regimes, in particular primogeniture, are more stable than forms of authoritarian rule with alternative succession arrangements.
Succession
Theoretically, when the monarch of a hereditary monarchy dies or abdicates, the crown typically passes to the next generation of the family. If no qualified child exists, the crown may pass to a brother, sister, nephew, niece, cousin, or other relative, in accordance with a predefined order of succession, often enshrined in legislation. Such a process establishes who will be the next monarch beforehand and avoids disputes among members of the royal family. Usurpers may resort to inventing semi-mythical genealogies to bolster their respectability.
Historically, there have been differences in systems of succession, mainly revolving around the question of whether succession is limited to males, or whether females are also eligible (historically, the crown often devolved on the eldest surviving male child, as ability to lead an army in battle was a requisite of kingship). Agnatic succession refers to systems where females are neither allowed to succeed nor to transmit succession rights to their male descendants (as according to the Salic law). An agnate is a kinsman with whom one has a common ancestor by descent in an unbroken male line. Cognatic primogeniture allows both male and female descendants to succeed, but males are usually given preference. In absolute primogeniture, the eldest child can succeed to the throne regardless of sex; this system was adopted in 2011 by the monarchies in the Commonwealth (though not retrospectively affecting the order of succession). Another factor which may be taken into account is the religious affiliation of the candidate or the candidate's spouse, specifically where the monarch also has a religious title or role; for example, the British monarch has the title of supreme governor of the Church of England and may not profess Roman Catholicism.
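How these rules differ in practice can be illustrated with a minimal sketch. The following Python snippet uses an invented family and ignores real-world complications such as religion, legitimacy, abdication, or exclusion; it simply walks a family tree depth-first, which is the structural core of primogeniture, and compares absolute with male-preference ordering.

    from dataclasses import dataclass, field

    @dataclass
    class Person:
        name: str
        sex: str                                       # "M" or "F"
        children: list = field(default_factory=list)   # listed in order of birth

    def line_of_succession(monarch, rule="absolute"):
        """Depth-first walk of the family tree: each child's entire line
        precedes younger siblings, as primogeniture requires."""
        def sort_children(children):
            if rule == "male_preference":
                # Brothers (in birth order) before sisters (in birth order).
                return sorted(children, key=lambda c: c.sex != "M")
            return list(children)   # absolute: birth order only, regardless of sex

        order = []
        def visit(person):
            for child in sort_children(person.children):
                order.append(child.name)
                visit(child)
        visit(monarch)
        return order

    # A hypothetical royal family, purely for illustration.
    heir_b = Person("Beatrice", "F", [Person("Daniel", "M")])
    heir_c = Person("Charles", "M")
    queen = Person("Alexandra", "F", [heir_b, heir_c])

    print(line_of_succession(queen, "absolute"))         # ['Beatrice', 'Daniel', 'Charles']
    print(line_of_succession(queen, "male_preference"))  # ['Charles', 'Beatrice', 'Daniel']

Agnatic (Salic) succession would go further and drop female lines from the tree entirely before the walk begins.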
Elective hereditary monarchy
Elective monarchy can function as de facto hereditary monarchy. A specific type of elective monarchy known as tanistry limits eligibility to members of the ruling house. But hereditary succession can also occur in practice despite any such legal limitations. For example, if the majority of electors belong to the same house, then they may elect only family members. Or a reigning monarch might have sole power to elect a relative. Many late-medieval countries of Europe were officially elective monarchies, but in fact pseudo-elective; most transitioned into officially hereditary systems in the early modern age. Exceptions include the Holy Roman Empire and the Polish–Lithuanian Commonwealth.
See also
List of hereditary monarchies
Heir presumptive
References
Succession
Proto-globalization
Proto-globalization or early modern globalization is a period of the history of globalization roughly spanning the years between 1500 and 1800, following the period of archaic globalization. First introduced by historians A. G. Hopkins and Christopher Bayly, the term describes the phase of increasing trade links and cultural exchange that characterized the period immediately preceding the advent of so-called "modern globalization" in the 19th century.
Proto-globalization distinguished itself from modern globalization on the basis of expansionism, the method of managing global trade, and the level of information exchange. The period is marked by the shift of hegemony to Western Europe, the rise of larger-scale conflicts between powerful nations such as the Thirty Years' War, and demand for commodities, most particularly slaves. The triangular trade made it possible for Europe to take advantage of resources within the western hemisphere. The transfer of plant and animal crops and epidemic diseases associated with Alfred Crosby's concept of the Columbian exchange also played a central role in this process. Proto-globalization trade and communications involved a vast group including European, Middle Eastern, Indian, Southeast Asian, and Chinese merchants, particularly in the Indian Ocean region.
The transition from proto-globalization to modern globalization was marked by a more complex global network based on both capitalistic and technological exchange; however, it led to a significant collapse in cultural exchange.
Description
Although the 16th, 17th and 18th centuries saw a rise in Western imperialism in the world system, the period of proto-globalization involved increased interaction between Western Europe and the systems that had formed between nations in East Asia and the Middle East. Proto-globalization was a period of reconciling the governments and traditional systems of individual nations, world regions, and religions with the "new world order" of global trade, imperialism and political alliances, what historian A. G. Hopkins called "the product of the contemporary world and the product of distant past."
According to Hopkins, "globalization remains an incomplete process: it promotes fragmentation as well as uniformity; it may recede as well as advance; its geographical scope may exhibit a strong regional bias; its future direction and speed cannot be predicted with confidence—and certainly not by presuming that it has an 'inner logic' of its own." Before proto-globalization, globalizing networks were the product of "great kings and warriors searching for wealth and honor in fabulous lands, by religious wanderers,...and by merchant princes". Proto-globalization held on to and matured many aspects of archaic globalization such as the importance of cities, migrants, and specialization of labor.
Proto-globalization was also marked by two main political and economic developments: "the reconfiguration of the state systems, and the growth of finance, services, and pre-industrial manufacturing". A number of states at the time began to "strengthen their connections between territory, taxation, and sovereignty" despite their continuing monopoly of their citizens' loyalties. The process of globalization during this time was heavily focused on the material world and the labor needed for its production. The proto-globalization period was a time of "improved efficiency in the transactions sector", generating goods such as sugar, tobacco, tea, coffee, and opium on a scale unlike anything archaic globalization had possessed. The improvement of economic management also extended to transportation, which created a complex set of connections between the West and East. The expansion of trade routes led to the "green revolution" based on the plantation system and the export of slaves from Africa.
Precursors
During the pre-modern era early forms of globalization were already beginning to affect a world-system, marking a period that historian A. G. Hopkins has called archaic globalization. The world system leading up to proto-globalization was one that hinged on one or more hegemonic powers assimilating neighboring cultures into their political system, waging war on other nations, and dominating world trade.
A major hegemony in archaic globalization was the Roman Empire, which united the Greater Mediterranean Area and Western Europe through a long-running series of military and political campaigns expanding the Roman system of government and Roman values to more underdeveloped areas. Conquered areas became provinces of the empire and Roman military outposts in the provinces became cities with structures designed by the best Roman architects, which hastened the spread of Rome's "modern" way of life while absorbing the traditions and beliefs of these native cultures. Nationalist ideology as well as propaganda supporting the Roman Army and military success, bravery, and valor also strengthened the Roman Empire's spread across Western Europe and the Mediterranean Area. The Roman Empire's well-built aqueducts and cities and sturdy, effective naval fleets, ships and an organized system of paved roads also facilitated fast, easy travel and better networking and trade with neighboring nations and the provinces.
During the Han dynasty under Han Wudi (141–87 BCE), the Chinese government united and became powerful enough that China began to successfully indulge in imperialistic endeavors with its neighboring nations in East Asia. Han China's imperialism was a peaceful tributary system, which focused mainly on diplomatic and trade relations. The growth of the Han Empire facilitated trade and cultural exchange with virtually all of the known world as reached from Asia, and Chinese silk spread through Asia and Inner Asia and even to Rome. The early Tang dynasty saw China as even more responsive to foreign influence and the Tang dynasty becoming a great empire. Overseas trade with India and the Middle East grew rapidly, and China's East and Southern Coasts, once distant and unimportant regions, gradually became chief areas of foreign trade. During the Song dynasty China's navy became more powerful thanks to technological improvements in shipbuilding and navigation, and China's maritime commerce also increased exponentially.
China's power began to decline in the 16th century when the rulers of the subsequent Ming dynasty neglected the importance of China's trade from sea power. The Ming rulers let China's naval dominance and its grip on the spice trade slacken, and the European powers stepped in. Portugal, with its technological advances in naval architecture, weaponry, seamanship and navigation, took over the spice trade. With this, European imperialism and the age of European hegemony was beginning, although China still retained power of many of its areas of trade.
Changes in trade systems
One of the most significant differences between proto-globalization and archaic globalization was the switch from inter-nation trading of rarities to the trading of commodities. During the 12th and 13th centuries it was common to trade items that were foreign and rare to different cultures. A popular trade during archaic globalization involved European merchants sailing to areas of India or China in order to purchase luxury items such as porcelain, silk, and spices. Traders of the pre-modern period also traded drugs and certain foods such as sugarcane and other crops.
While these items were not rarities as such, the drugs and food traded were valued for the health and function of the human body. It was more common during proto-globalization to trade various commodities such as cotton, rice, and tobacco.
The shift into proto-globalization trade signified the "emergence of the modern international order" and the development of early capitalist expansion which began in the Atlantic during the 17th century and spread throughout the world by 1830.
Atlantic slave trade
One of the main reasons for the rise of commodities was the growth of the slave trade, specifically the Atlantic slave trade. Before the 15th century the use of slaves was only a minor practice in the labor force and was not crucial to the development of products and goods, but owing to labor shortages the use of slaves rose. After 1500, the settlement of island despots and plantation centers in São Tomé began trade relations with the Kingdom of Kongo, which brought Central Africa into the Atlantic slave trade. The Portuguese maintained an export of slaves from Agadir, an Atlantic port, for most of the early 16th century. The Portuguese settlement of the Brazilian subcontinent also opened the American slave market, allowing the direct shipment of slaves from São Tomé to America. European slave ships took their slaves to the Iberian Peninsula; however, slave owners in Europe were found only in wealthy, aristocratic families, because of the high cost of slaves and the cheap peasant labor available for agricultural use, and, as its name implies, the first use of African slaves in plantation work arose in the Atlantic islands, not in continental Europe. Approximately 10.2 million Africans survived the Atlantic crossing between 1450 and 1870. The large slave population thrived due to the demand for production from Europeans, who found it cheaper to import crops and goods rather than produce them on their own.
Many wars were fought during the 17th century between the slave trading companies for areas that were economically dependent on slaves. The Dutch West India Company (GWC) gained many slaves through these wars (specifically with Portugal) by captains who had captured enemy ships; between 1623 and 1637, 2,336 were captured and sold in the New World by the GWC. The selling of slaves to the New World opened up trading posts in North America; the Dutch opened their first on Manhattan Island in 1613. The GWC had also opened a trading post in the Caribbean and the company was also carrying slaves to the colony of New Netherland.
The use of slaves brought many benefits to the economies and production of the areas of trade. The emerging popularity of coffee, tea, and chocolate in Europe led to demand for the production of sugar; 70 percent of slaves were used solely for the labor-intensive production of the crop. The slave trade was also beneficial to trading voyages, because the constant sailing allowed investors to buy small shares of many ships at the same time. Hopkins states that many scholars, himself among them, argue that the slave trade was essential to the wealth of many nations, during and after proto-globalization, and that without the trade production would have plummeted. The investment in ships and nautical technology was the catalyst for the complex trade networks that developed throughout proto-globalization and into modern globalization.
Plantation economy
Consequently, the rise of slavery was driven by the increasing volume of crops being produced and traded, and more specifically by the rise of the plantation economy. The growth of plantations was the main reason for the trade of commodities during proto-globalization. Plantations were used by the exporting countries (mainly in the Americas) to grow the raw materials needed to manufacture the goods which were traded back into the plantation economy. Commodities that grew in trade due to the plantation economy were mainly tobacco, cotton, sugarcane, and rubber.
Tobacco
During the second half of the 16th century, Europeans' interest in the New World revolved around gold and silver rather than tobacco. This lack of interest stemmed from the fact that the Amerindians controlled the tobacco industry; as long as they controlled the supply, there was no need to incorporate tobacco into European commercial capitalism.
Tobacco was a new trade commodity and was in high demand in the 17th century owing to the rise of the plantations. It began to be used as a monetary standard, which is how the term "cash crop" originated.
The first exports of tobacco from the then-colonies of the United States (specifically Virginia) to London demonstrated the fortunes to be made in the English enterprise, and by 1627 Virginia tobacco was being shipped to London at 500,000 pounds a shipment. By 1637, tobacco had become the colony's currency, and by 1639 Maryland was exporting 100,000 pounds of tobacco to London. English success with tobacco caught the attention of many Europeans, particularly the colonists of the French islands of Martinique and Guadeloupe. These islands soon became wealthy from tobacco production, and by 1671 roughly one-third of the acreage devoted to cash crops on the islands was planted with tobacco. While the cultivation of tobacco thrived, production suffered severe depressions in later years because of the greater profits made from sugar. According to one account of Barbadian exports, 82 percent of the island's export value came from sugar and less than one percent from tobacco.
Sugarcane
Another prominent commodity was sugar, produced from the crop sugarcane. Sugarcane originated in India, from where it was taken and planted in various islands. Once it reached the peoples of the Iberian Peninsula, it was carried across the Atlantic Ocean. In the 16th century, the first sugar plantations were started in the New World, marking the last great stage of the cane's migration westward. Because of the difficulty of transporting sugar in its raw form, it was not associated with commerce until refining came into play; refining became the center of the industry. Venice was the center of refining during the Middle Ages, which made it the chief trader of sugar: although the Spanish and Portuguese held the monopolies over the sugarcane fields in the Americas, they were supplied by Venice. In the 17th century, England displaced Venice as the center for refining and cultivating sugar, a leadership it maintained until the rise of the French industry. Sugar was still considered a luxury until the latter half of the 17th century, when it began to be produced in such quantities that it became available to the mass of the English people. This turn of events made sugar a true commodity, used not only on special occasions but in everyday meals.
Hostilities, war, and imperialism
Proto-globalization differed from modern globalization in its practices of expansionism, its methods of managing global trade, and its finance and commercial innovation. As expansionism shifted to the large nations of Western Europe, those nations began competing for world domination. Larger-scale conflicts between these powerful nations over expanding their wealth led them to take control of one another's territory and to move the products and accumulated wealth of the conquered regions back to the sovereign country. Although conflicts occurred throughout the world between 1600 and 1800, European powers found themselves far better equipped to handle the pressures of war. A quote by Christopher Alan Bayly gives a better interpretation of these advantages by stating, "Europeans became much better at killing people. The European ideological wars of the 17th century had created links between war, finance, and commercial innovation which extended all these gains. It gave the Continent a brute advantage in world conflicts which broke out in the 18th century. Western European warfare was peculiarly complicated and expensive, partly because it was amphibious." These battle-tested nations fought for their own needs, but in practice their successes advanced Europe's position in the global market. Each of the following sections sheds light on the history of several key engagements. Whether a war was religious or commercial, its impact was felt throughout the world. British victories during the Anglo-Dutch Wars led to their dominance in commercial shipping and naval power. The stage was set for future conflicts between Britain and foreign nations, as well as for domestic frustration with "the motherland" on the North American continent. The French and Indian War, fought between the European powers of France and Great Britain, ended in a British victory and continued British dominance in maritime enterprise. The American Revolutionary War marked the beginning of the shift in power over foreign markets.
English Civil War
The English Civil War was a battle over not only religious and political beliefs but economic and social ones as well. The war, between Parliamentarians and Royalists, took place from 1642 to 1651 and was broken into several separate engagements. Charles I and his supporters fought the first two periods of the war. Charles I had earlier dissolved Parliament, which was not called into session again for over ten years; the dismissal came after supporters of the Long Parliament tried to install two resolutions into English law. One called for consequences against individuals who taxed without the consent of Parliament and labeled them enemies of England, while the other stated that innovations in religion would earn the same label. Each of these policies was aimed at Charles I, casting him as an inferior leader as well as a supporter of Catholicism. This prompted the Puritan Revolt and eventually led to the trial and execution of Charles I for treason. The final stage of the English Civil War came in 1649 and lasted until 1651. This time King Charles II, the son of Charles I, led supporters against Parliament. The Battle of Worcester, fought in 1651, marked the end of the English Civil War: Charles II and the other Royalist forces were defeated by the Parliamentarians under their leader Oliver Cromwell. The war took England in new directions in religious and political as well as economic and social terms, and it constitutionally established that no British monarch was permitted to rule without first having been approved by Parliament.
First Anglo-Dutch War
The First Anglo-Dutch War was a naval conflict between England and the Dutch Republic from 1652 to 1654, fought over competition in commercial maritime trade and focused mainly on the East Indies. The first Navigation Act forbade the import of goods unless they were transported either in English vessels or in vessels from the country of origin. This policy was aimed against the Dutch, and fighting broke out on May 19, 1652 with a small skirmish between the Dutch and English fleets. The war officially began in July and fighting continued for two years. The Battle of Scheveningen, also referred to as Texel, ended the serious fighting of the war and took place in July 1653. The Treaty of Westminster, signed in April 1654, ended the war and obliged the Dutch Republic to respect the Navigation Act as well as compensate England for the war.
French and Indian War
The French and Indian War was fought between Great Britain and France, along with the numerous Native American nations allied with each side. It was the North American theater of the Seven Years' War being fought in Europe at the time. A growing population in British territory throughout North America forced expansion westward, which was met with resistance from the French and their Native American allies. French forces began entering British territory, building numerous forts in preparation to defend the newly acquired land. The beginning of the war favored the French and their Native American allies, who were able to defeat British forces time and again, and it was not until 1756 that the British were able to hold off their opposition. Pittsburgh was a center of fighting during the war, chiefly because of its geographical location at the point where three rivers meet: the Allegheny, the Monongahela, and the Ohio. The site of present-day Pittsburgh provided an advantage in naval control; ownership of this point brought not only naval dominance but also expanded economic ventures, enabling shipments to be sent and received with relative ease. French and British forces both claimed the region, the French building Fort Duquesne and the British Fort Pitt. Fort Pitt was established in 1758 after French forces abandoned and destroyed Fort Duquesne. The French and Indian War came to an end in 1763, after British forces secured Quebec and Montreal from the French, and on February 10 the Treaty of Paris was signed. The French were forced to surrender their territory in North America, giving Britain control all the way to the Mississippi River. The effects of this war were heavily felt in Britain's North American colonies, as England imposed many taxes on the colonists in order to control the newly acquired territory. These tensions would soon culminate in a war for independence as well as a shift in power for dominance in the economic world.
American Revolutionary War
The American Revolutionary War was fought between England and the inhabitants of the 13 British colonies on the North American continent, who desired independence from British rule, which the rebels viewed as tyrannical; these colonists would ultimately become the country's first Americans. The war lasted eight years, from 1775 to 1783. Its first major engagement was the Battle of Breed's Hill, now known as Bunker Hill, where over 1,150 British soldiers were killed or wounded, almost half of the entire British force present at the engagement, while approximately 450 independence-seeking colonists were killed, wounded, or captured. The British were nevertheless able to take the ground and push the newly formed Continental Army back to Boston, which also soon fell to British forces. Before Bunker Hill, the Battles of Lexington and Concord in April 1775 had seen British troops begin their assault on the American colonies: British troops searching for colonial supply depots were met by heavy resistance and turned back at Concord by outnumbering Minutemen forces. On July 4, 1776, the Declaration of Independence was signed by the Second Continental Congress, officially declaring the colonies of North America a sovereign nation free from England's rule. The Congress also authorized funding for a Continental Army, the first instance of an American political body handling military affairs. The British dominated the beginning of the war, holding off Continental regulars and militia and gaining vast amounts of territory throughout North America. The tide began to turn for the colonists in 1777 with their first major victory over British forces at the Battles of Saratoga. Victories then swung back and forth between the British and the colonists, but the colonists' alliance with France in 1778 leveled the playing field and aided the final push to defeat the British Army and Navy. In 1781, American and French forces trapped the escaping southern British army at Yorktown, ending the major fighting of the Revolution. The Treaty of Paris, signed in 1783, recognized the American colonies as an independent nation. The newly formed United States would undergo numerous transitions on its way to becoming one of the world's leading economic and military powers.
Treaties and agreements
Much of the trading during the proto-globalization time period was regulated by Europe. Globalization from an economic standpoint relied on the East India Companies. These were enterprises formed by several Western European nations in the 17th and 18th centuries, initially created to further trade in the East Indies. The companies controlled trading from India to East and Southeast Asia.
One of the key contributors to globalization was the triangular trade and the way it connected the world. The triangular (or triangle) trade was a system that linked three areas of the world through trade: once traded, items and goods were shipped onward to other parts of the world, making it a key element of global trade. The system was run by Europeans and increased their global power.
Europeans would sail to the West African coast and trade manufactured goods (such as firearms and ammunition) to African kings in exchange for slaves. From there, slaves were sent to the West Indies or the east coast of North America to be used for labor. Goods such as cotton, molasses, sugar, and tobacco were then sent from these places back to Europe. Europeans would also trade their goods with Asian countries for tea, cloth, and spices. The triangle trade was in a sense an agreement on established trade routes that led to greater global integration, which ultimately contributed to globalization.
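As a rough, non-authoritative illustration of the trade pattern just described, the three legs of the triangle can be written down as a small data structure; the region names and cargo lists below simply restate the paragraph above and are illustrative assumptions rather than an exhaustive historical record.

```python
# Minimal sketch of the triangular trade legs described above.
# Region and cargo names are illustrative summaries, not a complete list.
TRIANGLE_TRADE = [
    {"from": "Europe", "to": "West Africa",
     "cargo": ["firearms", "ammunition", "other manufactured goods"]},
    {"from": "West Africa", "to": "West Indies / North America",
     "cargo": ["enslaved people"]},
    {"from": "West Indies / North America", "to": "Europe",
     "cargo": ["cotton", "molasses", "sugar", "tobacco"]},
]

def describe_leg(leg):
    """Return a one-line description of a single leg of the triangle."""
    return f'{leg["from"]} -> {leg["to"]}: {", ".join(leg["cargo"])}'

for leg in TRIANGLE_TRADE:
    print(describe_leg(leg))
```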
Along with the control Europe gained over global trade came several treaties and laws. The Regulating Act 1773 was passed to regulate the East India Company's affairs in India and London. In 1748, the Treaty of Aix-la-Chapelle ended the War of the Austrian Succession but failed to settle the commercial struggle between England and France in the West Indies, Africa, and India. The treaty was an attempt to regulate trade and market expansion between the two powers, but it was ultimately unsuccessful.
Globalization at this time was hindered by war, disease, and population growth in certain areas. The Corn Laws, established to regulate the import and export of grain in England, restricted trade and, through their tariffs and import restrictions, hindered the market economy and the expansion of globalization. Eventually, Ricardian economics became prominent and allowed for improved trade arrangements, notably with Portugal.
Transition into modern globalization
According to Sebastian Conrad, proto-globalization is marked by a "rise of national chauvinism, racism, Social Darwinism, and genocidal thinking", which arose in connection with the "establishment of a world economy". Beginning in the 1870s, the global trade cycle started to cement itself, so that more nations' economies depended on one another than in any previous era. Domino effects in this new world trade cycle led to both worldwide recessions and worldwide economic booms. Modelski describes the late period of proto-globalization as a "thick range of global networks extending throughout the world at high speed and covering all components of society". By the 1750s, contact between Europe, Africa, Asia, and the Americas had grown into a stable multilateral interdependency, which was echoed in the modern globalization period.
Shift in capital
Although the North Atlantic World dominated the global system before proto-globalization, a more "multipolar global economy" started taking form around the early 19th century, and capital was becoming highly mobile. By the end of the 19th century, 17% of British capital wealth was held overseas, and the share of capital invested overseas nearly doubled, to 33%, by 1913. Germany invested one-fifth of its total domestic savings abroad in 1880 and, like Britain, increased its wealth tremendously in the early 20th century. Net foreign investment as a share of total domestic savings was 35% in 1860, 47% in 1880, and 53% in the years before the Great War. Global investment rose steadily across societies, and those able to invest put more and more of their domestic savings into international investments.
The ability to mobilize capital was due to the Industrial Revolution and the beginnings of mechanized production (most prominent in Great Britain). During proto-globalization, "merchant capitalists in many societies quickly became aware of potential markets and new producers and began to link them together in new patterns of world trade". The expansion of slave production and the exploitation of the Americas put Europeans at the top of the economic network. During the modern globalization period, mass production allowed the development of a stronger, more complex global network of trade. Another element of European success between 1750 and 1850 was the limitation and "relative failure" of the Afro-Asian industrial revolution. The movement into modern globalization was marked by the economic drain of capital into Europe.
Shift in culture
Like capital, individuals became highly mobile toward the end of proto-globalization. The period was one of "mutual influence, hybridization, and cross-cultural entanglement". Many historians point to this web of national entanglements and agreements as a cause of the intensity and vast scope of involvement in World War I. Between 1750 and 1880, the expansion of worldwide integration was driven by new capacities in production, transportation, and communication.
The end of proto-globalization also marked the final phase of the "great domestication". After the 1650s, the process of regular and intensive agrarian exploitation was complete, and human population began to increase almost exponentially with the end of the great pandemics. At the end of proto-globalization and the cusp of modern globalization, population began to "recover in Central and South America", where, at the beginning of proto-globalization, European-imported illnesses had savagely reduced indigenous populations. The importation of nutritious crop varieties from Central and South America created a more fertile and resilient population to forge ahead into modern globalization. The larger population pushed individuals in highly populated areas to "spill into less populous forested and grazing lands, and bring them under cultivation". This development led to an increase in agricultural production and export trade.
Another development that led to the shift toward modern globalization was the emergence of a more politicized system. The proto-globalization period saw the steady expansion of larger states from the Indonesian islands to northern Scandinavia. The settlement of populations made it easier for governments to tax, raise armies, organize a labor force, and create a sustainable economy. The development and streamlining of these elements led to an increase in peripheral players in the game of globalization. The stable legal institutions developed in the late proto-globalization and early modern globalization periods brought economic advances, intellectual property rights (most prominently in England), general geographical stability, and generational societal improvement.
The shift in the exchange of technological advancements was another reason for the transition to modern globalization. In the early 19th century, Europeans traveled the world to accumulate an "impressive knowledge about languages, religions, customs, and political orders of other countries". By the end of the 19th century, Europe was no longer receiving any significant technological innovations from Asia.
Shift in global networks
The existing global networks led to the creation of new networks and new production. By 1880, there was a renewed thrust of European colonial expansion. The shift to modern globalization was slow, overlapping, and interactive. By the mid-19th century, noncompeting goods were being exchanged between continents, markets for widely used commodities had developed, and labor was becoming globally integrated. Modern globalization emerged as the general expansion of socio-economic networks grew more elaborate; an example of this is the development and establishment of Freemasonry. Existing trading networks grew and capital and commodity flows intensified, while the permanence of long-term interdependencies remained unchanged. By the beginning of the modern globalization period, European colonial expansion turned back in on itself, and national societies began to regret economic integration and attempted to limit its effects. Bayly, Hopkins, and others stress that proto-globalization's transformation into modern globalization was a complex process that took place at different times in different regions and involved the hold-over of older notions of value and rarity that had their origins in the pre-modern period. This led to an age of economic deglobalization and world wars, which ended after 1945.
See also
Age of Discovery
History of globalization
Archaic globalization
Early modern period
References
Notes
Sources
Sharp, Paul. "Why Globalization Might Have Started in the Eighteenth Century", VoxEU, May 16, 2008. Accessed November 2009.
External links
A Quick Guide to the World History of Globalization
Bakerova, Katarina. "Slaves", African Cultural Center, California, 1991.
"Corn Laws", Encyclopædia Britannica.
"East India Company", Encyclopædia Britannica.
"Regulating Act", Encyclopædia Britannica.
"International Trade", Encyclopædia Britannica.
"Treaty of Aix-la-Chapelle", Encyclopædia Britannica.
Acculturation
Acculturation is a process of social, psychological, and cultural change that stems from the balancing of two cultures while adapting to the prevailing culture of the society. Acculturation is a process in which an individual adopts, acquires and adjusts to a new cultural environment as a result of being placed into a new culture, or when another culture is brought to someone. Individuals of a differing culture try to incorporate themselves into the new more prevalent culture by participating in aspects of the more prevalent culture, such as their traditions, but still hold onto their original cultural values and traditions. The effects of acculturation can be seen at multiple levels in both the devotee of the prevailing culture and those who are assimilating into the culture.
At the group level, acculturation often results in changes to culture, religious practices, health care, and other social institutions. There are also significant ramifications for the food, clothing, and language of those being introduced to the overarching culture.
At the individual level, the process of acculturation refers to the socialization process by which foreign-born individuals blend the values, customs, norms, cultural attitudes, and behaviors of the overarching host culture. This process has been linked to changes in daily behaviour, as well as numerous changes in psychological and physical well-being. As enculturation is used to describe the process of first-culture learning, acculturation can be thought of as second-culture learning.
Under the circumstances commonly seen in today's society, the process of acculturation typically occurs over a large span of time, across a few generations. Physical force can be seen in some instances of acculturation, and can cause it to occur more rapidly, but it is not a main component of the process. More commonly, the process occurs through social pressure or constant exposure to the more prevalent host culture.
Scholars in different disciplines have developed more than 100 different theories of acculturation, but the concept of acculturation has only been studied scientifically since 1918. As it has been approached at different times from the fields of psychology, anthropology, and sociology, numerous theories and definitions have emerged to describe elements of the acculturative process. Despite definitions and evidence that acculturation entails a two-way process of change, research and theory have primarily focused on the adjustments and adaptations made by minorities such as immigrants, refugees, and indigenous people in response to their contact with the dominant majority. Contemporary research has primarily focused on different strategies of acculturation, how variations in acculturation affect individuals, and interventions to make this process easier.
Historical approaches
The history of Western civilization, and in particular the histories of Europe and the United States, are largely defined by patterns of acculturation.
One of the most notable forms of acculturation is imperialism, the most common progenitor of direct cultural change. Although these cultural changes may seem simple, the combined results are both robust and complex, impacting both groups and individuals from the original culture and the host culture. Anthropologists, historians, and sociologists have studied acculturation with dominance almost exclusively, primarily in the context of colonialism, as a result of the expansion of western European peoples throughout the world during the past five centuries.
The first psychological theory of acculturation was proposed in W.I. Thomas and Florian Znaniecki's 1918 study, The Polish Peasant in Europe and America. From studying Polish immigrants in Chicago, they illustrated three forms of acculturation corresponding to three personality types: Bohemian (adopting the host culture and abandoning their culture of origin), Philistine (failing to adopt the host culture but preserving their culture of origin), and creative-type (able to adapt to the host culture while preserving their culture of origin). In 1936, Redfield, Linton, and Herskovits provided the first widely used definition of acculturation, describing it as the phenomena that result when groups of individuals with different cultures come into continuous first-hand contact, with subsequent changes in the original cultural patterns of either or both groups.
Long before efforts toward racial and cultural integration in the United States arose, the common process was assimilation. In 1964, Milton Gordon's book Assimilation in American Life outlined seven stages of the assimilative process, setting the stage for literature on this topic. Later, Young Yun Kim authored a reiteration of Gordon's work, but argued that cross-cultural adaptation is a multi-staged process. Kim's theory focused on the unitary nature of psychological and social processes and the reciprocal functional interdependence of the person and the environment. Although this view was the earliest to fuse micro-psychological and macro-social factors into an integrated theory, it is clearly focused on assimilation rather than racial or ethnic integration. In Kim's approach, assimilation is unilinear and the sojourner must conform to the majority group culture in order to be "communicatively competent." According to Gudykunst and Kim (2003), the "cross-cultural adaptation process involves a continuous interplay of deculturation and acculturation that brings about change in strangers in the direction of assimilation, the highest degree of adaptation theoretically conceivable." This view has been heavily criticized, since the biological science definition of adaptation refers to the random mutation of new forms of life, not the convergence of a monoculture (Kramer, 2003).
In contradistinction to Gudykunst and Kim's version of adaptive evolution, Eric M. Kramer developed his theory of Cultural Fusion (1997a, 2000a, 2010, 2011, 2012), maintaining clear conceptual distinctions between assimilation, adaptation, and integration. According to Kramer, assimilation involves conformity to a pre-existing form. Kramer's (2000a, 2000b, 2000c, 2003, 2009, 2011) theory of Cultural Fusion, which is based on systems theory and hermeneutics, argues that it is impossible for a person to unlearn themselves and that, by definition, "growth" is not a zero-sum process that requires the dissolution of one form for another to come into being, but rather a process of learning new languages and cultural repertoires (ways of thinking, cooking, playing, working, worshiping, and so forth). In other words, Kramer argues that one need not unlearn a language to learn a new one, nor does one have to unlearn who one is to learn new ways of dancing, cooking, talking, and so forth. Unlike Gudykunst and Kim (2003), Kramer argues that this blending of language and culture results in cognitive complexity, or the ability to switch between cultural repertoires. To put Kramer's ideas simply, learning is growth rather than unlearning.
Conceptual models
Theory of Dimensional Accrual and Dissociation
Although numerous models of acculturation exist, the most complete models take into consideration the changes occurring at the group and individual levels of both interacting groups. To understand acculturation at the group level, one must first look at the nature of both cultures before they come into contact with one another. A useful approach is Eric Kramer's theory of Dimensional Accrual and Dissociation (DAD). Two fundamental premises in Kramer's DAD theory are the concepts of hermeneutics and semiotics, which imply that identity, meaning, communication, and learning all depend on differences or variance. According to this view, total assimilation would result in a monoculture devoid of personal identity, meaning, and communication. Kramer's DAD theory also utilizes concepts from several scholars, most notably Jean Gebser and Lewis Mumford, to synthesize explanations of widely observed cultural expressions and differences.
Kramer's theory identifies three communication styles (idolic, symbolic, or signalic) to explain cultural differences. It is important to note that in this theory no single mode of communication is inherently superior, and no final solution to intercultural conflict is suggested. Instead, Kramer puts forth three integrated theories: the theory of Dimensional Accrual and Dissociation, Cultural Fusion Theory, and Cultural Churning Theory.
For instance, according to Kramer's DAD theory, a statue of a god in an idolic community is the god, and stealing it is a highly punishable offense. For example, many people in India believe that a statue of the god Ganesh is the god himself; to take such a statue from its temple is more than theft, it is blasphemy. Idolic reality involves strong emotional identification, where a holy relic does not simply symbolize the sacred, it is sacred. By contrast, a Christian crucifix is symbolic in nature: it represents, or stands for, God. Lastly, the signalic modality is far less emotional and increasingly dissociated.
Kramer refers to changes in each culture due to acculturation as co-evolution. He also addresses what he calls the qualities of "out vectors", which concern the ways in which the former and new cultures make contact. Kramer uses the phrase "interaction potential" to refer to differences in individual or group acculturative processes. For example, the process of acculturation is markedly different if one enters the host society as an immigrant or as a refugee. Moreover, this idea encapsulates the importance of how receptive a host culture is to the newcomer, how easy it is for the newcomer to interact with and get to know the host, and how this interaction affects both the newcomer and the host.
Fourfold models
The fourfold model is a bilinear model that categorizes acculturation strategies along two dimensions. The first dimension concerns the retention or rejection of an individual's minority or native culture (i.e. "Is it considered to be of value to maintain one's identity and characteristics?"), whereas the second dimension concerns the adoption or rejection of the dominant group or host culture. ("Is it considered to be of value to maintain relationships with the larger society?") From this, four acculturation strategies emerge.
Assimilation occurs when individuals adopt the cultural norms of a dominant or host culture, over their original culture. Sometimes it is forced by governments.
Separation occurs when individuals reject the dominant or host culture in favor of preserving their culture of origin. Separation is often facilitated by immigration to ethnic enclaves.
Integration occurs when individuals can adopt the cultural norms of the dominant or host culture while maintaining their culture of origin. Integration leads to, and is often synonymous with biculturalism.
Marginalization occurs when individuals reject both their culture of origin and the dominant host culture.
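The two dimensions just described combine into the four strategies in a straightforward way. As a minimal sketch (an illustration added here, not part of the cited literature; the function and parameter names are assumptions), the mapping can be expressed as:

```python
# Sketch of the fourfold (bilinear) model: two yes/no dimensions -> four strategies.
# Parameter names are illustrative; the mapping follows the definitions above.
def acculturation_strategy(maintains_heritage_culture: bool,
                           adopts_host_culture: bool) -> str:
    if maintains_heritage_culture and adopts_host_culture:
        return "integration"      # keeps culture of origin and adopts host culture
    if adopts_host_culture:
        return "assimilation"     # adopts host culture over culture of origin
    if maintains_heritage_culture:
        return "separation"       # preserves culture of origin, rejects host culture
    return "marginalization"      # rejects both cultures


# Example: an individual who keeps heritage practices and also adopts host norms.
assert acculturation_strategy(True, True) == "integration"
```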
Studies suggest that individuals' respective acculturation strategy can differ between their private and public life spheres. For instance, an individual may reject the values and norms of the dominant culture in their private life (separation), whereas they might adapt to the dominant culture in public parts of their life (i.e., integration or assimilation).
Predictors of acculturation strategies
The fourfold models used to describe individual attitudes of immigrants parallel models used to describe group expectations of the larger society and how groups should acculturate. In a melting pot society, in which a harmonious and homogenous culture is promoted, assimilation is the endorsed acculturation strategy. In segregationist societies, in which humans are separated into racial, ethnic and/or religious groups in daily life, a separation acculturation strategy is endorsed. In a multiculturalist society, in which multiple cultures are accepted and appreciated, individuals are encouraged to adopt an integrationist approach to acculturation. In societies where cultural exclusion is promoted, individuals often adopt marginalization strategies of acculturation.
Attitudes towards acculturation, and thus the range of acculturation strategies available, have not been consistent over time. For example, for most of American history, policies and attitudes have been based around established ethnic hierarchies with an expectation of one-way assimilation for predominantly White European immigrants. Although the notion of cultural pluralism has existed since the early 20th century, the recognition and promotion of multiculturalism did not become prominent in America until the 1980s. Separatism can still be seen today in autonomous religious communities such as the Amish and the Hutterites. Immediate environment also impacts the availability, advantage, and selection of different acculturation strategies. As individuals immigrate to unequal segments of society, immigrants to areas lower on economic and ethnic hierarchies may encounter limited social mobility and membership in a disadvantaged community. This can be explained by the theory of segmented assimilation, which describes the situation in which immigrant individuals or groups assimilate into the culture of different segments of the host society. Whether they enter the upper, middle, or lower class is largely determined by the socioeconomic status of the previous generation.
In a broad study involving immigrants in 13 immigration-receiving countries, the experience of discrimination was positively related to the maintenance of the immigrants' ethnic culture. In other words, immigrants who maintain their cultural practices and values are more likely to be discriminated against than those who abandon their culture. Further research has also found that the acculturation strategies and experiences of immigrants can be significantly influenced by the acculturation preferences of the members of the host society. The degree of intergroup and interethnic contact has also been shown to influence acculturation preferences between groups, support for multilingual and multicultural maintenance of minority groups, and openness towards multiculturalism. Enhancing understanding of out-groups, nurturing empathy, fostering community, minimizing social distance and prejudice, and shaping positive intentions and behaviors all contribute to improved interethnic and intercultural relations through intergroup contact.
Most individuals show variation in both their ideal and chosen acculturation strategies across different domains of their lives. For example, among immigrants, it is often easier and more desired to acculturate to their host society's attitudes towards politics and government, than it is to acculturate to new attitudes about religion, principles, values, and customs.
Acculturative stress
The large flux of migrants around the world has sparked scholarly interest in acculturation and in how it can specifically affect health by altering levels of stress, access to health resources, and attitudes towards health. The effects of acculturation on physical health are thought to be a major factor in the immigrant paradox, which argues that first-generation immigrants tend to have better health outcomes than non-immigrants. Although this term has been popularized, most of the academic literature supports the opposite conclusion: that immigrants have poorer health outcomes than their host-culture counterparts.
One prominent explanation for the negative health behaviors and outcomes (e.g. substance use, low birth weight) associated with the acculturation process is the acculturative stress theory. Acculturative stress refers to the stress immigrants experience in response to their acculturation. Stressors can include, but are not limited to, the pressures of learning a new language, maintaining one's native language, balancing differing cultural values, and brokering between native and host differences in acceptable social behaviors. Acculturative stress can manifest in many ways, including but not limited to anxiety, depression, substance abuse, and other forms of mental and physical maladaptation. Stress caused by acculturation has been heavily documented in phenomenological research on the acculturation of a large variety of immigrants. This research has shown that acculturation is a "fatiguing experience requiring a constant stream of bodily energy," and is both an "individual and familial endeavor" involving "enduring loneliness caused by seemingly insurmountable language barriers".
One important distinction when it comes to risk for acculturative stress is degree of willingness, or migration status, which can differ greatly if one enters a country as a voluntary immigrant, refugee, asylum seeker, or sojourner. According to several studies, voluntary migrants experience roughly 50% less acculturative stress than refugees, making this an important distinction. According to Schwartz (2010), there are four main categories of migrants:
Voluntary immigrants: those that leave their country of origin to find employment, economic opportunity, advanced education, marriage, or to reunite with family members that have already immigrated.
Refugees: those who have been involuntarily displaced by persecution, war, or natural disasters.
Asylum seekers: those who willingly leave their native country to flee persecution or violence.
Sojourners: those who relocate to a new country on a time-limited basis and for a specific purpose. It is important to note that this group fully intends to return to their native country.
This type of entry distinction is important, but acculturative stress can also vary significantly within and between ethnic groups. Much of the scholarly work on this topic has focused on Asian and Latino/a immigrants; however, more research is needed on the effects of acculturative stress on other ethnic immigrant groups. Among U.S. Latinos, higher levels of adoption of the American host culture have been associated with negative effects on health behaviors and outcomes, such as increased risk for depression and discrimination, and increased risk for low self-esteem. However, some individuals also report "finding relief and protection in relationships" and "feeling worse and then feeling better about oneself with increased competencies" during the acculturative process. Again, these differences can be attributed to the age of the immigrant, the manner in which an immigrant exited their home country, and how the immigrant is received by both the original and host cultures. Recent research has compared the acculturative processes of documented and undocumented Mexican-American immigrants and found significant differences in their experiences and levels of acculturative stress. Both groups faced similar risks for depression and discrimination from the host (Americans), but the undocumented group also faced discrimination, hostility, and exclusion from their own ethnic group (Mexicans) because of their unauthorized legal status. These studies highlight the complexities of acculturative stress, the degree of variability in health outcomes, and the need for specificity over generalizations when discussing potential or actual health outcomes.
Researchers recently uncovered another layer of complications in this field, where survey data has either combined several ethnic groups together or has labeled an ethnic group incorrectly. When these generalizations occur, nuances and subtleties about a person or group's experience of acculturation or acculturative stress can be diluted or lost. For example, much of the scholarly literature on this topic uses U.S. Census data. The Census incorrectly labels Arab-Americans as Caucasian or "White". By doing so, this data set omits many factors about the Muslim Arab-American migrant experience, including but not limited to acculturation and acculturative stress. This is of particular importance after the events of September 11, 2001, since Muslim Arab-Americans have faced increased prejudice and discrimination, leaving this religious ethnic community with an increased risk of acculturative stress. Research focusing on the adolescent Muslim Arab American experience of acculturation has also found that youth who experience acculturative stress during the identity formation process are at a higher risk for low self-esteem, anxiety, and depression.
Some researchers argue that education, social support, hopefulness about employment opportunities, financial resources, family cohesion, maintenance of traditional cultural values, and high socioeconomic status (SES) serve as protections or mediators against acculturative stress. Previous work shows that limited education, low SES, and underemployment all increase acculturative stress. Since this field of research is rapidly growing, more research is needed to better understand how certain subgroups are differentially impacted, how stereotypes and biases have influenced former research questions about acculturative stress, and the ways in which acculturative stress can be effectively mediated.
Other outcomes
Culture
When individuals of a certain culture are exposed to another (host) culture that is more prevalent in the area where they live, some aspects of the host culture will likely be taken up and blended with aspects of their original culture. In situations of continuous contact, cultures have exchanged and blended foods, music, dances, clothing, tools, and technologies. This kind of cultural exchange can be related to selective acculturation, in which cultural content is maintained, as seen in those individuals' language use, religious belief, and family norms. Cultural exchange can occur naturally through extended contact, or more quickly through cultural appropriation or cultural imperialism.
Cultural appropriation is the adoption of specific elements of one culture by members of a different cultural group. It can include the introduction of forms of dress or personal adornment, music and art, religion, language, or behavior. These elements are typically imported into the existing culture, and may take on wildly different meanings or lack the subtleties of their original cultural context. Because of this, cultural appropriation for monetary gain is typically viewed negatively, and has sometimes been called "cultural theft".
Cultural imperialism is the practice of promoting the culture or language of one nation in another, usually occurring in situations in which assimilation is the dominant strategy of acculturation. Cultural imperialism can take the form of an active, formal policy or a general attitude regarding cultural superiority.
Language
In some instances, acculturation results in the adoption of another country's language, which is then modified over time to become a new, distinct language. For example, Hanzi, the writing system of the Chinese language, has been adapted and modified by nearby cultures, including Japan (as kanji), Korea (as hanja), and Vietnam (as chữ Hán). Jews, often living as ethnic minorities, developed distinct languages derived from the common languages of the countries in which they lived (for example, Yiddish from High German and Ladino from Old Spanish). Another common effect of acculturation on language is the formation of pidgin languages. A pidgin is a mixed language that has developed to help communication between members of different cultures in contact, usually in situations of trade or colonialism. For example, Pidgin English is a simplified form of English mixed with some of the language of another culture. Some pidgin languages can develop into creole languages, which are spoken as a first language.
Language plays a pivotal role in cultural heritage, serving as both a foundation for group identity and a means for transmitting culture in situations of contact between languages. Language acculturation strategies, attitudes and identities can also influence the sociolinguistic development of languages in bi/multilingual contexts.
Food
Food habits and food consumption are affected by acculturation on different levels. Research has indicated that food habits are discreet and practiced privately, and change occurs slowly. Consumption of new food items is affected by the availability of native ingredients, convenience, and cost; therefore, an immediate change is likely to occur. Aspects of food acculturation include the preparation, presentation, and consumption of food. Different cultures have different ways in which they prepare, serve, and eat their food. When exposed to another culture for an extended period of time, individuals tend to take aspects of the "host" culture's food customs and implement them with their own. In cases such as these, acculturation is heavily influenced by general food knowledge, or knowing the unique kinds of food different cultures traditionally have, the media, and social interaction. It allows for different cultures to be exposed to one another, causing some aspects to intertwine and also become more acceptable to the individuals of each of the respective cultures.
Controversies and debate
Definitions
Anthropologists have made a semantic distinction between group and individual levels of acculturation. In such instances, the term transculturation is used to define individual foreign-origin acculturation, and occurs on a smaller scale with less visible impact. Scholars making this distinction use the term "acculturation" only to address large-scale cultural transactions. Acculturation, then, is the process by which migrants gain new information and insight about the norms and values of their culture and adapt their behaviors to the host culture.
Recommended models
Research has largely indicated that the integrationist model of acculturation leads to the most favorable psychological outcomes and marginalization to the least favorable. While an initial meta-analysis of the acculturation literature found these results to be unclear, a more thorough meta-analysis of 40 studies showed that integration was indeed found to have a "significant, weak, and positive relationship with psychological and sociocultural adjustment". A study by John W. Berry (2006) that included 7,997 immigrant adolescents from 13 countries found that immigrant boys tend to have slightly better psychological adaptation than immigrant girls. Overall, immigrants in the integration profile were found to be better adapted than those in other profiles. Perceived discrimination was also negatively linked to both psychological and sociocultural adaptation. Various factors can explain the differences in these findings, including how different the two interacting cultures are and the degree of integration difficulty (bicultural identity integration). These types of factors partially explain why general statements about approaches to acculturation are not sufficient to predict successful adaptation. As research in this area has expanded, one study has identified marginalization as being a maladaptive acculturation strategy.
Typological approach
Several theorists have stated that the fourfold models of acculturation are too simplistic to have predictive validity. Common criticisms of such models include the fact that individuals often do not fall neatly into any of the four categories, and that there is very little evidence for the applied existence of the marginalization acculturation strategy. In addition, the bi-directionality of acculturation means that whenever two groups are engaged in cultural exchange, there are 16 permutations of acculturation strategies possible (e.g. an integrationist individual within an assimilationist host culture). A further criticism raised in the research (see "Rethinking the Concept of Acculturation") is that people rarely perceive themselves as facing a simple choice between assimilating into the other culture and continuing their heritage culture. The interactive acculturation model represents one proposed alternative to the typological approach, attempting to explain the acculturation process within a framework of state policies and the dynamic interplay of host-community and immigrant acculturation orientations.
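The count of 16 permutations mentioned above is simply the four individual strategies crossed with the four orientations a host society can take toward them. A small sketch (an added illustration under that assumption, not drawn from the cited research) makes the arithmetic explicit:

```python
from itertools import product

# The four strategies of the fourfold model, applied both to the individual and,
# by analogy, to the host society's expectation of how groups should acculturate.
STRATEGIES = ["integration", "assimilation", "separation", "marginalization"]

# Crossing the two sets yields the 16 pairings noted above, e.g. an
# integrationist individual within an assimilationist host culture.
pairings = list(product(STRATEGIES, STRATEGIES))
assert len(pairings) == 4 * 4 == 16
print(pairings[0])  # ('integration', 'integration')
```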
See also
Naturalization
Acclimatization
Socialization
Deculturalization
Globalization
Nationalization
Acculturation gap
Educational anthropology
Ethnocentrism
Cultural relativism
Cultural conflict
Inculturation
Cultural competence
Language shift
Westernization
Cultural identity
Linguistic imperialism
Intercultural communication
Fusion music
Fusion cuisine
Notes
References
Ward, C. (2001). The A, B, Cs of acculturation. In D. Matsumoto (Ed.) "The handbook of culture and psychology" (pp. 411–445). Oxford, United Kingdom: Oxford University Press.
Cultural sensitivity
Cultural sensitivity, also referred to as cross-cultural sensitivity or cultural awareness, is the knowledge, awareness, and acceptance of other cultures and others' cultural identities. It is related to cultural competence (the skills needed for effective communication with people of other cultures, which includes cross-cultural competence), and is sometimes regarded as the precursor to the achievement of cultural competence, but is a more commonly used term. On the individual level, cultural sensitivity is a state of mind regarding interactions with those different from oneself. Cultural sensitivity enables travelers, workers, and others to successfully navigate interactions with a culture other than their own.
Cultural diversity includes demographic factors (such as race, gender, and age) as well as values and cultural norms. Cultural sensitivity counters ethnocentrism and involves intercultural communication, among related skills. Most countries' populations include minority groups comprising indigenous peoples, subcultures, and immigrants who approach life from a different perspective and mindset than that of the dominant culture. Workplaces, educational institutions, media, and organizations of all types are becoming more mindful of being culturally sensitive to all stakeholders and the population at large. Increasingly, cultural sensitivity training is being incorporated into workplaces and into students' curricula at all levels. The training is usually aimed at the dominant culture, but in multicultural societies it may also be taught to migrants to teach them about other minority groups. The concept is also taught to expatriates working in other countries to familiarize them with other customs and traditions.
Definitions and aims
There are a variety of definitions surrounding cultural sensitivity. All of these definitions revolve around the idea that it is the knowledge, awareness, and acceptance of other cultures. It includes "the willingness, ability and sensitivity required to understand people with different backgrounds", and the acceptance of diversity. Crucially, it "refers to being aware that cultural differences and similarities between people exist without assigning them a value." Definitions also include the skill set acquired by this learning. Cultural awareness is having the knowledge of the existence of multiple different cultures with different attitudes and worldviews, while cultural sensitivity means the acceptance of those differences and accepting that one's own culture is not superior.
In 2008, cultural sensitivity was found to be a widely used term in a literature search of global databases, both popular and scholarly. Based on this literature, cultural sensitivity is defined as "employing one's knowledge, consideration, understanding, [and] respect, and tailoring [it] after realizing awareness of self and others, and encountering a diverse group or individual".
There are many different types of cultural diversity in any society, including factors such as marginalized or socially excluded groups; ethnicity; sexual orientation; disability; values and cultural norms. Cultural sensitivity is relevant to all of these.
Support for cultural sensitivity is based on ideological or practical considerations. Former Secretary-General of the United Nations Kofi Annan advocated cultural sensitivity as an essential value in the modern world.
Factors for cultural awareness
Factors that affect cultural sensitivity include religion, ethnicity, race, national origin, language, and gender. Other areas to consider include age, education, socio-economic status, sexual orientation, and mental or physical challenges.
Cultural competence
Awareness and understanding of other cultures is a key factor in cultural sensitivity. Cultural competence relies on the ability of both parties involved to have a pleasant and successful interaction. The term "cultural competence" is often used to describe the skills acquired to embody cultural sensitivity, particularly in the workplace. Cultural sensitivity requires flexibility. Louise Rasmussen and Winston Sieck led studies of members of the U.S. military that identified 12 core aspects (organized into four subgroups) of successful cross-cultural interactions. These aspects rely on the subjects being able to remain diplomatic and to learn from intercultural interactions.
The 12 core aspects, grouped into their four subgroups, are:
A diplomatic stance: Maintaining a Mission Orientation; Understanding Self in Social Context; Managing Attitude Towards Culture
Cultural Learning: Self-Directed Learning of Cultures; Developing Reliable Information Sources; Learning New Cultures Efficiently
Cultural Reasoning: Coping with Cultural Surprises; Developing Cultural Explanations of Behavior; Cultural Perspective Taking
Intercultural Interaction: Intercultural Communication Planning; Disciplined Self Presentation; Reflection and Feedback
In the dominant culture
Cultural awareness and sensitivity help to overcome inherent ethnocentrism by learning about other cultures and how various modes and expectations may differ between those cultures. These differences range from ethical, religious, and social attitudes to body language and other nonverbal communication. Cultural sensitivity is just one dimension of cultural competence, and has an impact on ethnocentrism and other factors related to culture. The results of developing cultural sensitivity are considered positive: communication is improved, leading to more effective interaction between the people concerned, and improved outcome or interventions for the client or customer.
The concept is taught in many workplaces, as it is an essential skill for managing and building teams in a multicultural society. Intercultural communication has been cited as one of the two biggest challenges within the workplace, along with internal communications (mission statement, meetings, etc.).
In healthcare
Cultural sensitivity training for health care providers can improve the satisfaction and health outcomes of patients from different minority groups. Because standard measures for diagnosis and prognosis relate to established norms, cultural sensitivity is essential. A person's norms are defined by their culture, and these may differ significantly from those of the treating medical professional. Language barriers, beliefs, and trust are just a few of the factors to consider when treating patients of other cultural groups. Understanding cultural beliefs regarding health and care can give healthcare professionals a better idea of how to proceed with providing care.
It is important to understand the concept behind the buzzword in the healthcare setting, as cultural sensitivity can increase nurses' appreciation of and communication with other professionals as well as patients. Part of providing culturally sensitive care is to develop cultural competence as an ongoing process. Nurses and employers should be committed to educating themselves about different patients' beliefs, values, and perspectives.
In therapy
In a study on narrative theory in therapy, Cynthia C. Morris concluded that culture is made up of the collected stories of a group of people. In the practice of therapy, understanding a patient's point of view is vital to the clinician. Cultural sensitivity allows a clinician to gain a more well-rounded understanding of where the client is coming from, why they may think about things in a certain way, or their approach to thought in general. Culturally sensitive therapy approaches psychotherapy by emphasizing how the clinician understands the client's race, ethnicity, sexual orientation, gender, religion and any other aspects that relate to culture and identity. Culturally sensitive therapists help their patients feel more seen and understood, while those without cultural sensitivity may drive patients away from therapy altogether.
Working and travelling abroad
On the individual level, cultural sensitivity allows travelers and expatriate workers to successfully navigate a different culture with which they are interacting. It can increase the security of travelers because it helps them understand interactions from the perspective of the native culture. One individual's understanding of another's culture can increase respect for the other individual, allowing for more effective communication and interactions. For managers as well as employees, cultural sensitivity is increasingly vital in business and government jobs.
This cross-cultural sensitivity can lead to both competitiveness and success when working with or within organizations located in a different country. These benefits highlight the consideration of how two societies and cultures operate, particularly with respect to how they are similar to and different from each other. Being able to determine these in terms of thoughts, behaviors, beliefs, and expressions, among others, makes it possible to solve problems meaningfully and act in a manner that is acceptable to all stakeholders.
Lacking awareness of foreign cultures can also have adverse consequences, which can be as severe as legal action. Similarly, practices that are considered proper etiquette in one country can be considered violations of business codes in another.
Tourism
Tourism is a major opportunity to experience and interact with other cultures, and it is therefore one of the most important times to be culturally sensitive. There are major faux pas to be aware of when interacting with locals. Being aware of table manners, common phrases, local dress, etiquette at holy sites, and other aspects of the culture is a good way to be sensitive to the destination and to engage with it.
Tourism to areas with Indigenous people requires more awareness and cultural sensitivity. Many of these areas have been colonized and turned into tourist attractions that put on display the culture that is being erased. These kinds of attractions lead to stereotyping that negatively impacts the culture rather than exposing others to it. These displays can often turn the culture into an exotic aesthetic that leads to inauthentic portrayals of the culture and furthers stereotypes. This cultural insensitivity happens when cultural practices and products are sold by another cultural group without consent. Due to this, culturally sensitive tourism is an up-and-coming industry that aims to engage with a culture rather than exoticize it.
Models
Bennett scale
Milton Bennett was the first to create a model or framework designed to help comprehension of various stages of intercultural sensitivity. This became known as the Developmental Model of Intercultural Sensitivity (DMIS), otherwise referred to as the Bennett scale. The scale has been adapted and developed since 1986 and is included in The International Encyclopedia of Intercultural Communication (2017).
Bennett developed the framework of the model to show the intercultural sensitivity a person may experience. Intercultural sensitivity is defined as "an individual's ability to develop emotion towards understanding and appreciating cultural differences that promotes appropriate and effective behavior in intercultural communication".
According to Bennett, “As one’s perceptual organization of cultural difference becomes more complex, one’s experience of culture becomes more sophisticated and the potential for exercising competence in intercultural relations increases." By recognizing how cultural difference is being experienced, predictions about the effectiveness of intercultural communication can be made.
Bennett describes a continuum, which moves from ethnocentrism to "ethnorelativism". The model includes six stages of experiencing difference.
The six stages explained in the model include the following (an illustrative sketch follows the list):
Denial - people fail to recognize distinctions among cultures or consider them to be irrelevant
Defense - people perceive other cultures in a competitive, us-against-them way
Minimization - people assume that their distinct cultural worldview is shared by others, or perceive their culture's values as fundamental or universal human values that apply to everyone
Acceptance - people recognize that different beliefs and values are shaped by culture
Adaptation - people are able to adopt the perspective of another culture
Integration - a person's identity or sense of self evolves to incorporate the values, beliefs, perspectives, and behaviors of other cultures
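The ordering of the stages lends itself to a simple data representation. The short Python sketch below is purely illustrative and not part of Bennett's published work: it encodes the six stages in order and classifies each as lying on the ethnocentric or the ethnorelative side of the continuum, on the common convention that the three earlier stages are ethnocentric and the three later ones ethnorelative.

from enum import IntEnum

class DMISStage(IntEnum):
    # Ordered from most ethnocentric to most ethnorelative,
    # following the six stages listed above.
    DENIAL = 1
    DEFENSE = 2
    MINIMIZATION = 3
    ACCEPTANCE = 4
    ADAPTATION = 5
    INTEGRATION = 6

def orientation(stage: DMISStage) -> str:
    # The first three stages sit on the ethnocentric side of the
    # continuum, the last three on the ethnorelative side.
    return "ethnocentric" if stage <= DMISStage.MINIMIZATION else "ethnorelative"

# Example: Minimization is still an ethnocentric stage, even though it is
# the closest of the three to the ethnorelative half of the continuum.
print(orientation(DMISStage.MINIMIZATION))  # ethnocentric
print(orientation(DMISStage.ACCEPTANCE))    # ethnorelative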
Community Tool Box
The Community Tool Box was developed by the University of Kansas' Center for Community Health and Development, a designated World Health Organization Collaborating Centre for Community Health and Development. The Centre's "Building Culturally Competent Organizations" is a guide for diversity and inclusion training in the workplace. The Tool Box refers to three levels leading up to the fourth, the end goal:
cultural knowledge
cultural awareness
cultural sensitivity
cultural competence
Each step builds on the previous one, with the final one, cultural competence, being the stage where the organization has effectively enabled better outcomes in a multicultural workforce.
Competence training
Training to achieve cultural competence or cultural sensitivity is undertaken in schools, workplaces, and healthcare settings.
See also
Cross-cultural communication
Cultural assimilation
Cultural behavior
Cultural diversity
Cultural identity
Cultural intelligence
Cultural pluralism
Cultural relativism
Intercultural learning
Intercultural therapy
Multiculturalism
Social identity
References
Further reading
Cultural Sensitivity: A Concept Analysis
Intercultural Sensitivity and Conflict Management Styles in Cross-Cultural Organizational Situations
Cross-cultural competency tools
Cross-cultural studies
Cultural competence
| 0.765041 | 0.990266 | 0.757595 |
Anecdotal evidence
|
Anecdotal evidence (or anecdata) is evidence based on descriptions and reports of individual, personal experiences or observations, collected in a non-systematic manner.
The word anecdotal covers a variety of forms of evidence. It refers to personal experiences, self-reported claims, or eyewitness accounts of others, including those from fictional sources, making it a broad category that can lead to confusion due to its varied interpretations.
Anecdotal evidence can be true or false but is not usually subjected to the scholarly method, the scientific method, or the rules of legal, historical, academic, or intellectual rigor, meaning that there are few or no safeguards against fabrication or inaccuracy. However, the use of anecdotal reports in advertising or promotion of a product, service, or idea may be considered a testimonial, which is highly regulated in some jurisdictions.
The persuasiveness of anecdotal evidence compared to that of statistical evidence has been a subject of debate; some studies have argued for the presence of a generalized tendency to overvalue anecdotal evidence, whereas others have emphasized the types of argument as a prerequisite or rejected the conclusion altogether.
Scientific context
In science, definitions of anecdotal evidence include:
"casual observations or indications rather than rigorous or scientific analysis"
"information passed along by word-of-mouth but not documented scientifically"
"evidence that comes from an individual experience. This may be the experience of a person with an illness or the experience of a practitioner based on one or more patients outside a formal research study."
"the report of an experience by one or more persons that is not objectively documented or an experience or outcome that occurred outside of a controlled environment"
Anecdotal evidence may be considered within the scope of scientific method as some anecdotal evidence can be both empirical and verifiable, e.g. in the use of case studies in medicine. Other anecdotal evidence, however, does not qualify as scientific evidence, because its nature prevents it from being investigated by the scientific method, for instance, in that of folklore or in the case of intentionally fictional anecdotes. Where only one or a few anecdotes are presented, there is a chance that they may be unreliable due to cherry-picked or otherwise non-representative samples of typical cases. Similarly, psychologists have found that due to cognitive bias people are more likely to remember notable or unusual examples rather than typical examples. Thus, even when accurate, anecdotal evidence is not necessarily representative of a typical experience. Accurate determination of whether an anecdote is typical requires statistical evidence. Misuse of anecdotal evidence in the form of argument from anecdote is an informal fallacy and is sometimes referred to as the "person who" fallacy ("I know a person who..."; "I know of a case where..." etc.) which places undue weight on experiences of close peers which may not be typical.
Anecdotal evidence can have varying degrees of formality. For instance, in medicine, published anecdotal evidence by a trained observer (a doctor) is called a case report, and is subjected to formal peer review. Although such evidence is not seen as conclusive, researchers may sometimes regard it as an invitation to more rigorous scientific study of the phenomenon in question. For instance, one study found that 35 of 47 anecdotal reports of drug side-effects were later sustained as "clearly correct."
Anecdotal evidence is considered the least certain type of scientific information. Researchers may use anecdotal evidence for suggesting new hypotheses, but never as validating evidence.
If an anecdote illustrates a desired conclusion rather than a logical conclusion, it is considered a faulty or hasty generalization.
In any case where some factor affects the probability of an outcome, rather than uniquely determining it, selected individual cases prove nothing; e.g. "my grandfather smoked two packs a day until he died at 90" and "my sister never smoked but died of lung cancer". Anecdotes often refer to the exception, rather than the rule: "Anecdotes are useless precisely because they may point to idiosyncratic responses."
In medicine, anecdotal evidence is also subject to placebo effects.
Legal
In the legal sphere, anecdotal evidence, if it passes certain legal requirements and is admitted as testimony, is a common form of evidence used in a court of law. Often this form of anecdotal evidence is the only evidence presented at trial. Scientific evidence in a court of law is called physical evidence, but this is much rarer. Anecdotal evidence, with a few safeguards, represents the bulk of evidence in court.
The legal rigors applied to testimony for it to be considered evidence are that it must be given under oath, that the person is only testifying to their own words and actions, and that someone intentionally lying under oath is subject to perjury. However, these rigors do not make testimony in a court of law equal to scientific evidence, as there are far fewer legal rigors. Testimony about another person's experiences or words is called hearsay and is usually not admissible, though there are certain exceptions. However, any hearsay that is not objected to or thrown out by a judge is considered evidence for a jury. This means that trials contain quite a bit of anecdotal evidence, which is considered as relevant evidence by a jury. Eyewitness testimony (which is a form of anecdotal evidence) is considered the most compelling form of evidence by a jury.
See also
References
Informal fallacies
Philosophy of science
Skepticism
Evidence
Testimony
Inductive fallacies
Pseudoscience
Diversionary tactics
Misuse of statistics
Anecdotes
| 0.761977 | 0.994244 | 0.75759 |
Decolonization of knowledge
|
Decolonization of knowledge (also epistemic decolonization or epistemological decolonization) is a concept advanced in decolonial scholarship that critiques the perceived hegemony of Western knowledge systems. It seeks to construct and legitimize other knowledge systems by exploring alternative epistemologies, ontologies and methodologies. It is also an intellectual project that aims to "disinfect" academic activities that are believed to have little connection with the objective pursuit of knowledge and truth. The presumption is that if curricula, theories, and knowledge are colonized, it means they have been partly influenced by political, economic, social and cultural considerations. The decolonial knowledge perspective covers a wide variety of subjects including philosophy (epistemology in particular), science, history of science, and other fundamental categories in social science.
Background
Decolonization of knowledge inquires into the historical mechanisms of knowledge production and their perceived colonial and ethnocentric foundations. Budd L. Hall et al. argue that knowledge and the standards that determine the validity of knowledge have been disproportionately informed by the Western system of thought and ways of being in the world. According to Jaco S. Dreyer, the western knowledge system that emerged in Europe during the Renaissance and the Enlightenment was deployed to legitimise Europe’s colonial endeavour, and eventually became a part of colonial rule and the forms of civilization that the colonizers carried with them. This perspective maintains that the knowledge produced by the Western system was deemed superior to that produced by other systems since it had a universal quality. Decolonial scholars concur that the western system of knowledge still continues to determine what should be considered scientific knowledge and continues to "exclude, marginalise and dehumanise" those with different systems of knowledge, expertise and worldviews. Anibal Quijano stated:
In effect, all of the experiences, histories, resources, and cultural products ended up in one global cultural order revolving around European or Western hegemony. Europe’s hegemony over the new model of global power concentrated all forms of the control of subjectivity, culture, and especially knowledge and the production of knowledge under its hegemony... They repressed as much as possible the colonized forms of knowledge production, the models of the production of meaning, their symbolic universe, the model of expression and of objectification and subjectivity.
In her book Decolonizing Methodologies: Research and Indigenous Peoples, Linda Tuhiwai Smith writes:
Imperialism and colonialism brought complete disorder to colonized peoples, disconnecting them from their histories, their landscapes, their languages, their social relations and their own ways of thinking, feeling and interacting with the world.
According to this viewpoint, colonialism has ended in the legal and political sense, but its legacy continues in many "colonial situations" where individuals and groups in historically colonized places are marginalized and exploited. Decolonial scholars refer to this continuing legacy of colonialism as "coloniality", which describes the oppression and exploitation left behind by colonialism in a variety of interrelated domains, including the domain of subjectivity and knowledge.
Origin and development
In community groups and social movements in the Americas, decolonization of knowledge traces its roots back to resistance against colonialism from its very beginning in 1492. Its emergence as an academic concern is rather a recent phenomenon. According to Enrique Dussel, the theme of epistemological decolonization originated with a group of Latin American thinkers. Although the notion of decolonization of knowledge has been an academic topic since the 1970s, Walter Mignolo says it was the ingenious work of Peruvian sociologist Anibal Quijano that "explicitly linked coloniality of power in the political and economic spheres with the coloniality of knowledge." It has developed as "an elaboration of a problematic" that began as a result of several critical stances such as postcolonialism, subaltern studies and postmodernism. Enrique Dussel says epistemological decolonization is structured around the notions of coloniality of power and transmodernity, which trace their roots to the thought of José Carlos Mariátegui, Frantz Fanon and Immanuel Wallerstein. According to Sabelo J. Ndlovu-Gatsheni, although the political, economic, cultural and epistemological dimensions of decolonization were and are intricately connected to each other, attainment of political sovereignty was preferred as a "practical strategic logic of struggles against colonialism." As a result, political decolonization in the twentieth century failed to attain epistemological decolonization, as it did not widely inquire into the complex domain of knowledge.
Themes
According to Alex Broadbent, decolonization is sometimes understood as a rejection of the notion of objectivity, which is seen as a legacy of colonial thought. He argues that universal conceptions of ideas such as "truth" and "fact" are Western constructs that are imposed on foreign cultures. This tradition considers notions of truth and fact as "local", arguing that what is "discovered" or "expressed" in one place or time may not be applicable in another. The concerns of decolonization of knowledge are that the western knowledge system has become a norm for global knowledge and that its methodologies are the only ones deemed appropriate for use in knowledge production. This perceived hegemonic approach towards other knowledge systems is said to have reduced epistemic diversity and established the center of knowledge, eventually suppressing all other knowledge forms. Boaventura de Sousa Santos says "throughout the world, not only are there very diverse forms of knowledge of matter, society, life and spirit, but also many and very diverse concepts of what counts as knowledge and criteria that may be used to validate it." However, it is claimed that this variety of knowledge systems has not gained much recognition. According to Lewis Gordon, the formulation of knowledge in its singular form itself was unknown to times before the emergence of European modernity. Modes of knowledge production and notions of knowledge were so diversified that knowledges, in his opinion, would be a more appropriate description.
According to Walter Mignolo, the modern foundation of knowledge is thus territorial and imperial. This foundation is based on "the socio-historical organization and classification of the world founded on a macro narrative and on a specific concept and principles of knowledge" which finds its roots in European modernity. He articulates epistemic decolonization as an expansive movement that identifies "geo-political locations of theology, secular philosophy and scientific reason" while also affirming "the modes and principles of knowledge that have been denied by the rhetoric of Christianization, civilization, progress, development and market democracy." According to Achille Mbembe, decolonization of knowledge means contesting the hegemonic western epistemology that suppresses anything that is foreseen, conceived and formulated from outside of western epistemology. It has two aspects: a critique of Western knowledge paradigms and the development of new epistemic models. Savo Heleta states that decolonization of knowledge "implies the end of reliance on imposed knowledge, theories and interpretations, and theorizing based on one’s own past and present experiences and interpretation of the world."
Significance
According to Anibal Quijano, epistemological decolonization is necessary for opening up new avenues for intercultural communication and the sharing of experiences and meanings, laying the groundwork for an alternative rationality that could rightfully stake a claim to some degree of universality. Sabelo J. Ndlovu-Gatsheni says epistemological decolonization is essential for addressing the "asymmetrical global intellectual division of labor" in which Europe and North America not only act as teachers of the rest of the world but also serve as the centers for the production of theories and concepts that are ultimately "consumed" by the entire human race.
Approaches
According to Linda Tuhiwai Smith, decolonization "does not mean a total rejection of all theory or research or Western knowledge". In Lewis Gordon's view, decolonization of knowledge mandates a detachment from the "commitments to notions of an epistemic enemy." It rather emphasizes "the appropriation of any and all sources of knowledge" in order to achieve relative epistemic autonomy and epistemic justice for "previously unacknowledged and/or suppressed knowledge traditions."
Indigenous decolonization
Relational model of knowledge
Decolonial scholars inquire into various forms of indigenous knowledges in their efforts to decolonize knowledge and worldviews. Louis Botha et al make the case for a "relational model of knowledge," which they situate within indigenous knowledges. These indigenous knowledges are based on indigenous peoples' perceptions and modes of knowing. They consider indigenous knowledges to be essentially relational because these knowledge traditions place a high value on the relationships between the actors, objects, and settings involved in the development of knowledge. Such "networked" relational approach to knowledge production fosters and encourages connections between the individuals, groups, resources, and other components of knowledge-producing communities. For Louis Botha et al, since it is built on an ontology that acknowledges the spiritual realm as real and essential to knowledge formation, this relationality is also fundamentally spiritual, and feeds axiological concepts about why and how knowledge should be created, preserved, and utilized.
In academia
One of the most crucial aspects of decolonization of knowledge is to rethink the role of the academia, which, according to Louis Yako, an Iraqi-American anthropologist, has become the "biggest enemy of knowledge and the decolonial option." He says Western universities have always served colonial and imperial powers, and the situation has only become worse in the neoliberal age. According to Yako, the first step toward decolonizing academic knowledge production is to carefully examine "how knowledge is produced, by whom, whose works get canonized and taught in foundational theories and courses, and what types of bibliographies and references are mentioned in every book and published article." He criticizes Western universities for their alleged policies regarding research works that undermine foreign and independent sources while favoring citations to "elite" European or American scholars who are commonly considered "foundational" in their respective fields, and calls for an end to this practice.
Shose Kessi et al argue that the goal of academia is "not to reach new orders of homogeneity, but rather greater representation of pluralistic ideas and rigorous knowledge". They invite academics to carefully scrutinize the authors and voices that are presented as authorities on a subject or in the classroom, the methods and epistemologies that are taught or given preference, as well as the academic concerns that are seen as fundamental and the ones that are ignored. They must reconsider the pedagogical tools or approaches used in the learning process for students, as well as examine the indigenous or community knowledge systems that are followed, promoted, or allowed to redefine the learning agenda. The purpose and future of knowledge must also be reevaluated during this process. There have been suggestions for expanding the reading list and creating an inclusive curriculum that incorporates a range of voices and viewpoints in order to represent broader global and historical perspectives. Researchers are urged to investigate outside the Western canons of knowledge to determine whether there are any alternative canons that have been overlooked or disregarded as a result of colonialism.
Ngũgĩ wa Thiong'o, who emphasizes the significance of decolonizing history, memory, and language, has stated that language, not geopolitics, should serve as the initial point of decolonization. According to Mahmood Mamdani, the idea of a university based on a single language is a colonial heritage, as in the case of African universities, which began as a colonial project, with English or French being the project language, and it recognized only one intellectual tradition—the Western tradition. According to Mamdani, university education needs to be more diverse and multilingual, with a focus on not only providing Westernized education in a variety of languages but also on ways to advance non-Western intellectual traditions as living traditions that can support both scholarly and public discourse. Mamdani makes the case for allocating funds to the creation of academic units that may research and instruct in non-Western intellectual traditions. He believes that learning the language in which the tradition has been historically developed is necessary if one wants to access a different intellectual tradition.
Louis Yako opposes the labeling of new scholars as "Marxist", "Foucauldian", "Hegelian", "Kantian", and so on, which he sees as a "colonial method of validating oneself and research" through these scholars. According to Yako, despite the fact that scholars such as Marx, Hegel, Foucault, and many others were all inspired by numerous thinkers before them, they are not identified with the names of such intellectuals. He criticizes the academic peer-review process as a system of "gatekeepers" who regulate the production of knowledge in a given field or about a certain region of the world.
In various disciplines
In order to overcome the perceived constraints of the Western canons of knowledge, proponents of knowledge decolonization call for the decolonization of various academic disciplines, including history, science and the history of science, philosophy, (in particular, epistemology), psychology, sociology, religious studies, and legal studies.
History
According to the official web page of the University of Exeter, the "colonialist worldview," which allegedly prioritises some people's beliefs, rights, and dignity over those of others, has had an impact on the theoretical framework that underpins the modern academic field of history. This modern field of study first developed in Europe during a period of rising nationalism and colonial exploitation, which shaped the historical narratives of the world. This account suggests "that the very ways we are conditioned to look at and think about the past are often derived from imperialist and racialised schools of thought". The decolonial approach in history requires "an examination of the non-western world on its own terms, including before the arrival of European explorers and imperialists". In an effort to understand the world before the fifteenth century, it attempts to situate Western Europe in relation to other historical "great powers" like the Eastern Roman Empire or the Abbasid Caliphate. It "requires rigorous critical study of empire, power and political contestation, alongside close reflection on constructed categories of social difference". According to Walter Mignolo, discovering the variety of local historical traditions is crucial for "restoring the dignity that the Western idea of universal history took away from millions of people".
Modern science
The decolonial approach contests the notion of science as "purely objective, solely empirical, immaculately rational, and thus, singularly truth confirming”. According to this account, such an outlook towards science implies "that reality is discrete and stagnant; immune to its observer’s subjectivity, including their cultural suasions; and dismountable into its component parts whose functioning can then be ascertained through verificationist means". Laila N Boisselle situates modern science within Western philosophy and Western paradigms of knowledge, saying that "different ways of knowing how the world works are fashioned from the cosmology of the observer, and provides opportunities for the development of many sciences". Margaret Blackie and Hanelie Adendorff argue "that the practice of science by scientists has been profoundly influenced by Western modernity". According to this perspective, modern science thus "reflects foundational elements of empiricism according to Francis Bacon, positivism as conceptualized by Comte, and neo-positivism as suggested by the School of Vienna in the early 1900s." Boisselle also suggests that the mainstream scientific perspective that downplays the function or influence of Spirit or God in any manifestation in its processes, is not only Western and modern but also secular in orientation.
Boisselle sought to identify two issues with Western knowledge, including "Western Modern Science". For her, it starts off by seeking to explain the nature of the universe on the basis of reason alone. The second is that it considers itself to be the custodian of all knowledge and to have the power "to authenticate and reject other knowledge." The idea that modern science is the only legitimate method of knowing has been referred to as "scientific fundamentalism" or "scientism". It assumes the role of a gatekeeper by situating "science for all" initiatives on a global scale inside the framework of scientism. As a result, it acquires the power to decide what scientific knowledge is deemed to be "epistemologically rigorous". According to Boaventura de Sousa Santos, in order to decolonize modern science, it is necessary to consider "the partiality of scientific knowledge", i.e. to acknowledge that, like any other system of knowledge, "science is a system of both knowledge and ignorance". For Santos, "scientific knowledge is partial because it does not know everything deemed important and it cannot possibly know everything deemed important". In this regard, Boisselle argues for a "relational science" based on a "relational ontology" that respects “the interconnectedness of physical, mental, emotional, and spiritual aspects of individuals with all living things and with the star world, and the universe”.
Samuel Bendeck Sotillos, with reference to perennial philosophy, critiques modern science for its rejection of metaphysics and spiritual traditions from around the world. He states that "the belief that only the scientific method gives access to valid forms of knowledge is not only flawed but totalitarian, having its roots in the European Enlightenment or the so-called Age of Reason". For him, "This dogmatic outlook is not science, but an ideology known as scientism, which has nothing to do with the proper exercise of the scientific method". This viewpoint challenges the idea that science is Truth, with a capital "T", saying that "contemporary science is largely relegated to dealing with approximations; in doing so, it is always modifying its understanding and thus is in no position to declare what can be finally known with certainty", and it promotes an understanding of science within the confines of its underlying philosophical assumptions concerning physical reality. In this context, Sotillos seeks to revive traditional metaphysics, also known as sacred science or scientia sacra, which is guided by metaphysical principles and is based on the sapiential teachings of world religions.
History of science
Beginning in the middle of the 1980s, postcolonial histories of science are said to constitute a “decentered, diasporic, or ‘global’ rewriting of earlier nation-centred imperial grand narratives.” These histories seek to uncover "counter-histories of science, the legacies of precolonial knowledge, or residues and resurrections of the constitutive relations of colonial science." Instead of "centering scientific institutes in colonial metropoles," this history attempts to examine what Warwick Anderson refers to as "the unstable economy of science’s shifting spatialities as knowledge is transacted, translated, and transformed across the globe". It seeks to eradicate "imperial grand narratives", which are said to provincialize science into a single "indigenous knowledge tradition". Instead, it seeks to recognise "the culturally diverse and global origins of science", and build a cosmopolitan model of science history in place of the narrow view of science as the creation of "lone geniuses". This perspective acknowledges the contributions of other civilizations to science, and offers a "contrageography of science that is not Eurocentric and linear". The central tenet is that the history of science should be seen as a history of transmissions. In this, Prakash Kumar et al. cite Joseph Needham as saying, "modern science...[is] like an ocean into which the rivers from all the world’s civilizations have poured their waters”.
Philosophy
Nelson Maldonado-Torres et al see the decolonial turn in philosophy "as a form of liberating and decolonising reason beyond the liberal and Enlightened emancipation of rationality, and beyond the more radical Euro-critiques that have failed to consistently challenge the legacies of Eurocentrism and white male heteronormativity (often Euro-centric critiques of Eurocentrism)". According to Sajjad H. Rizvi, the shift toward global philosophy may herald a radical departure from colonial epistemology and pave the way for the decolonization of knowledge, particularly in the study of the humanities. In opposition to what is said to have been the standard method in philosophy studies, he argues against focusing solely on Western philosophers. Rizvi makes the case for the inclusion of Islamic philosophy in the discussion because he thinks it will aid in the process of decolonization and may eventually replace the Eurocentric education of philosophy with an expansive "pedagogy of living and being". Philip Higgs argues for the inclusion of African philosophy in the context of decolonization. Similar suggestions have been made for Indian philosophy and Chinese philosophy. Maldonado-Torres et al discuss issues in the philosophy of race and gender as well as Asian philosophy and Latin American philosophy as instances of the decolonial turn and decolonizing philosophy, contending that "Asia and Latin America are not presented here as the continental others of Europe but as constructed categories and projects that themselves need to be decolonized".
Psychology
According to many influential colonial and postcolonial leaders and thinkers, decolonization was "essentially a psychological project" involving a "recovery of self" and "an attempt to reframe the damaging colonial discourses of selfhood". According to the decolonial perspective, Eurocentric psychology, which is based on a specific history and culture, places a strong emphasis on "experimental positivist methods, languages, symbols, and stories". A decolonizing approach in psychology thus seeks to show how colonialism, Orientalism, and Eurocentric presumptions are still deeply ingrained in modern psychological science as well as psychological theories of culture, identity, and human development. Decolonizing psychology entails comprehending and capturing the history of colonization as well as its perceived effects on families, nations, nationalism, institutions, and knowledge production. It seeks to extend the bounds of cultural horizons, which should serve as a gateway "to new confrontations and new knowledge". Decolonial turn in psychology entails upending the conventional research methodology by creating spaces for indigenous knowledge, oral histories, art, community knowledge, and lived experiences as legitimate forms of knowledge. Samuel Bendeck Sotillos seeks to break free from the alleged limits of modern psychology, which he claims is dominated by the precepts of modern science and which only addresses a very "restricted portion of human individuality". He instead wants to revive the traditional view of the human being as consisting of a spirit, a soul, and a body.
Sociology
Decolonial scholars argue that sociological study is now dominated by the viewpoints of academics in the Global North and empirical studies that are concentrated on these countries. This leads to sociological theories that portray the Global North as "normal" or "modern," while anything outside of it is assumed to be either "deviant" or "yet to be modernized." Such theories are said to undermine the concerns of the Global South despite the fact that they make up around 84% of the world population. They place a strong emphasis on taking into account the problems, perspectives, and way of life of those in the Global South who are typically left out of sociological research and theory-building; thus, decolonization in this sense refers to making non-Western social realities more relevant to academic debate.
Religious studies
According to the decolonial perspective, the study of religion is one of many humanities disciplines that has its roots in European colonialism. Because of this, the issues it covers, the concepts it reinforces, and even the settings in which it is taught at academic institutions all exhibit colonial characteristics. According to Malory Nye, in order to decolonize the study of religion, one must be methodologically cognizant of the historical and intellectual legacies of colonialism in the field, as well as fundamental presuppositions about the subject matter, including the conception of religion and world religions. For Adriaan van Klinken, a decolonial turn in the study of religions embraces reflexivity, is interactive, and challenges "the taken-for-granted Western frameworks of analysis and scholarly practice." It must accept "the pluriversality of ways of knowing and being" in the world. The interpretation of the Quran in the Euro-American academic community has been cited as one such example, where "the phenomenon of revelation (Wahy)" as it is understood in Islam is very often negated, disregarded, or regarded as unimportant to comprehending the scripture. According to Joseph Lumbard, Euro-American analytical modes have permeated Quranic studies and have a lasting impact on all facets of the discipline. He argues for more inclusive approaches that take into account different forms of analysis and make use of analytical tools from the classical Islamic tradition.
Legal studies
Aitor Jiménez González argues that the "generalized use of the term “law” or “Law” masks the fact that the concept we are using is not a universal category but a highly provincial one premised on the westernized legal cosmovision". According to him, it was not the "peaceful spread of a superior science" that ultimately led to the universal adoption of the western notion of law. Rather, it "was the result of centuries of colonialism, violent repression against other legal cosmovisions during the colonial periods and the persistence of the process referred to as coloniality". The decolonial stance on law facilitates dialogue between various understandings and epistemic perspectives on law in the first place, challenging the perceived hegemony of the westernized legal paradigm. It is a strategy for transforming a legal culture that historically was based on a hegemonic or Eurocentric understanding of the law into one that is more inclusive. It highlights the need for a fresh historical perspective that emphasizes diversity over homogeneity and casts doubt on the notion that the state is the "main organizer of legal and juridical life".
According to Asikia Karibi-Whyte, decolonization goes beyond inclusion in that it aims to dismantle the notions and viewpoints that undervalue the "other" in legal discourse. This point of view maintains that a society's values form the foundation of legal knowledge and argues for prioritizing those values when debating specific legal issues. This is because legal norms in former colonies bear the imprint of colonialism and values of colonial societies. For example, English Common Law predominates in former British colonies throughout Africa and Asia, whereas the Civil Law system is used in many former French colonies that mirrors the values of French society. In this context, decolonization of law calls "for the critical inclusion of epistemologies, ways of knowing, lived experiences, texts and scholarly works" that colonialism forced out of legal discourses.
Inclusive research
Shift in research methodology
According to Mpoe Johannah Keikelame and Leslie Swartz, "decolonising research methodology is an approach that is used to challenge the Eurocentric research methods that undermine the local knowledge and experiences of the marginalised population groups". Even though there is no set paradigm or practice for decolonizing research methodology, Thambinathan and Kinsella offer four methods that qualitative researchers might use. These four methods include engaging in transformative praxis, practicing critical reflexivity, employing reciprocity and respect for self-determination, as well as accepting "Other(ed)" ways of knowing. For Sabelo Ndlovu Gatsheni, decolonizing methodology involves "unmasking its role and purpose in research". It must transform the identity of research objects into questioners, critics, theorists, knowers, and communicators. In addition, research must be redirected to concentrate on what Europe has done to humanity and the environment rather than imitating Europe as a role model for the rest of the world.
Data decolonization
Criticism
According to Piet Naudé, decolonization's efforts to create new epistemic models with laws of validation distinct from those developed in the Western knowledge system have not yet produced the desired outcome. The present "scholarly decolonial turn" has been criticised on the ground that it is divorced from the daily struggles of people living in historically colonized places. Robtel Neajai Pailey says that 21st-century epistemic decolonization will fail unless it is connected to and welcoming of the ongoing liberation movements against inequality, racism, austerity, imperialism, autocracy, sexism, xenophobia, environmental damage, militarization, impunity, corruption, media surveillance, and land theft, because epistemic decolonization "cannot happen in a political vacuum".
"Decolonization", both as a theoretical and practical tendency, has recently faced increasing critique. For example, Olúfẹ́mi Táíwò argued that it is analytically unsound, conflating "coloniality" with "modernity", leading it to become an impossible political project. He further argued that it risks denying the formerly colonized countries agency, in not recognizing that people often consciously accept and adapt elements of different origins, including colonial ones. Jonatan Kurzwelly and Malin Wilckens used the example of decolonisation of academic collections of human remains - originally used to further racist science and legitimize colonial oppression - to show how both contemporary scholarly methods and political practice perpetuate reified and essentialist notions of identities.
See also
Decolonization of higher education in South Africa
Decolonization of museums
Decolonising the Mind
Decolonization of public space
Decolonizing outer space
Universal Declaration of Human Rights
Notes
References
Further reading
Decolonization
Social epistemology
Postcolonialism
| 0.772528 | 0.980631 | 0.757565 |
Scenario planning
|
Scenario planning, scenario thinking, scenario analysis, scenario prediction and the scenario method all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence.
In the most common application of the method, analysts generate simulation games for policy makers. The method combines known facts, such as demographics, geography and mineral reserves, with military, political, and industrial information, and key driving forces identified by considering social, technical, economic, environmental, and political ("STEEP") trends.
In business applications, the emphasis on understanding the behavior of opponents has been reduced while more attention is now paid to changes in the natural environment. At Royal Dutch Shell for example, scenario planning has been described as changing mindsets about the exogenous part of the world prior to formulating specific strategies.
Scenario planning may involve aspects of systems thinking, specifically the recognition that many factors may combine in complex ways to create sometimes surprising futures (due to non-linear feedback loops). The method also allows the inclusion of factors that are difficult to formalize, such as novel insights about the future, deep shifts in values, and unprecedented regulations or inventions. Systems thinking used in conjunction with scenario planning leads to plausible scenario storylines because the causal relationship between factors can be demonstrated. These cases, in which scenario planning is integrated with a systems thinking approach to scenario development, are sometimes referred to as "dynamic scenarios".
Critics of using a subjective and heuristic methodology to deal with uncertainty and complexity argue that the technique has not been examined rigorously, nor influenced sufficiently by scientific evidence. They caution against using such methods to "predict" based on what can be described as arbitrary themes and "forecasting techniques".
A challenge and a strength of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". As a consequence, societal predictions can become self-defeating. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue.
Principle
Crafting scenarios
Combinations and permutations of fact and related social changes are called "scenarios". Scenarios usually include plausible, but unexpectedly important, situations and problems that exist in some nascent form in the present day. Any particular scenario is unlikely. However, futures studies analysts select scenario features so they are both possible and uncomfortable. Scenario planning helps policy-makers and firms anticipate change, prepare responses, and create more robust strategies.
Scenario planning helps a firm anticipate the impact of different scenarios and identify weaknesses. When anticipated years in advance, those weaknesses can be avoided or their impacts reduced more effectively than when similar real-life problems are considered under the duress of an emergency. For example, a company may discover that it needs to change contractual terms to protect against a new class of risks, or collect cash reserves to purchase anticipated technologies or equipment. Flexible business continuity plans with "PREsponse protocols" can help cope with similar operational problems and deliver measurable future value.
Zero-sum game scenarios
Strategic military intelligence organizations also construct scenarios. The methods and organizations are almost identical, except that scenario planning is applied to a wider variety of problems than merely military and political problems.
As in military intelligence, the chief challenge of scenario planning is to find out the real needs of policy-makers, when policy-makers may not themselves know what they need to know, or may not know how to describe the information that they really want.
Good analysts design wargames so that policy makers have great flexibility and freedom to adapt their simulated organisations. Then these simulated organizations are "stressed" by the scenarios as a game plays out. Usually, particular groups of facts become more clearly important. These insights enable intelligence organizations to refine and repackage real information more precisely to better serve the policy-makers' real-life needs. Usually the games' simulated time runs hundreds of times faster than real life, so policy-makers experience several years of policy decisions, and their simulated effects, in less than a day.
The chief value of scenario planning is that it allows policy-makers to make and learn from mistakes without risking career-limiting failures in real life. Further, policymakers can make these mistakes in a safe, unthreatening, game-like environment, while responding to a wide variety of concretely presented situations based on facts. This is an opportunity to "rehearse the future", an opportunity that does not present itself in day-to-day operations where every action and decision counts.
How military scenario planning or scenario thinking is done
Decide on the key question to be answered by the analysis. By doing this, it is possible to assess whether scenario planning is preferred over the other methods. If the question is based on small changes or a very small number of elements, other more formalized methods may be more useful.
Set the time and scope of the analysis. Take into consideration how quickly changes have happened in the past, and try to assess to what degree it is possible to predict common trends in demographics and product life cycles. A usual time frame is five to ten years.
Identify major stakeholders. Decide who will be affected and have an interest in the possible outcomes. Identify their current interests, whether and why these interests have changed over time in the past.
Map basic trends and driving forces. This includes industry, economic, political, technological, legal, and societal trends. Assess to what degree these trends will affect your research question. Describe each trend, how and why it will affect the organisation. In this step of the process, brainstorming is commonly used, where all trends that can be thought of are presented before they are assessed, to capture possible group thinking and tunnel vision.
Find key uncertainties. Map the driving forces on two axes, assessing each force on an uncertain/(relatively) predictable and important/unimportant scale. All driving forces that are considered unimportant are discarded. Important driving forces that are relatively predictable (e.g. demographics) can be included in any scenario, so the scenarios should not be based on these. This leaves you with a number of important and unpredictable driving forces. At this point, it is also useful to assess whether any linkages between driving forces exist, and rule out any "impossible" scenarios (e.g. full employment and zero inflation).
Check whether the linked forces can be grouped and, if possible, reduce the forces to the two most important (to allow the scenarios to be presented in a neat xy-diagram).
Identify the extremes of the possible outcomes of the two driving forces and check the dimensions for consistency and plausibility. Three key points should be assessed:
Time frame: are the trends compatible within the time frame in question?
Internal consistency: do the forces describe uncertainties that can construct probable scenarios?
Versus the stakeholders: are any stakeholders currently in disequilibrium compared to their preferred situation, and will this evolve the scenario? Is it possible to create probable scenarios when considering the stakeholders? This is most important when creating macro-scenarios where governments, large organisations et al. will try to influence the outcome.
Define the scenarios, plotting them on a grid if possible (a brief illustrative sketch of this grid construction follows the list). Usually, two to four scenarios are constructed. The current situation does not need to be in the middle of the diagram (inflation may already be low), and possible scenarios may keep one (or more) of the forces relatively constant, especially if using three or more driving forces. One approach can be to place all positive elements in one scenario and all negative elements (relative to the current situation) in another, then refine these. In the end, try to avoid pure best-case and worst-case scenarios.
Write out the scenarios. Narrate what has happened and what the reasons can be for the proposed situation. Try to include good reasons why the changes have occurred as this helps the further analysis. Finally, give each scenario a descriptive (and catchy) name to ease later reference.
Assess the scenarios. Are they relevant for the goal? Are they internally consistent? Are they archetypical? Do they represent relatively stable outcome situations?
Identify research needs. Based on the scenarios, assess where more information is needed. Where needed, obtain more information on the motivations of stakeholders, possible innovations that may occur in the industry and so on.
Develop quantitative methods. If possible, develop models to help quantify consequences of the various scenarios, such as growth rate, cash flow, etc. This step does, of course, require a significant amount of work compared to the others, and may be left out in back-of-the-envelope analyses.
Converge towards decision scenarios. Retrace the steps above in an iterative process until you reach scenarios which address the fundamental issues facing the organization. Try to assess upsides and downsides of the possible scenarios.
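As a rough illustration of the steps on finding key uncertainties and plotting scenarios on a grid, the short Python sketch below crosses the extreme outcomes of two driving forces to produce the four cells of a 2x2 scenario matrix and attaches a working name to each. The forces, outcomes and names are invented for illustration only and are not part of any standard scenario-planning toolkit.

from itertools import product

# Two key uncertainties, each reduced to its two extreme outcomes.
# The driving forces and their labels are hypothetical examples.
driving_forces = {
    "energy price": ["low", "high"],
    "regulation": ["light-touch", "strict"],
}

# Cross the extremes to obtain the four cells of the 2x2 grid.
cells = list(product(*driving_forces.values()))

# Attach a short working name to each scenario before the narratives
# are written out; the names are placeholders.
names = ["Cheap and Free", "Cheap but Tight", "Costly and Free", "Costly and Tight"]

for name, outcomes in zip(names, cells):
    description = ", ".join(
        f"{force}: {outcome}" for force, outcome in zip(driving_forces, outcomes)
    )
    print(f"{name} -> {description}")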
Use by managers
The basic concepts of the process are relatively simple. In terms of the overall approach to forecasting, they can be divided into three main groups of activities (which are, generally speaking, common to all long range forecasting processes):
Environmental analysis
Scenario planning
Corporate strategy
The first of these groups quite simply comprises the normal environmental analysis. This is almost exactly the same as that which should be undertaken as the first stage of any serious long-range planning. However, the quality of this analysis is especially important in the context of scenario planning.
The central part represents the specific techniques – covered here – which differentiate the scenario forecasting process from the others in long-range planning.
The final group represents all the subsequent processes which go towards producing the corporate strategy and plans. Again, the requirements are slightly different but in general they follow all the rules of sound long-range planning.
Applications
Business
In the past, strategic plans have often considered only the "official future", which was usually a straight-line graph of current trends carried into the future. Often the trend lines were generated by the accounting department, and lacked discussions of demographics, or qualitative differences in social conditions.
These simplistic guesses are surprisingly good most of the time, but fail to consider qualitative social changes that can affect a business or government. Paul J. H. Schoemaker offered a strong managerial case for the use of scenario planning in business, and his work has had wide impact.
The approach may have had more impact outside Shell than within, as many other firms and consultancies started to benefit from scenario planning as well. Scenario planning is as much art as science, and prone to a variety of traps (both in process and content), as enumerated by Paul J. H. Schoemaker. More recently, scenario planning has been discussed as a tool to improve strategic agility by cognitively preparing not only multiple scenarios but also multiple consistent strategies.
Military
Scenario planning is also extremely popular with military planners. Most states' departments of war maintain a continuously updated series of strategic plans to cope with well-known military or strategic problems. These plans are almost always based on scenarios, and often the plans and scenarios are kept up-to-date by war games, sometimes played out with real troops. This process was first carried out by (and the method arguably invented by) the Prussian general staff of the mid-19th century.
Finance
In economics and finance, a financial institution might use scenario analysis to forecast several possible scenarios for the economy (e.g. rapid growth, moderate growth, slow growth) and for financial returns (for bonds, stocks, cash, etc.) in each of those scenarios. It might consider sub-sets of each of the possibilities. It might further seek to determine correlations and assign probabilities to the scenarios (and sub-sets if any). Then it will be in a position to consider how to distribute assets between asset types (i.e. asset allocation); the institution can also calculate the scenario-weighted expected return (which figure will indicate the overall attractiveness of the financial environment). It may also perform stress testing, using adverse scenarios.
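A minimal sketch of the calculation described above, with hypothetical scenarios, probabilities, asset returns and allocation (none of the figures come from the text), is given below; it computes the scenario-weighted expected return per asset and for the portfolio as a whole.

```python
# Hypothetical scenarios with assumed probabilities and per-asset returns.
scenarios = {
    "rapid growth":    {"p": 0.25, "returns": {"stocks": 0.12,  "bonds": 0.02, "cash": 0.03}},
    "moderate growth": {"p": 0.50, "returns": {"stocks": 0.07,  "bonds": 0.03, "cash": 0.03}},
    "slow growth":     {"p": 0.25, "returns": {"stocks": -0.02, "bonds": 0.05, "cash": 0.03}},
}
allocation = {"stocks": 0.6, "bonds": 0.3, "cash": 0.1}   # assumed asset allocation

# Scenario-weighted expected return for each asset class.
expected_by_asset = {
    asset: sum(s["p"] * s["returns"][asset] for s in scenarios.values())
    for asset in allocation
}

# Expected return of the whole portfolio under the chosen allocation.
portfolio_return = sum(weight * expected_by_asset[asset]
                       for asset, weight in allocation.items())

print(expected_by_asset)
print(f"scenario-weighted portfolio return: {portfolio_return:.3%}")
```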
Depending on the complexity of the problem, scenario analysis can be a demanding exercise. It can be difficult to foresee what the future holds (e.g. the actual future outcome may be entirely unexpected), i.e. to foresee what the scenarios are, and to assign probabilities to them; and this is true of the general forecasts never mind the implied financial market returns. The outcomes can be modeled mathematically/statistically e.g. taking account of possible variability within single scenarios as well as possible relationships between scenarios. In general, one should take care when assigning probabilities to different scenarios as this could invite a tendency to consider only the scenario with the highest probability.
Geopolitics
In politics or geopolitics, scenario analysis involves reflecting on the possible alternative paths of a social or political environment and possibly diplomatic and war risks.
History of use by academic and commercial organizations
Most authors attribute the introduction of scenario planning to Herman Kahn through his work for the US Military in the 1950s at the RAND Corporation where he developed a technique of describing the future in stories as if written by people in the future. He adopted the term "scenarios" to describe these stories. In 1961 he founded the Hudson Institute where he expanded his scenario work to social forecasting and public policy. One of his most controversial uses of scenarios was to suggest that a nuclear war could be won. Though Kahn is often cited as the father of scenario planning, at the same time Kahn was developing his methods at RAND, Gaston Berger was developing similar methods at the Centre d’Etudes Prospectives which he founded in France. His method, which he named 'La Prospective', was to develop normative scenarios of the future which were to be used as a guide in formulating public policy. During the mid-1960s various authors from the French and American institutions began to publish scenario planning concepts such as 'La Prospective' by Berger in 1964 and 'The Next Thirty-Three Years' by Kahn and Wiener in 1967. By the 1970s scenario planning was in full swing with a number of institutions now established to provide support to business including the Hudson Foundation, the Stanford Research Institute (now SRI International), and the SEMA Metra Consulting Group in France. Several large companies also began to embrace scenario planning including DHL Express, Royal Dutch Shell and General Electric.
Possibly as a result of these very sophisticated approaches, and of the difficult techniques they employed (which usually demanded the resources of a central planning staff), scenarios earned a reputation for difficulty (and cost) in use. Even so, the theoretical importance of the use of alternative scenarios, to help address the uncertainty implicit in long-range forecasts, was dramatically underlined by the widespread confusion which followed the Oil Shock of 1973. As a result, many of the larger organizations started to use the technique in one form or another. By 1983 Diffenbach reported that 'alternate scenarios' were the third most popular technique for long-range forecasting – used by 68% of the large organizations he surveyed.
Practical development of scenario forecasting, to guide strategy rather than for the more limited academic uses which had previously been the case, was started by Pierre Wack in 1971 at the Royal Dutch Shell group of companies – and it, too, was given impetus by the Oil Shock two years later. Shell has, since that time, led the commercial world in the use of scenarios – and in the development of more practical techniques to support these. Indeed, as – in common with most forms of long-range forecasting – the use of scenarios has (during the depressed trading conditions of the last decade) reduced to only a handful of private-sector organisations, Shell remains almost alone amongst them in keeping the technique at the forefront of forecasting.
There has only been anecdotal evidence offered in support of the value of scenarios, even as aids to forecasting; and most of this has come from one company – Shell. In addition, with so few organisations making consistent use of them – and with the timescales involved reaching into decades – it is unlikely that any definitive supporting evidence will be forthcoming in the foreseeable future. For the same reasons, though, a lack of such proof applies to almost all long-range planning techniques. In the absence of proof, but taking account of Shell's well-documented experiences of using it over several decades (where, in the 1990s, its then CEO ascribed its success to its use of such scenarios), there may be significant benefit to be obtained from extending the horizons of managers' long-range forecasting in the way that the use of scenarios uniquely does.
Process
The part of the overall process which is radically different from most other forms of long-range planning is the central section, the actual production of the scenarios. Even this, though, is relatively simple, at its most basic level. As derived from the approach most commonly used by Shell, it follows six steps:
Decide drivers for change/assumptions
Bring drivers together into a viable framework
Produce 7–9 initial mini-scenarios
Reduce to 2–3 scenarios
Draft the scenarios
Identify the issues arising
Step 1 – decide assumptions/drivers for change
The first stage is to examine the results of environmental analysis to determine which are the most important factors that will decide the nature of the future environment within which the organisation operates. These factors are sometimes called 'variables' (because they will vary over the time being investigated, though the terminology may confuse scientists who use it in a more rigorous manner). Users tend to prefer the term 'drivers' (for change), since this terminology is not laden with quasi-scientific connotations and reinforces the participant's commitment to search for those forces which will act to change the future. Whatever the nomenclature, the main requirement is that these will be informed assumptions.
This is partly a process of analysis, needed to recognise what these 'forces' might be. However, it is likely that some work on this element will already have taken place during the preceding environmental analysis. By the time the formal scenario planning stage has been reached, the participants may have already decided – probably in their sub-conscious rather than formally – what the main forces are.
In the ideal approach, the first stage should be to carefully decide the overall assumptions on which the scenarios will be based. Only then, as a second stage, should the various drivers be specifically defined. Participants, though, seem to have problems in separating these stages.
Perhaps the most difficult aspect, though, is freeing the participants from the preconceptions they take into the process with them. In particular, most participants will want to look at the medium term, five to ten years ahead, rather than the required longer term of ten or more years ahead. However, a time horizon of anything less than ten years often leads participants to extrapolate from present trends, rather than consider the alternatives which might face them. When, however, they are asked to consider timescales in excess of ten years they almost all seem to accept the logic of the scenario planning process, and no longer fall back on that of extrapolation. There is a similar problem with expanding participants' horizons to include the whole external environment.
Brainstorming
In any case, the brainstorming which should then take place, to ensure that the list is complete, may unearth more variables – and, in particular, the combination of factors may suggest yet others.
A very simple technique, which is especially useful at this brainstorming stage and for handling scenario planning debates in general, derives from practice at Shell, where this type of approach is often used. An especially easy approach, it requires only a conference room with a bare wall and copious supplies of 3M Post-It Notes.
The six to ten people ideally taking part in such face-to-face debates should be in a conference room environment which is isolated from outside interruptions. The only special requirement is that the conference room has at least one clear wall on which Post-It notes will stick. At the start of the meeting itself, any topics which have already been identified during the environmental analysis stage are written (preferably with a thick magic marker, so they can be read from a distance) on separate Post-It Notes. These Post-It Notes are then, at least in theory, randomly placed on the wall. In practice, even at this early stage the participants will want to cluster them in groups which seem to make sense. The only requirement (which is why Post-It Notes are ideal for this approach) is that there is no bar to taking them off again and moving them to a new cluster.
A similar technique – using 5" by 3" index cards – has also been described (as the 'Snowball Technique'), by Backoff and Nutt, for grouping and evaluating ideas in general.
As in any form of brainstorming, the initial ideas almost invariably stimulate others. Indeed, everyone should be encouraged to add their own Post-It Notes to those on the wall. However it differs from the 'rigorous' form described in 'creative thinking' texts, in that it is much slower paced and the ideas are discussed immediately. In practice, as many ideas may be removed, as not being relevant, as are added. Even so, it follows many of the same rules as normal brainstorming and typically lasts the same length of time – say, an hour or so only.
It is important that all the participants feel they 'own' the wall – and are encouraged to move the notes around themselves. The result is a very powerful form of creative decision-making for groups, which is applicable to a wide range of situations (but is especially powerful in the context of scenario planning). It also offers a very good introduction for those who are coming to the scenario process for the first time. Since the workings are largely self-evident, participants very quickly come to understand exactly what is involved.
Important and uncertain
This step is, though, also one of selection – since only the most important factors will justify a place in the scenarios. The 80:20 Rule here means that, at the end of the process, management's attention must be focused on a limited number of most important issues. Experience has proved that offering a wider range of topics merely allows them to select those few which interest them, and not necessarily those which are most important to the organisation.
In addition, as scenarios are a technique for presenting alternative futures, the factors to be included must be genuinely 'variable'. They should be subject to significant alternative outcomes. Factors whose outcome is predictable, but important, should be spelled out in the introduction to the scenarios (since they cannot be ignored). The Important Uncertainties Matrix, as reported by Kees van der Heijden of Shell, is a useful check at this stage.
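The following Python sketch illustrates the kind of screen such a matrix provides: each driver is rated (judgementally) for importance and uncertainty, and only the important-and-uncertain ones are carried forward as scenario drivers. The drivers and scores are invented for illustration and are not taken from van der Heijden.

```python
# Hypothetical drivers rated 1-5 for importance and uncertainty (judgemental scores).
drivers = {
    "regulatory change":   (5, 4),
    "raw material prices": (4, 5),
    "demographic trend":   (5, 1),   # important but predictable
    "office relocation":   (1, 2),   # neither important nor very uncertain
}

for name, (importance, uncertainty) in drivers.items():
    if importance >= 4 and uncertainty >= 4:
        role = "scenario driver (important and uncertain)"
    elif importance >= 4:
        role = "predetermined element - state it in the scenario introductions"
    else:
        role = "drop, or park on the 'audit trail' wall"
    print(f"{name:21s}: {role}")
```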
At this point it is also worth pointing out that a great virtue of scenarios is that they can accommodate the input from any other form of forecasting. They may use figures, diagrams or words in any combination. No other form of forecasting offers this flexibility.
Step 2 – bring drivers together into a viable framework
The next step is to link these drivers together to provide a meaningful framework. This may be obvious, where some of the factors are clearly related to each other in one way or another. For instance, a technological factor may lead to market changes, but may be constrained by legislative factors. On the other hand, some of the 'links' (or at least the 'groupings') may need to be artificial at this stage. At a later stage more meaningful links may be found, or the factors may then be rejected from the scenarios. In the most theoretical approaches to the subject, probabilities are attached to the event strings. This is difficult to achieve, however, and generally adds little – except complexity – to the outcomes.
This is probably the most (conceptually) difficult step. It is where managers' 'intuition' – their ability to make sense of complex patterns of 'soft' data which more rigorous analysis would be unable to handle – plays an important role. There are, however, a range of techniques which can help; and again the Post-It-Notes approach is especially useful:
Thus, the participants try to arrange the drivers, which have emerged from the first stage, into groups which seem to make sense to them. Initially there may be many small groups. The intention should, therefore, be to gradually merge these (often having to reform them from new combinations of drivers to make these bigger groups work). The aim of this stage is eventually to make 6–8 larger groupings, or 'mini-scenarios'. Here the Post-It Notes may be moved dozens of times over the length – perhaps several hours or more – of each meeting. While this process is taking place the participants will probably want to add new topics – so more Post-It Notes are added to the wall. In the opposite direction, the unimportant ones are removed (possibly to be grouped, again as an 'audit trail', on another wall). More importantly, the 'certain' topics are also removed from the main area of debate – in this case they must be grouped in a clearly labelled area of the main wall.
As the clusters – the 'mini-scenarios' – emerge, the associated notes may be stuck to each other rather than individually to the wall; which makes it easier to move the clusters around (and is a considerable help during the final, demanding stage to reducing the scenarios to two or three).
The great benefit of using Post-It Notes is that there is no bar to participants changing their minds. If they want to rearrange the groups – or simply to go back (iterate) to an earlier stage – then they strip them off and put them in their new position.
Step 3 – produce initial mini-scenarios
The outcome of the previous step is usually between seven and nine logical groupings of drivers. This is usually easy to achieve. The 'natural' reason for this may be that it represents some form of limit as to what participants can visualise.
Having placed the factors in these groups, the next action is to work out, very approximately at this stage, what is the connection between them. What does each group of factors represent?
Step 4 – reduce to two or three scenarios
The main action, at this next stage, is to reduce the seven to nine mini-scenarios/groupings detected at the previous stage to two or three larger scenarios.
There is no theoretical reason for reducing to just two or three scenarios, only a practical one. It has been found that the managers who will be asked to use the final scenarios can only cope effectively with a maximum of three versions! Shell started, more than three decades ago, by building half a dozen or more scenarios – but found that the outcome was that their managers selected just one of these to concentrate on. As a result, the planners reduced the number to three, which managers could handle easily but could no longer so easily justify the selection of only one! This is the number now recommended most frequently in most of the literature.
Complementary scenarios
As used by Shell, and as favoured by a number of the academics, two scenarios should be complementary; the reason being that this helps avoid managers 'choosing' just one, 'preferred', scenario – and lapsing once more into single-track forecasting (negating the benefits of using 'alternative' scenarios to allow for alternative, uncertain futures). This is, however, a potentially difficult concept to grasp, where managers are used to looking for opposites; a good and a bad scenario, say, or an optimistic one versus a pessimistic one – and indeed this is the approach (for small businesses) advocated by Foster. In the Shell approach, the two scenarios are required to be equally likely, and between them to cover all the 'event strings'/drivers. Ideally they should not be obvious opposites, which might once again bias their acceptance by users, so the choice of 'neutral' titles is important. For example, Shell's two scenarios at the beginning of the 1990s were titled 'Sustainable World' and 'Global Mercantilism'[xv]. In practice, we found that this requirement, much to our surprise, posed few problems for the great majority, 85%, of those in the survey; who easily produced 'balanced' scenarios. The remaining 15% mainly fell into the expected trap of 'good versus bad'. We have found that our own relatively complex (OBS) scenarios can also be made complementary to each other; without any great effort needed from the teams involved; and the resulting two scenarios are both developed further by all involved, without unnecessary focusing on one or the other.
Testing
Having grouped the factors into these two scenarios, the next step is to test them, again, for viability. Do they make sense to the participants? This may be in terms of logical analysis, but it may also be in terms of intuitive 'gut-feel'. Once more, intuition often may offer a useful – if academically less respectable – vehicle for reacting to the complex and ill-defined issues typically involved. If the scenarios do not intuitively 'hang together', why not? The usual problem is that one or more of the assumptions turns out to be unrealistic in terms of how the participants see their world. If this is the case then you need to return to the first step – the whole scenario planning process is above all an iterative one (returning to its beginnings a number of times until the final outcome makes the best sense).
Step 5 – write the scenarios
The scenarios are then 'written up' in the most suitable form. The flexibility of this step often confuses participants, for they are used to forecasting processes which have a fixed format. The rule, though, is that you should produce the scenarios in the form most suitable for use by the managers who are going to base their strategy on them. Less obviously, the managers who are going to implement this strategy should also be taken into account. They will also be exposed to the scenarios, and will need to believe in these. This is essentially a 'marketing' decision, since it will be very necessary to 'sell' the final results to the users. On the other hand, a not inconsiderable consideration may be to use the form the author also finds most comfortable. If the form is alien to him or her the chances are that the resulting scenarios will carry little conviction when it comes to the 'sale'.
Most scenarios will, perhaps, be written in word form (almost as a series of alternative essays about the future), especially since they will almost inevitably be qualitative – hardly surprising, as managers and their audiences will probably use them in their day-to-day communications. Some, though, use an expanded series of lists, and some enliven their reports by adding fictional 'characters' to the material – perhaps taking literally the idea that they are stories about the future – though they are still clearly intended to be factual. On the other hand, they may include numeric data and/or diagrams – as those of Shell do (and in the process gain by the acid test of more measurable 'predictions').
Step 6 – identify issues arising
The final stage of the process is to examine these scenarios to determine what are the most critical outcomes; the 'branching points' relating to the 'issues' which will have the greatest impact (potentially generating 'crises') on the future of the organisation. The subsequent strategy will have to address these – since the normal approach to strategy deriving from scenarios is one which aims to minimise risk by being 'robust' (that is it will safely cope with all the alternative outcomes of these 'life and death' issues) rather than aiming for performance (profit) maximisation by gambling on one outcome.
Use of scenarios
Scenarios may be used in a number of ways:
a) Containers for the drivers/event strings
Most basically, they are a logical device, an artificial framework, for presenting the individual factors/topics (or coherent groups of these) so that these are made easily available for managers' use – as useful ideas about future developments in their own right – without reference to the rest of the scenario. It should be stressed that no factors should be dropped, or even given lower priority, as a result of producing the scenarios. In this context, which scenario contains which topic (driver), or issue about the future, is irrelevant.
b) Tests for consistency
At every stage it is necessary to iterate, to check that the contents are viable and make any necessary changes to ensure that they are; here the main test is to see if the scenarios seem to be internally consistent – if they are not then the writer must loop back to earlier stages to correct the problem. Though it has been mentioned previously, it is important to stress once again that scenario building is ideally an iterative process. It usually does not just happen in one meeting – though even one attempt is better than none – but takes place over a number of meetings as the participants gradually refine their ideas.
c) Positive perspectives
Perhaps the main benefit deriving from scenarios, however, comes from the alternative 'flavors' of the future their different perspectives offer. It is a common experience, when the scenarios finally emerge, for the participants to be startled by the insight they offer – as to what the general shape of the future might be – at this stage it no longer is a theoretical exercise but becomes a genuine framework (or rather set of alternative frameworks) for dealing with that.
Scenario planning compared to other techniques
The flowchart to the right provides a process for classifying a phenomenon as a scenario in the intuitive logics tradition. Scenario planning differs from contingency planning, sensitivity analysis and computer simulations.
Contingency planning is a "What if" tool, that only takes into account one uncertainty. However, scenario planning considers combinations of uncertainties in each scenario. Planners also try to select especially plausible but uncomfortable combinations of social developments.
Sensitivity analysis analyzes changes in one variable only, which is useful for simple changes, while scenario planning tries to expose policy makers to significant interactions of major variables.
While scenario planning can benefit from computer simulations, scenario planning is less formalized, and can be used to make plans for qualitative patterns that show up in a wide variety of simulated events.
During the past 5 years, computer supported Morphological Analysis has been employed as aid in scenario development by the Swedish Defence Research Agency in Stockholm. This method makes it possible to create a multi-variable morphological field which can be treated as an inference model – thus integrating scenario planning techniques with contingency analysis and sensitivity analysis.
Scenario analysis
Scenario analysis is a process of analyzing future events by considering alternative possible outcomes (sometimes called "alternative worlds"). Thus, scenario analysis, which is one of the main forms of projection, does not try to show one exact picture of the future. Instead, it presents several alternative future developments. Consequently, a scope of possible future outcomes is observable. Not only are the outcomes observable, also the development paths leading to the outcomes. In contrast to prognoses, the scenario analysis is not based on extrapolation of the past or the extension of past trends. It does not rely on historical data and does not expect past observations to remain valid in the future. Instead, it tries to consider possible developments and turning points, which may only be connected to the past. In short, several scenarios are fleshed out in a scenario analysis to show possible future outcomes. Each scenario normally combines optimistic, pessimistic, and more and less probable developments. However, all aspects of scenarios should be plausible. Although highly discussed, experience has shown that around three scenarios are most appropriate for further discussion and selection. More scenarios risks making the analysis overly complicated. Scenarios are often confused with other tools and approaches to planning. The flowchart to the right provides a process for classifying a phenomenon as a scenario in the intuitive logics tradition.
Principle
Scenario-building is designed to allow improved decision-making by allowing deep consideration of outcomes and their implications.
A scenario is a tool used during requirements analysis to describe a specific use of a proposed system. Scenarios capture the system as viewed from the outside.
Scenario analysis can also be used to illuminate "wild cards." For example, analysis of the possibility of the earth being struck by a meteor suggests that whilst the probability is low, the damage inflicted is so high that the event is much more important (threatening) than the low probability (in any one year) alone would suggest. However, this possibility is usually disregarded by organizations using scenario analysis to develop a strategic plan since it has such overarching repercussions.
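The point can be made concrete with a back-of-the-envelope expected-impact calculation; the probabilities and damage figures below are purely illustrative assumptions.

```python
# Illustrative only: a very rare but catastrophic event can have an expected
# annual loss comparable to a much more frequent, smaller one.
wild_cards = {
    "major meteor strike": {"annual_probability": 1e-7, "damage": 1e13},
    "regional flood":      {"annual_probability": 1e-2, "damage": 1e8},
}

for name, card in wild_cards.items():
    expected_annual_loss = card["annual_probability"] * card["damage"]
    print(f"{name:20s} expected annual loss = {expected_annual_loss:,.0f}")
```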
Combination of Delphi and scenarios
Scenario planning concerns planning based on the systematic examination of the future by picturing plausible and consistent images of that future. The Delphi method attempts to develop systematically expert opinion consensus concerning future developments and events. It is a judgmental forecasting procedure in the form of an anonymous, written, multi-stage survey process, where feedback of group opinion is provided after each round.
Numerous researchers have stressed that the two approaches are well suited to being combined. Due to their process similarity, the two methodologies can be easily combined. The output of the different phases of the Delphi method can be used as input for the scenario method and vice versa. A combination makes it possible to realize the benefits of both tools. In practice, usually one of the two tools is considered the dominant methodology and the other one is added on at some stage.
The variant that is most often found in practice is the integration of the Delphi method into the scenario process (see e.g. Rikkonen, 2005; von der Gracht, 2008;). Authors refer to this type as Delphi-scenario (writing), expert-based scenarios, or Delphi panel derived scenarios. Von der Gracht (2010) is a scientifically valid example of this method. Since scenario planning is “information hungry”, Delphi research can deliver valuable input for the process. There are various types of information output of Delphi that can be used as input for scenario planning. Researchers can, for example, identify relevant events or developments and, based on expert opinion, assign probabilities to them. Moreover, expert comments and arguments provide deeper insights into relationships of factors that can, in turn, be integrated into scenarios afterwards. Also, Delphi helps to identify extreme opinions and dissent among the experts. Such controversial topics are particularly suited for extreme scenarios or wildcards.
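A minimal sketch of this kind of hand-over is shown below: final-round Delphi probability estimates are aggregated, and events on which the panel remains strongly divided are flagged as candidates for extreme scenarios or wildcards. The events, estimates and dissent threshold are all hypothetical.

```python
from statistics import mean, stdev

# Hypothetical final-round probability estimates (0-1) from an anonymous expert panel.
delphi_estimates = {
    "carbon tax introduced by 2035":   [0.80, 0.70, 0.90, 0.75, 0.85],
    "fusion power commercial by 2040": [0.10, 0.70, 0.05, 0.80, 0.15],
}

DISSENT_THRESHOLD = 0.25   # assumed cut-off for "controversial" topics

for event, estimates in delphi_estimates.items():
    consensus, dissent = mean(estimates), stdev(estimates)
    tag = ("candidate for extreme scenario / wildcard"
           if dissent > DISSENT_THRESHOLD else "use as direct scenario input")
    print(f"{event:34s} p~{consensus:.2f} spread={dissent:.2f}  {tag}")
```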
In his doctoral thesis, Rikkonen (2005) examined the utilization of Delphi techniques in scenario planning and, concretely, in construction of scenarios. The author comes to the conclusion that the Delphi technique has instrumental value in providing different alternative futures and the argumentation of scenarios. It is therefore recommended to use Delphi in order to make the scenarios more profound and to create confidence in scenario planning. Further benefits lie in the simplification of the scenario writing process and the deep understanding of the interrelations between the forecast items and social factors.
Critique
While there is utility in weighting hypotheses and branching potential outcomes from them, reliance on scenario analysis without reporting some parameters of measurement accuracy (standard errors, confidence intervals of estimates, metadata, standardization and coding, weighting for non-response, error in reportage, sample design, case counts, etc.) is a poor second to traditional prediction. Especially in “complex” problems, factors and assumptions do not correlate in lockstep fashion. Once a specific sensitivity is undefined, it may call the entire study into question.
It is faulty logic to think, when arbitrating results, that a better hypothesis will render empiricism unnecessary. In this respect, scenario analysis tries to defer statistical laws (e.g., Chebyshev's inequality), because the decision rules occur outside a constrained setting. Outcomes are not permitted to “just happen”; rather, they are forced to conform to arbitrary hypotheses ex post, and therefore there is no footing on which to place expected values. In truth, there are no ex ante expected values, only hypotheses, and one is left wondering about the roles of modeling and data decision. In short, comparisons of "scenarios" with outcomes are biased by not deferring to the data; this may be convenient, but it is indefensible.
“Scenario analysis” is no substitute for complete and factual exposure of survey error in economic studies. In traditional prediction, given the data used to model the problem, with a reasoned specification and technique, an analyst can state, within a certain percentage of statistical error, the likelihood of a coefficient being within a certain numerical bound. This exactitude need not come at the expense of very disaggregated statements of hypotheses. R Software, specifically the module “WhatIf,” (in the context, see also Matchit and Zelig) has been developed for causal inference, and to evaluate counterfactuals. These programs have fairly sophisticated treatments for determining model dependence, in order to state with precision how sensitive the results are to models not based on empirical evidence.
Another challenge of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". As a consequence, societal predictions can become self-destructing. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue.
Critique of Shell's use of scenario planning
In the 1970s, many energy companies were surprised by both environmentalism and the OPEC cartel, and thereby lost billions of dollars of revenue by mis-investment. The dramatic financial effects of these changes led at least one organization, Royal Dutch Shell, to implement scenario planning. The analysts of this company publicly estimated that this planning process made their company the largest in the world. However other observers of Shell's use of scenario planning have suggested that few if any significant long-term business advantages accrued to Shell from the use of scenario methodology. Whilst the intellectual robustness of Shell's long term scenarios was seldom in doubt their actual practical use was seen as being minimal by many senior Shell executives. A Shell insider has commented "The scenario team were bright and their work was of a very high intellectual level. However neither the high level "Group scenarios" nor the country level scenarios produced with operating companies really made much difference when key decisions were being taken".
The use of scenarios was audited by Arie de Geus's team in the early 1980s, and they found that the decision-making processes following the scenarios, rather than the scenarios themselves, were the primary cause of the lack of strategic implementation. Many practitioners today spend as much time on the decision-making process as on creating the scenarios themselves.
See also
Decentralized planning (economics)
Hoshin Kanri#Hoshin planning
Futures studies
Futures techniques
Global Scenario Group
Jim Dator (Hawaii Research Center for Futures Studies)
Resilience (organizational)
Robust decision-making
Scenario (computing)
Similar terminology
Feedback loop
System dynamics (also known as Stock and flow)
System thinking
Analogous concepts
Delphi method, including Real-time Delphi
Game theory
Horizon scanning
Morphological analysis
Rational choice theory
Stress testing
Twelve leverage points
Examples
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Dynamic Analysis and Replanning Tool
Energy modeling – the process of building computer models of energy systems
Pentagon Papers
References
Additional Bibliography
D. Erasmus, The future of ICT in financial services: The Rabobank ICT scenarios (2008).
M. Godet, Scenarios and Strategic Management, Butterworths (1987).
M. Godet, From Anticipation to Action: A Handbook of Strategic Prospective. Paris: Unesco, (1993).
Adam Kahane, Solving Tough Problems: An Open Way of Talking, Listening, and Creating New Realities (2007)
H. Kahn, The Year 2000, Calman-Levy (1967).
Herbert Meyer, "Real World Intelligence", Weidenfeld & Nicolson, 1987,
National Intelligence Council (NIC), "Mapping the Global Future", 2005
M. Lindgren & H. Bandhold, Scenario planning – the link between future and strategy, Palgrave Macmillan, 2003
G. Wright & G. Cairns, Scenario thinking: practical approaches to the future, Palgrave Macmillan, 2011
A. Schuehly, F. Becker & F. Klein, Real Time Strategy: When Strategic Foresight Meets Artificial Intelligence, Emerald, 2020
A. Ruser, Sociological Quasi-Labs: The Case for Deductive Scenario Development, Current Sociology Vol63(2): 170-181, https://journals.sagepub.com/doi/pdf/10.1177/0011392114556581
Scientific journals
Foresight
Futures
Futures & Foresight Science
Journal of Futures Studies
Technological Forecasting and Social Change
External links
Wikifutures wiki; Scenario page—wiki also includes several scenarios (GFDL licensed)
ScenarioThinking.org —more than 100 scenarios developed on various global issues, on a wiki for public use
Shell Scenarios Resources—Resources on what scenarios are, Shell's new and old scenario's, explorer's guide and other scenario resources
Learn how to use Scenario Manager in Excel to do Scenario Analysis
Systems Innovation (SI) courseware
Further reading
"Learning from the Future: Competitive Foresight Scenarios", Liam Fahey and Robert M. Randall, Published by John Wiley and Sons, 1997, , Google book
"Shirt-sleeve approach to long-range plans.", Linneman, Robert E, Kennell, John D.; Harvard Business Review; Mar/Apr77, Vol. 55 Issue 2, p141
Comparative method
In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction in which the internal development of a single language is inferred by the analysis of features within that language. Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems and to confirm or to refute hypothesised relationships between languages.
The comparative method emerged in the early 19th century with the birth of Indo-European studies, then took a definite scientific approach with the works of the Neogrammarians in the late 19th–early 20th century. Key contributions were made by the Danish scholars Rasmus Rask (1787–1832) and Karl Verner (1846–1896), and the German scholar Jacob Grimm (1785–1863). The first linguist to offer reconstructed forms from a proto-language was August Schleicher (1821–1868) in his Compendium der vergleichenden Grammatik der indogermanischen Sprachen, originally published in 1861. Here is Schleicher's explanation of why he offered reconstructed forms:
In the present work an attempt is made to set forth the inferred Indo-European original language side by side with its really existent derived languages. Besides the advantages offered by such a plan, in setting immediately before the eyes of the student the final results of the investigation in a more concrete form, and thereby rendering easier his insight into the nature of particular Indo-European languages, there is, I think, another of no less importance gained by it, namely that it shows the baselessness of the assumption that the non-Indian Indo-European languages were derived from Old-Indian (Sanskrit).
Definition
Principles
The aim of the comparative method is to highlight and interpret systematic phonological and semantic correspondences between two or more attested languages. If those correspondences cannot be rationally explained as the result of linguistic universals or language contact (borrowings, areal influence, etc.), and if they are sufficiently numerous, regular, and systematic that they cannot be dismissed as chance similarities, then it must be assumed that they descend from a single parent language called the 'proto-language'.
A sequence of regular sound changes (along with their underlying sound laws) can then be postulated to explain the correspondences between the attested forms, which eventually allows for the reconstruction of a proto-language by the methodical comparison of "linguistic facts" within a generalized system of correspondences.
Relation is considered to be "established beyond a reasonable doubt" if a reconstruction of the common ancestor is feasible.
In some cases, this reconstruction can only be partial, generally because the compared languages are too scarcely attested, the temporal distance between them and their proto-language is too deep, or their internal evolution render many of the sound laws obscure to researchers. In such case, a relation is considered plausible, but uncertain.
Terminology
Descent is defined as transmission across the generations: children learn a language from the parents' generation and, after being influenced by their peers, transmit it to the next generation, and so on. For example, a continuous chain of speakers across the centuries links Vulgar Latin to all of its modern descendants.
Two languages are genetically related if they descended from the same ancestor language. For example, Italian and French both come from Latin and therefore belong to the same family, the Romance languages. Having a large component of vocabulary from a certain origin is not sufficient to establish relatedness; for example, heavy borrowing from Arabic into Persian has caused more of the vocabulary of Modern Persian to be from Arabic than from the direct ancestor of Persian, Proto-Indo-Iranian, but Persian remains a member of the Indo-Iranian family and is not considered "related" to Arabic.
However, it is possible for languages to have different degrees of relatedness. English, for example, is related to both German and Russian but is more closely related to the former than to the latter. Although all three languages share a common ancestor, Proto-Indo-European, English and German also share a more recent common ancestor, Proto-Germanic, but Russian does not. Therefore, English and German are considered to belong to a subgroup of Indo-European that Russian does not belong to, the Germanic languages.
The division of related languages into subgroups is accomplished by finding shared linguistic innovations that differentiate them from the parent language. For instance, English and German both exhibit the effects of a collection of sound changes known as Grimm's Law, which Russian was not affected by. The fact that English and German share this innovation is seen as evidence of English and German's more recent common ancestor—since the innovation actually took place within that common ancestor, before English and German diverged into separate languages. On the other hand, shared retentions from the parent language are not sufficient evidence of a sub-group. For example, German and Russian both retain from Proto-Indo-European a contrast between the dative case and the accusative case, which English has lost. However, that similarity between German and Russian is not evidence that German is more closely related to Russian than to English but means only that the innovation in question, the loss of the accusative/dative distinction, happened more recently in English than the divergence of English from German.
Origin and development
In classical antiquity, Romans were aware of the similarities between Greek and Latin, but did not study them systematically. They sometimes explained them mythologically, as the result of Rome being a Greek colony speaking a debased dialect.
Even though grammarians of Antiquity had access to other languages around them (Oscan, Umbrian, Etruscan, Gaulish, Egyptian, Parthian...), they showed little interest in comparing, studying, or just documenting them. Comparison between languages really began after classical antiquity.
Early works
In the 9th or 10th century AD, Yehuda Ibn Quraysh compared the phonology and morphology of Hebrew, Aramaic and Arabic but attributed the resemblance to the Biblical story of Babel, with Abraham, Isaac and Joseph retaining Adam's language, with other languages at various removes becoming more altered from the original Hebrew.
In publications of 1647 and 1654, Marcus Zuerius van Boxhorn first described a rigorous methodology for historical linguistic comparisons and proposed the existence of an Indo-European proto-language, which he called "Scythian", unrelated to Hebrew but ancestral to Germanic, Greek, Romance, Persian, Sanskrit, Slavic, Celtic and Baltic languages. The Scythian theory was further developed by Andreas Jäger (1686) and William Wotton (1713), who made early forays to reconstruct the primitive common language. In 1710 and 1723, Lambert ten Kate first formulated the regularity of sound laws, introducing among others the term root vowel.
Another early systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made by the Hungarian János Sajnovics in 1770, when he attempted to demonstrate the relationship between Sami and Hungarian. That work was later extended to all Finno-Ugric languages in 1799 by his countryman Samuel Gyarmathi. However, the origin of modern historical linguistics is often traced back to Sir William Jones, an English philologist living in India, who in 1786 made his famous pronouncement: "The Sanscrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists. There is a similar reason, though not quite so forcible, for supposing that both the Gothick and the Celtick, though blended with a very different idiom, had the same origin with the Sanscrit; and the old Persian might be added to the same family."
Comparative linguistics
The comparative method developed out of attempts to reconstruct the proto-language mentioned by Jones, which he did not name but subsequent linguists have labelled Proto-Indo-European (PIE). The first professional comparison between the Indo-European languages that were then known was made by the German linguist Franz Bopp in 1816. He did not attempt a reconstruction but demonstrated that Greek, Latin and Sanskrit shared a common structure and a common lexicon. In 1808, Friedrich Schlegel first stated the importance of using the eldest possible form of a language when trying to prove its relationships; in 1818, Rasmus Christian Rask developed the principle of regular sound-changes to explain his observations of similarities between individual words in the Germanic languages and their cognates in Greek and Latin. Jacob Grimm, better known for his Fairy Tales, used the comparative method in Deutsche Grammatik (published 1819–1837 in four volumes), which attempted to show the development of the Germanic languages from a common origin, which was the first systematic study of diachronic language change.
Both Rask and Grimm were unable to explain apparent exceptions to the sound laws that they had discovered. Although Hermann Grassmann explained one of the anomalies with the publication of Grassmann's law in 1862, Karl Verner made a methodological breakthrough in 1875, when he identified a pattern now known as Verner's law, the first sound-law based on comparative evidence showing that a phonological change in one phoneme could depend on other factors within the same word (such as neighbouring phonemes and the position of the accent), which are now called conditioning environments.
Neo-grammarian approach
Similar discoveries made by the Junggrammatiker (usually translated as "Neogrammarians") at the University of Leipzig in the late 19th century led them to conclude that all sound changes were ultimately regular, resulting in the famous statement by Karl Brugmann and Hermann Osthoff in 1878 that "sound laws have no exceptions". That idea is fundamental to the modern comparative method since it necessarily assumes regular correspondences between sounds in related languages and thus regular sound changes from the proto-language. The Neogrammarian hypothesis led to the application of the comparative method to reconstruct Proto-Indo-European since Indo-European was then by far the most well-studied language family. Linguists working with other families soon followed suit, and the comparative method quickly became the established method for uncovering linguistic relationships.
Application
There is no fixed set of steps to be followed in the application of the comparative method, but some steps are suggested by Lyle Campbell and Terry Crowley, who are both authors of introductory texts in historical linguistics. This abbreviated summary is based on their concepts of how to proceed.
Step 1, assemble potential cognate lists
This step involves making lists of words that are likely cognates among the languages being compared. If there is a regularly-recurring match between the phonetic structure of basic words with similar meanings, a genetic kinship can probably then be established. For example, linguists looking at the Polynesian family might come up with a list similar to the following (their actual list would be much longer):
Borrowings or false cognates can skew or obscure the correct data. For example, English taboo is like the six Polynesian forms because of borrowing from Tongan into English, not because of a genetic similarity. That problem can usually be overcome by using basic vocabulary, such as kinship terms, numbers, body parts and pronouns. Nonetheless, even basic vocabulary can be sometimes borrowed. Finnish, for example, borrowed the word for "mother", , from Proto-Germanic *aiþį̄ (compare to Gothic ). English borrowed the pronouns "they", "them", and "their(s)" from Norse. Thai and various other East Asian languages borrowed their numbers from Chinese. An extreme case is represented by Pirahã, a Muran language of South America, which has been controversially claimed to have borrowed all of its pronouns from Nheengatu.
Step 2, establish correspondence sets
The next step involves determining the regular sound-correspondences exhibited by the lists of potential cognates. For example, in the Polynesian data above, it is apparent that words that contain t in most of the languages listed have cognates in Hawaiian with k in the same position. That is visible in multiple cognate sets: the words glossed as 'one', 'three', 'man' and 'taboo' all show the relationship. The situation is called a "regular correspondence" between k in Hawaiian and t in the other Polynesian languages. Similarly, a regular correspondence can be seen between Hawaiian and Rapanui h, Tongan and Samoan f, Maori ɸ, and Rarotongan ʔ.
Mere phonetic similarity, as between English day and Latin (both with the same meaning), has no probative value. English initial d- does not regularly match Latin d-, since a large set of English and Latin non-borrowed cognates cannot be assembled such that English d repeatedly and consistently corresponds to Latin d at the beginning of a word, and whatever sporadic matches can be observed are due either to chance (as in the above example) or to borrowing (for example, Latin and English devil, both ultimately of Greek origin). However, English and Latin exhibit a regular correspondence of t- : d- (in which "A : B" means "A corresponds to B"), as in the following examples:
If there are many regular correspondence sets of this kind (the more, the better), a common origin becomes a virtual certainty, particularly if some of the correspondences are non-trivial or unusual.
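As an illustration of steps 1 and 2, the short Python sketch below tallies initial-consonant correspondences across a handful of presumed English–Latin cognates (standard textbook examples of the t- : d- correspondence, not necessarily those in the original table); real comparative work aligns whole words and all positions, but the counting principle is the same.

```python
from collections import Counter

# Presumed cognates sharing a basic meaning (standard English/Latin examples).
cognate_pairs = [
    ("two",   "duo"),
    ("ten",   "decem"),
    ("tooth", "dentem"),
    ("tame",  "domare"),
]

# Tally the correspondence between the initial segments only (a simplification).
correspondences = Counter((english[0], latin[0]) for english, latin in cognate_pairs)

for (e, l), count in correspondences.items():
    print(f"English {e}- : Latin {l}-  ({count} cognate sets)")
```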
Step 3, discover which sets are in complementary distribution
During the late 18th to late 19th century, two major developments improved the method's effectiveness.
First, it was found that many sound changes are conditioned by a specific context. For example, in both Greek and Sanskrit, an aspirated stop evolved into an unaspirated one, but only if a second aspirate occurred later in the same word; this is Grassmann's law, first described for Sanskrit by Sanskrit grammarian Pāṇini and promulgated by Hermann Grassmann in 1863.
Second, it was found that sometimes sound changes occurred in contexts that were later lost. For instance, in Sanskrit velars (k-like sounds) were replaced by palatals (ch-like sounds) whenever the following vowel was *i or *e. Subsequent to this change, all instances of *e were replaced by a. The situation could be reconstructed only because the original distribution of e and a could be recovered from the evidence of other Indo-European languages. For instance, the Latin suffix , "and", preserves the original *e vowel that caused the consonant shift in Sanskrit:
Verner's Law, discovered by Karl Verner 1875, provides a similar case: the voicing of consonants in Germanic languages underwent a change that was determined by the position of the old Indo-European accent. Following the change, the accent shifted to initial position. Verner solved the puzzle by comparing the Germanic voicing pattern with Greek and Sanskrit accent patterns.
This stage of the comparative method, therefore, involves examining the correspondence sets discovered in step 2 and seeing which of them apply only in certain contexts. If two (or more) sets apply in complementary distribution, they can be assumed to reflect a single original phoneme: "some sound changes, particularly conditioned sound changes, can result in a proto-sound being associated with more than one correspondence set".
For example, the following potential cognate list can be established for Romance languages, which descend from Latin:
They evidence two correspondence sets, k : k and k : ʃ:
Since French ʃ occurs only before a where the other languages also have a, and French k occurs elsewhere, the difference is caused by different environments (being before a conditions the change), and the sets are complementary. They can, therefore, be assumed to reflect a single proto-phoneme (in this case *k, spelled ⟨c⟩ in Latin). The original Latin words are , , and , all with an initial k. If more evidence along those lines were given, one might conclude that an alteration of the original k took place because of a different environment.
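The complementary-distribution check in this step can be sketched programmatically. In the illustration below the forms and reflexes are schematic stand-ins (assuming the familiar development of Latin k to French ʃ before a), and only the vowel after the initial consonant is treated as the conditioning environment.

```python
def environments(word_pairs):
    """Collect the vowel following the initial consonant of each Latin form."""
    return {latin[1] for latin, _reflex in word_pairs}

# (Latin form, French reflex of the initial consonant) - schematic examples.
set_k_to_k  = [("corpus", "k"), ("costa", "k")]     # Latin k- : French k-
set_k_to_sh = [("cantare", "ʃ"), ("carrus", "ʃ")]   # Latin k- : French ʃ-

env_k, env_sh = environments(set_k_to_k), environments(set_k_to_sh)

if env_k.isdisjoint(env_sh):
    print("Disjoint environments: the sets may be complementary -> one proto-phoneme *k")
else:
    print("Overlapping environments: reconstruct distinct proto-phonemes")
```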
A more complex case involves consonant clusters in Proto-Algonquian. The Algonquianist Leonard Bloomfield used the reflexes of the clusters in four of the daughter languages to reconstruct the following correspondence sets:
Although all five correspondence sets overlap with one another in various places, they are not in complementary distribution and so Bloomfield recognised that a different cluster must be reconstructed for each set. His reconstructions were, respectively, *hk, *xk, *čk (=), *šk (=), and *çk (in which x and ç are arbitrary symbols, rather than attempts to guess the phonetic value of the proto-phonemes).
Step 4, reconstruct proto-phonemes
Typology assists in deciding what reconstruction best fits the data. For example, the voicing of voiceless stops between vowels is common, but the devoicing of voiced stops in that environment is rare. If a correspondence -t- : -d- between vowels is found in two languages, the proto-phoneme is more likely to be *-t-, with a development to the voiced form in the second language. The opposite reconstruction would represent a rare type.
However, unusual sound changes occur. The Proto-Indo-European word for two, for example, is reconstructed as *dwō, which is reflected in Classical Armenian as erku. Several other cognates demonstrate a regular change *dw- → erk- in Armenian. Similarly, in Bearlake, a dialect of the Athabaskan language of Slavey, there has been a sound change of Proto-Athabaskan *ts → Bearlake . It is very unlikely that *dw- changed directly into erk- and *ts into , but they probably instead went through several intermediate steps before they arrived at the later forms. It is not phonetic similarity that matters for the comparative method but rather regular sound correspondences.
By the principle of economy, the reconstruction of a proto-phoneme should require as few sound changes as possible to arrive at the modern reflexes in the daughter languages. For example, Algonquian languages exhibit the following correspondence set:
The simplest reconstruction for this set would be either *m or *b. Both *m → b and *b → m are likely. Because m occurs in five of the languages and b in only one of them, if *b is reconstructed, it is necessary to assume five separate changes of *b → m, but if *m is reconstructed, it is necessary to assume only one change of *m → b and so *m would be most economical.
That argument assumes the languages other than Arapaho to be at least partly independent of one another. If they all formed a common subgroup, the development *b → m would have to be assumed to have occurred only once.
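The counting behind the economy argument can be made explicit with a short sketch; the language labels other than Arapaho are placeholders, and, as the caveat above notes, it treats every daughter language as an independent lineage.

```python
# Schematic reflexes of one correspondence set: five languages show m, Arapaho shows b.
# Language names other than Arapaho are placeholders.
reflexes = {"LangA": "m", "LangB": "m", "LangC": "m",
            "LangD": "m", "LangE": "m", "Arapaho": "b"}

def changes_required(proto, reflexes):
    """Count the independent sound changes needed if `proto` is reconstructed and
    every daughter language is treated as an independent lineage."""
    return sum(1 for reflex in reflexes.values() if reflex != proto)

for candidate in ("m", "b"):
    print(f"*{candidate}: {changes_required(candidate, reflexes)} change(s) required")
# *m needs one change (*m > b in Arapaho); *b needs five, so *m is more economical --
# unless, as noted above, the m-languages form a single subgroup.
```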
Step 5, examine the reconstructed system typologically
In the final step, the linguist checks to see how the proto-phonemes fit the known typological constraints. For example, a hypothetical reconstructed system might have only one voiced stop, *b, and, although it has an alveolar and a velar nasal, *n and *ŋ, no corresponding labial nasal. However, languages generally maintain symmetry in their phonemic inventories. In that case, a linguist might investigate the possibility that either what was earlier reconstructed as *b is in fact *m or that the *n and *ŋ are in fact *d and *g.
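A toy version of this check lays the reconstructed inventory out as a grid of manners and places of articulation and flags the gaps; the inventory and ASCII symbols below are simply those of the hypothetical system described above, not a reconstruction of any real proto-language.

```python
# The hypothetical inventory described above: one voiced stop *b and the nasals
# *n and *N (velar), with no labial nasal. Symbols are ASCII stand-ins.
inventory = {("stop", "labial"): "*b",
             ("nasal", "alveolar"): "*n",
             ("nasal", "velar"): "*N"}

places = ("labial", "alveolar", "velar")
manners = ("stop", "nasal")

# Lay the inventory out as a place-by-manner grid; empty cells mark gaps that a
# typological check would flag for re-examination (e.g. *b may really be *m).
for manner in manners:
    cells = [inventory.get((manner, place), "--") for place in places]
    print(manner.ljust(5), " ".join(cell.ljust(3) for cell in cells))
```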
Even a symmetrical system can be typologically suspicious. For example, the traditional Proto-Indo-European stop inventory contains three series of stops (plain voiceless, voiced, and voiced aspirated) at labial, dental, palatal, velar and labiovelar places of articulation.
An earlier voiceless aspirated row was removed on grounds of insufficient evidence. Since the mid-20th century, a number of linguists have argued that this phonology is implausible and that it is extremely unlikely for a language to have a voiced aspirated (breathy voice) series without a corresponding voiceless aspirated series.
Thomas Gamkrelidze and Vyacheslav Ivanov provided a potential solution and argued that the series that are traditionally reconstructed as plain voiced should be reconstructed as glottalized: either implosive or ejective. The plain voiceless and voiced aspirated series would thus be replaced by just voiceless and voiced, with aspiration being a non-distinctive quality of both. That example of the application of linguistic typology to linguistic reconstruction has become known as the glottalic theory. It has a large number of proponents but is not generally accepted.
The reconstruction of proto-sounds logically precedes the reconstruction of grammatical morphemes (word-forming affixes and inflectional endings), patterns of declension and conjugation and so on. The full reconstruction of an unrecorded protolanguage is an open-ended task.
Complications
The history of historical linguistics
The limitations of the comparative method were recognized by the very linguists who developed it, but it is still seen as a valuable tool. In the case of Indo-European, the method seemed to be at least a partial validation of the centuries-old search for an Ursprache, the original language. The other languages of the family were presumed to be ordered in a family tree, the tree model of the neogrammarians.
The archaeologists followed suit and attempted to find archaeological evidence of a culture or cultures that could be presumed to have spoken a proto-language, such as Vere Gordon Childe's The Aryans: a study of Indo-European origins, 1926. Childe was a philologist turned archaeologist. Those views culminated in the Siedlungsarchäologie, or "settlement archaeology", of Gustaf Kossinna, which became known as "Kossinna's Law". Kossinna asserted that cultures represent ethnic groups, including their languages, but his law was rejected after World War II. The fall of Kossinna's Law removed the temporal and spatial framework previously applied to many proto-languages. Fox concludes:
The Comparative Method as such is not, in fact, historical; it provides evidence of linguistic relationships to which we may give a historical interpretation.... [Our increased knowledge about the historical processes involved] has probably made historical linguists less prone to equate the idealizations required by the method with historical reality.... Provided we keep [the interpretation of the results and the method itself] apart, the Comparative Method can continue to be used in the reconstruction of earlier stages of languages.
Proto-languages can be verified in many historical instances, such as Latin. Although no longer treated as a law, settlement archaeology is known to be essentially valid for some cultures that straddle history and prehistory, such as the Celtic Iron Age (mainly Celtic-speaking) and the Mycenaean civilization (mainly Greek-speaking). None of those models can be, or has been, completely rejected, but none is sufficient alone.
The Neogrammarian principle
The foundation of the comparative method, and of comparative linguistics in general, is the Neogrammarians' fundamental assumption that "sound laws have no exceptions". When it was initially proposed, critics of the Neogrammarians advanced an alternative position, summarised by the maxim "each word has its own history". Several types of change do in fact alter words in irregular ways; unless they are identified, they may hide or distort sound laws and cause false perceptions of relationship.
Borrowing
All languages borrow words from other languages in various contexts. Loanwords imitate the form of the donor language, as in Finnic kuningas, from Proto-Germanic *kuningaz ('king'), with possible adaptations to the local phonology, as in Japanese sakkā, from English soccer. At first sight, borrowed words may mislead the investigator into seeing a genetic relationship, although they can more easily be identified with information on the historical stages of both the donor and receiver languages. Inherently, words that were borrowed from a common source (such as English coffee and Basque kafe, ultimately from Arabic qahwah) do share a genetic relationship, although limited to the history of this word.
Areal diffusion
Borrowing on a larger scale occurs in areal diffusion, when features are adopted by contiguous languages over a geographical area. The borrowing may be phonological, morphological or lexical. A false proto-language may be reconstructed over such an area, or what is reconstructed may in fact be a third language that served as the source of the diffused features.
Several areal features and other influences may converge to form a Sprachbund, a wider region sharing features that appear to be related but are diffusional. For instance, the Mainland Southeast Asia linguistic area, before it was recognised, suggested several false classifications of such languages as Chinese, Thai and Vietnamese.
Random mutations
Sporadic changes, such as irregular inflections, compounding and abbreviation, do not follow any laws. For example, the Spanish words palabra ('word'), peligro ('danger') and milagro ('miracle') would have been parabla, periglo, miraglo by regular sound changes from the Latin parabŏla, perīcŭlum and mīrācŭlum, but the r and l changed places by sporadic metathesis.
Analogy
Analogy is the sporadic change of a feature to be like another feature in the same or a different language. It may affect a single word or be generalized to an entire class of features, such as a verb paradigm. An example is the Russian word for nine. By regular sound changes from Proto-Slavic, it should have begun with n-, but it is in fact devjatʹ (девять), with initial d-. The initial consonant is believed to have changed under the influence of the word for "ten", desjatʹ (десять).
Gradual application
Those who study contemporary language changes, such as William Labov, acknowledge that even a systematic sound change is applied at first inconsistently, with the percentage of its occurrence in a person's speech dependent on various social factors. The sound change seems to spread gradually in a process known as lexical diffusion. While it does not invalidate the Neogrammarians' axiom that "sound laws have no exceptions", the gradual application of the very sound laws shows that they do not always apply to all lexical items at the same time. Hock notes, "While it probably is true that in the long run every word has its own history, it is not justified to conclude, as some linguists have, that therefore the Neogrammarian position on the nature of linguistic change is falsified".
Non-inherited features
The comparative method cannot recover aspects of a language that were not inherited in its daughter languages. For instance, the Latin declension pattern was lost in the Romance languages, and so it cannot be fully reconstructed by systematic comparison of them alone.
The tree model
The comparative method is used to construct a tree model (German Stammbaum) of language evolution, in which daughter languages are seen as branching from the proto-language, gradually growing more distant from it through accumulated phonological, morpho-syntactic, and lexical changes.
The presumption of a well-defined node
The tree model features nodes that are presumed to be distinct proto-languages existing independently in distinct regions during distinct historical times. The reconstruction of unattested proto-languages lends itself to that illusion since they cannot be verified, and the linguist is free to select whatever definite times and places seem best. Right from the outset of Indo-European studies, however, Thomas Young said:
It is not, however, very easy to say what the definition should be that should constitute a separate language, but it seems most natural to call those languages distinct, of which the one cannot be understood by common persons in the habit of speaking the other.... Still, however, it may remain doubtfull whether the Danes and the Swedes could not, in general, understand each other tolerably well... nor is it possible to say if the twenty ways of pronouncing the sounds, belonging to the Chinese characters, ought or ought not to be considered as so many languages or dialects.... But,... the languages so nearly allied must stand next to each other in a systematic order…
The assumption of uniformity in a proto-language, implicit in the comparative method, is problematic. Even small language communities always have differences in dialect, whether they are based on area, gender, class or other factors. The Pirahã language of Brazil is spoken by only several hundred people but has at least two different dialects, one spoken by men and one by women. Campbell points out:
It is not so much that the comparative method 'assumes' no variation; rather, it is just that there is nothing built into the comparative method which would allow it to address variation directly.... This assumption of uniformity is a reasonable idealization; it does no more damage to the understanding of the language than, say, modern reference grammars do which concentrate on a language's general structure, typically leaving out consideration of regional or social variation.
Different dialects, as they evolve into separate languages, remain in contact with and influence one another. Even after they are considered distinct, languages near one another continue to influence one another and often share grammatical, phonological, and lexical innovations. A change in one language of a family may spread to neighboring languages, and multiple waves of change are communicated like waves across language and dialect boundaries, each with its own randomly delimited range. If a language is divided into an inventory of features, each with its own time and range (isoglosses), they do not all coincide. History and prehistory may not offer a time and place for a distinct coincidence, as may be the case for Proto-Italic, for which the proto-language is only a concept. However, Hock observes:
The discovery in the late nineteenth century that isoglosses can cut across well-established linguistic boundaries at first created considerable attention and controversy. And it became fashionable to oppose a wave theory to a tree theory.... Today, however, it is quite evident that the phenomena referred to by these two terms are complementary aspects of linguistic change....
Subjectivity of the reconstruction
The reconstruction of unknown proto-languages is inherently subjective. In the Proto-Algonquian example above, the choice of *m as the parent phoneme is only likely, not certain. It is conceivable that a Proto-Algonquian language with *b in those positions split into two branches, one that preserved *b and one that changed it to *m instead, and while the first branch developed only into Arapaho, the second spread out more widely and developed into all the other Algonquian tribes. It is also possible that the nearest common ancestor of the Algonquian languages used some other sound instead, such as *p, which eventually mutated to *b in one branch and to *m in the other.
Examples of strikingly complicated and even circular developments are indeed known to have occurred (such as Proto-Indo-European *t > Pre-Proto-Germanic *þ > Proto-Germanic *ð > Proto-West-Germanic *d > Old High German t in fater > Modern German Vater), but in the absence of any evidence or other reason to postulate a more complicated development, the preference of a simpler explanation is justified by the principle of parsimony, also known as Occam's razor. Since reconstruction involves many such choices, some linguists prefer to view the reconstructed features as abstract representations of sound correspondences, rather than as objects with a historical time and place.
The existence of proto-languages and the validity of the comparative method are verifiable when the reconstruction can be matched to a known language, which may be known only as a shadow in the loanwords of another language. For example, Finnic languages such as Finnish have borrowed many words from an early stage of Germanic, and the shape of the loans matches the forms that have been reconstructed for Proto-Germanic. Finnish kuningas 'king' and kaunis 'beautiful' match the Germanic reconstructions *kuningaz and *skauniz (> German König 'king', schön 'beautiful').
Additional models
The wave model was developed in the 1870s as an alternative to the tree model to represent the historical patterns of language diversification. Both the tree-based and the wave-based representations are compatible with the comparative method.
By contrast, some approaches are incompatible with the comparative method, including the contentious technique of glottochronology and the even more controversial method of mass lexical comparison, which most historical linguists consider flawed and unreliable.
See also
Comparative linguistics
Historical linguistics
Lexicostatistics
Proto-language
Swadesh list
Notes
References
External links
Historical linguistics
Comparative linguistics
Fertility
Fertility in colloquial terms refers to the ability to have offspring. In demographic contexts, fertility refers to the actual production of offspring, rather than the physical capability to reproduce, which is termed fecundity. The fertility rate is the average number of children born to a person during their lifetime. In medicine, fertility refers to the ability to have children, and infertility refers to difficulty in reproducing naturally. In general, infertility or subfertility in humans is defined as not being able to conceive a child after one year (or longer) of unprotected sex. The antithesis of fertility is infertility, while the antithesis of fecundity is sterility.
Demography
In demographic contexts, fertility refers to the actual production of offspring, rather than the physical capability to reproduce, which is termed fecundity. While fertility can be measured, fecundity cannot be. Demographers measure the fertility rate in a variety of ways, which can be broadly divided into "period" measures and "cohort" measures. "Period" measures refer to a cross-section of the population in one year; "cohort" data, on the other hand, follow the same people over a period of decades. Both period and cohort measures are widely used.
Period measures
Crude birth rate (CBR) - the number of live births in a given year per 1,000 people alive at the middle of that year. One disadvantage of this indicator is that it is influenced by the age structure of the population.
General fertility rate (GFR) - the number of births in a year divided by the number of women aged 15–44, times 1000. It focuses on the potential mothers only, and takes the age distribution into account.
Child-Woman Ratio (CWR) - the ratio of the number of children under 5 to the number of women 15–49, times 1000. It is especially useful in historical data as it does not require counting births. This measure is actually a hybrid, because it involves deaths as well as births. (That is, because of infant mortality some of the births are not included; and because of adult mortality, some of the women who gave birth are not counted either.)
Coale's Index of Fertility - a set of indices devised by Ansley Coale, used mainly in historical research
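Each of the period measures above is a simple ratio, as the following sketch illustrates; the counts are invented and serve only to make the definitions concrete.

```python
def crude_birth_rate(live_births, midyear_population):
    """CBR: live births per 1,000 people alive at mid-year."""
    return 1000 * live_births / midyear_population

def general_fertility_rate(live_births, women_15_44):
    """GFR: births per 1,000 women aged 15-44."""
    return 1000 * live_births / women_15_44

def child_woman_ratio(children_under_5, women_15_49):
    """CWR: children under 5 per 1,000 women aged 15-49."""
    return 1000 * children_under_5 / women_15_49

# Invented illustrative figures:
print(round(crude_birth_rate(14_000, 1_000_000), 1))       # 14.0 per 1,000 population
print(round(general_fertility_rate(14_000, 220_000), 1))   # 63.6 per 1,000 women 15-44
print(round(child_woman_ratio(65_000, 250_000), 1))        # 260.0 per 1,000 women 15-49
```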
Cohort measures
Total fertility rate (TFR) - the total number of children a woman would bear during her lifetime if she were to experience the prevailing age-specific fertility rates of women. TFR equals the sum of the age-specific fertility rates (ASFRs) over all five-year age groups, multiplied by 5.
Gross Reproduction Rate (GRR) - the number of girl babies a synthetic cohort will have. It assumes that all of the baby girls will grow up and live to at least age 50.
Net Reproduction Rate (NRR) - the NRR starts with the GRR and adds the realistic assumption that some of the women will die before age 49; therefore they will not be alive to bear some of the potential babies that were counted in the GRR. NRR is always lower than GRR, but in countries where mortality is very low, almost all the baby girls grow up to be potential mothers, and the NRR is practically the same as GRR. In countries with high mortality, NRR can be as low as 70% of GRR. When NRR = 1.0, each generation of 1000 baby girls grows up and gives birth to exactly 1000 girls. When NRR is less than one, each generation is smaller than the previous one. When NRR is greater than 1 each generation is larger than the one before. NRR is a measure of the long-term future potential for growth, but it usually is different from the current population growth rate.
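A minimal sketch of how TFR, GRR and NRR relate to one another, assuming five-year age groups, a fixed share of female births and an illustrative survival schedule; every number below is invented for illustration.

```python
# Age-specific fertility rates (births per woman per year) for the seven
# five-year groups 15-19 ... 45-49 -- invented values.
asfr = [0.02, 0.10, 0.12, 0.09, 0.05, 0.02, 0.005]

# Probability that a newborn girl survives to each age group (invented values).
survival = [0.98, 0.975, 0.97, 0.965, 0.96, 0.955, 0.95]

FEMALE_SHARE = 0.488   # approximate share of births that are girls

tfr = 5 * sum(asfr)                  # total fertility rate
grr = tfr * FEMALE_SHARE             # gross reproduction rate (daughters per woman)
nrr = 5 * FEMALE_SHARE * sum(f * s for f, s in zip(asfr, survival))  # net reproduction rate

print(round(tfr, 2), round(grr, 2), round(nrr, 2))
# NRR comes out below GRR because some women die before the end of their
# reproductive years; with very low mortality the two are nearly equal.
```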
Social and economic determinants of fertility
A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have. Factors generally associated with increased fertility include religiosity, intention to have children, and maternal support. Factors generally associated with decreased fertility include wealth, education, female labor participation, urban residence, cost of housing, intelligence, increased female age and (to a lesser degree) increased male age.
The "Three-step Analysis" of the fertility process was introduced by Kingsley Davis and Judith Blake in 1956 and makes use of three proximate determinants: The economic analysis of fertility is part of household economics, a field that has grown out of the New Home Economics. Influential economic analyses of fertility include Becker (1960), Mincer (1963), and Easterlin (1969). The latter developed the Easterlin hypothesis to account for the Baby Boom.
Bongaarts' model of components of fertility
Bongaarts proposed a model in which the total fertility rate of a population can be calculated from the total fecundity (TF) and four proximate determinants: the index of marriage (Cm), the index of contraception (Cc), the index of induced abortion (Ca) and the index of postpartum infecundability (Ci). Each index ranges from 0 to 1, and the higher an index, the higher the resulting TFR; for example, a population with no induced abortion would have a Ca of 1, whereas a population in which everybody used infallible contraception would have a Cc of 0.
TFR = TF × Cm × Ci × Ca × Cc
These four indices can also be used to calculate the total marital fertility (TMFR) and the total natural fertility (TN).
TFR = TMFR × Cm
TMFR = TN × Cc × Ca
TN = TF × Ci
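These identities translate directly into code. The index values below are invented solely to show how the pieces fit together, and the total fecundity figure of roughly 15.3 births per woman, although commonly cited, should likewise be treated here as an assumption.

```python
def bongaarts_tfr(tf, cm, ci, ca, cc):
    """TFR = TF x Cm x Ci x Ca x Cc, each index lying between 0 and 1."""
    return tf * cm * ci * ca * cc

# Hypothetical inputs.
TF, Cm, Ci, Ca, Cc = 15.3, 0.70, 0.85, 0.95, 0.55

tn   = TF * Ci          # total natural fertility
tmfr = tn * Cc * Ca     # total marital fertility
tfr  = tmfr * Cm        # total fertility rate

assert abs(tfr - bongaarts_tfr(TF, Cm, Ci, Ca, Cc)) < 1e-9
print(round(tn, 2), round(tmfr, 2), round(tfr, 2))   # approximately 13.0 6.8 4.76
```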
Intercourse
The first step is sexual intercourse, and an examination of the average age at first intercourse, the average frequency outside marriage, and the average frequency within it.
Conception
Certain physical conditions may make it impossible for a woman to conceive; this is called "involuntary infecundity". If a woman has a condition that makes conception possible but unlikely, this is termed "subfecundity". Venereal diseases (especially gonorrhea, syphilis, and chlamydia) are common causes. Nutrition is a factor as well: women with less than 20% body fat may be subfecund, a concern for athletes and people susceptible to anorexia. The demographer Rose Frisch has argued that "It takes 50,000 calories to make a baby". There is also subfecundity in the weeks following childbirth, which can be prolonged for a year or more through breastfeeding; a furious political debate raged in the 1980s over the ethics of baby-food companies marketing infant formula in developing countries.
A large industry has developed to deal with subfecundity in women and men, and an equally large industry provides contraceptive devices designed to prevent conception. Their effectiveness in use varies. On average, 85% of married couples using no contraception will have a pregnancy within one year. The rate drops to around 20% when using withdrawal, vaginal sponges, or spermicides (assuming the partners never forget to use the contraceptive). It drops to only 2 or 3% with the pill or an IUD, and to near 0% for implants and 0% for tubal ligation (sterilization) of the woman or a vasectomy for the man.
Gestation
After a fetus is conceived, it may or may not survive to birth. "Involuntary fetal mortality" involves spontaneous abortion (miscarriage) and stillbirth (a fetus born dead). Deliberate human intervention to end the pregnancy is called induced abortion, traditionally termed "therapeutic abortion" when performed on medical grounds.
In medicine
In medicine, the definition of fertility is "the capacity to establish a clinical pregnancy."
Women have hormonal cycles which determine when they can achieve pregnancy. The cycle is approximately twenty-eight days long, with a fertile period of five days per cycle, but can deviate greatly from this norm. Men are fertile continuously, but their sperm quality is affected by their health, frequency of ejaculation, and environmental factors.
Fertility declines with age in both sexes. For women, the decline begins around the age of 32 and becomes precipitous at age 37. For men, potency and sperm quality begin to decline around the age of 40. Even if an older couple does manage to conceive a child, the pregnancy is likely to be more difficult for the mother and carries a higher risk of birth defects and genetic disorders for the child.
Pregnancy rates for sexual intercourse are highest when it occurs every 1 or 2 days, or every 2 or 3 days. Studies have found no significant difference between different sex positions and pregnancy rate, as long as it results in ejaculation into the vagina.
Menstrual cycle
A woman's menstrual cycle begins, by convention, with menses. Next is the follicular phase, during which estrogen levels build as an ovum matures (under the influence of follicle-stimulating hormone, or FSH) within the ovary. When estrogen levels peak, they trigger a surge of luteinizing hormone (LH), which completes maturation and enables the ovum to break through the ovary wall; this is ovulation. During the luteal phase, which follows ovulation, LH and FSH cause the ruptured follicle to develop into the corpus luteum, which produces progesterone. The production of progesterone inhibits LH and FSH, which (in a cycle without pregnancy) causes the corpus luteum to atrophy and menses to begin the cycle again.
Peak fertility occurs during just a few days of the cycle: usually two days before and two days after the ovulation date. This fertile window varies from woman to woman, just as the ovulation date often varies from cycle to cycle for the same woman. The ovum is usually capable of being fertilized for up to 48 hours after it is released from the ovary. Sperm survive inside the uterus for between 48 and 72 hours on average, with a maximum of 120 hours (5 days).
These periods and intervals are important factors for couples using the rhythm method of contraception.
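Using only the survival figures quoted above (sperm viable for up to about five days, the ovum for up to about two), a rough fertile window around an estimated ovulation date can be sketched as follows; the date is hypothetical, and real ovulation dates vary from cycle to cycle.

```python
from datetime import date, timedelta

SPERM_SURVIVAL_DAYS = 5   # maximum sperm survival cited above (120 hours)
OVUM_SURVIVAL_DAYS = 2    # approximate ovum viability cited above (up to 48 hours)

def fertile_window(ovulation_day: date) -> tuple[date, date]:
    """Window in which intercourse could in principle lead to fertilization,
    implied by the gamete-survival figures in the text."""
    return (ovulation_day - timedelta(days=SPERM_SURVIVAL_DAYS),
            ovulation_day + timedelta(days=OVUM_SURVIVAL_DAYS))

start, end = fertile_window(date(2024, 3, 14))      # hypothetical ovulation date
print(start.isoformat(), "to", end.isoformat())     # 2024-03-09 to 2024-03-16
```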
Female fertility
The average age of menarche in the United States is about 12.5 years. In postmenarchal girls, about 80% of the cycles are anovulatory (ovulation does not actually take place) in the first year after menarche, 50% in the third and 10% in the sixth year.
Menopause occurs during a woman's midlife, typically between the ages of 48 and 55. During menopause, hormonal production by the ovaries is reduced, eventually causing the permanent cessation of menstruation and of the monthly build-up of the uterine lining. This is considered the end of the fertile phase of a woman's life.
The predicted effect of age on female fertility in women trying to get pregnant, without using fertility drugs or in vitro fertilization:
At age 30
75% will conceive ending in a live birth within one year
91% will conceive ending in a live birth within four years.
At age 35
66% will conceive ending in a live birth within one year
84% will conceive ending in a live birth within four years.
At age 40
44% will conceive ending in a live birth within one year
64% will conceive ending in a live birth within four years.
Studies of couples trying to conceive have yielded better results: one 2004 study of 770 European women found that 82% of 35- to 39-year-old women conceived within a year, while a study in 2013 of 2,820 Danish women saw 78% of 35- to 40-year-olds conceive within a year.
According to an opinion by the Practice Committee of the American Society for Reproductive Medicine, specific coital timing or position, and resting supine after intercourse have no significant impact on fertility. Sperm can be found in the cervical canal seconds after ejaculation, regardless of coital position.
Successful pregnancies facilitated by fertility treatment have been documented in women as old as 67.
Male fertility
Some research suggests that older males have decreased semen volume and sperm motility and impaired sperm morphology. In studies that controlled for the female partner's age, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates of between 23% and 38%. Sperm count declines with age: men aged 50–80 years produce sperm at an average rate of 75% of that of men aged 20–50 years, and larger differences exist in the proportion of seminiferous tubules in the testes containing mature sperm:
In males 20–39 years old, 90% of the seminiferous tubules contain mature sperm.
In males 40–69 years old, 50% of the seminiferous tubules contain mature sperm.
In males 80 years old and older, 10% of the seminiferous tubules contain mature sperm.
Decline in male fertility is influenced by many factors, including lifestyle, environment and psychological factors.
Some research suggests increased risks of health problems for children of older fathers, but no clear association has been proven. A large-scale study in Israel suggested that the children of men aged 40 or older were 5.75 times more likely than children of men under 30 to have an autism spectrum disorder, controlling for year of birth, socioeconomic status, and maternal age. Increased paternal age has also been suggested to correlate with schizophrenia, but this remains unproven.
Australian researchers have found evidence to suggest obesity may cause subtle damage to sperm and prevent a healthy pregnancy. They reported fertilization was 40% less successful when the father was overweight.
The American Fertility Society recommends an age limit for sperm donors of 50 years or less, and many fertility clinics in the United Kingdom will not accept donations from men over 40 or 45 years of age.
Historical trends by country
France
The French pronatalist movement from 1919 to 1945 failed to convince French couples that they had a patriotic duty to help increase their country's birthrate, and even the government was reluctant in its support of the movement. It was only in 1938–1939 that the French government became directly and permanently involved in the pronatalist effort. Although the birthrate started to surge in late 1941, the trend was not sustained. The falling birthrate again became a major concern among demographers and government officials beginning in the 1970s. In mid-2018, a bill was introduced to give single women and lesbian couples legal access to fertility treatment, and at the beginning of 2020 the Senate approved it by 160 votes to 116, moving France closer to legalizing fertility treatment for all women regardless of sexual orientation or marital status and reducing the need for lesbian couples and single women to travel abroad to start a family.
Korea
South Korea has the lowest fertility rate in the world at 0.78. A variety of explanations have been proposed, ranging from investment in education to birth control, abortion, a decline in the marriage rate, divorce, female participation in the labor force, and the 1997 Asian financial crisis. After being legal from the 1960s to the 1980s, abortion was again made illegal in South Korea in the early 2000s in an attempt to reverse the declining fertility rate.
United States
From 1800 to 1940, fertility fell in the US. There was a marked decline in fertility in the early 1900s, associated with improved contraceptives, greater access to contraceptives and sexuality information and the "first" sexual revolution in the 1920s.
Post-WWII
After 1940 fertility suddenly started going up again, reaching a new peak in 1957. After 1960, fertility started declining rapidly. In the Baby Boom years (1946–1964), women married earlier and had their babies sooner; the number of children born to mothers after age 35 did not increase.
Sexual revolution
After 1960, new methods of contraception became available, and the ideal family size fell from three to two children. Couples postponed marriage and first births, and they sharply reduced the number of third and fourth births.
See also
Anti-natalism
Birth control
Family economics
Family planning
Fecundity
Fertility clinic
Fertility tourism
Fertility deity
Fertility preservation
Human Fertilisation and Embryology Authority
Natalism
Natural fertility
Oncofertility
Reproductive health
Sub-replacement fertility
Total fertility rate
Vasectomy
Fertility-development controversy
Fertility factor (demography)
Income and fertility
Fertility and intelligence
Further reading
Bloom, David E.; Kuhn, Michael; Prettner, Klaus (2024). "Fertility in High-Income Countries: Trends, Patterns, Determinants, and Consequences". Annual Review of Economics.
References
External links
Calder, Vanessa Brown, and Chelsea Follett (August 10, 2023). Freeing American Families: Reforms to Make Family Life Easier and More Affordable, Policy Analysis no. 955, Cato Institute, Washington, DC.
Demography
Sexuality in India
India has developed its discourse on sexuality differently based on its distinct regions with their own unique cultures. According to R.P. Bhatia, a New Delhi psychoanalyst and psychotherapist, middle-class India's "very strong repressive attitude" has made it impossible for many married couples to function well sexually, or even to function at all.
Background
The seeming contradictions of Indian attitudes towards sex (more broadly, sexuality) can best be explained through the context of history. India played a role in shaping understandings of sexuality, and it could be argued that one of the first pieces of literature to treat "Kama" as a science came from the Indian subcontinent. It may also be argued that, historically, India pioneered the use of sexual education through art forms such as sculpture, painting and literature. As in all societies, there was a difference in sexual practices in India between common people and powerful rulers, with people in power often indulging in self-gratifying lifestyles that were not representative of common moral attitudes. Moreover, there are distinct cultural differences across India's regions through the course of its history.
Ancient times
The origins of the current Indian culture can be traced back to the Indus Valley civilisation, which was contemporaneous with the ancient Egyptian and Sumerian civilisations, around 2700 BCE. During this period, the first evidence of attitudes towards sex comes from the ancient texts of Hinduism. These texts, the Rig Veda among a few others, reveal moral perspectives on sexuality, marriage and fertility prayers. The epics of ancient India, the Ramayana and Mahabharata, which may have been first composed as early as 500 BCE, had a huge effect on the culture of Asia, influencing later Chinese, Japanese, Tibetan and South East Asian culture. These texts support the view that in ancient India, sex was considered a mutual duty between a married couple, where husband and wife pleasured each other equally, but where sex was considered a private affair, at least by followers of the aforementioned Indian religions. Polygamy appears to have been permitted in ancient times, but in practice it seems to have been confined to rulers, with common people maintaining monogamous marriages. It is common in many cultures for a ruling class to practice both polyandry and polygyny as a way of preserving dynastic succession.
Nudity in art was considered acceptable in southern India, as shown by the paintings at Ajanta and the sculptures of the time. It is likely that, as in most regions with tropical climates, Indians in some areas did not need to wear clothes for warmth, and, other than for fashion, there was no practical need to cover the upper half of the body. This is supported by historical evidence, which shows that men in many parts of ancient India mostly clothed only the lower half of the body, adorning the upper body with gold, precious stones and jewellery, while women wore traditional saris made of silk and other expensive cloth as a symbol of their wealth.
As Indian civilisation developed further, with the writing of the Upanishads around 500 BCE, it was somewhere between the 1st and 6th centuries CE that the Kama Sutra, originally known as Vatsyayana Kamasutram ('Vatsyayana's Aphorisms on Love'), was written. This philosophical work on kama shastra, or the 'science of love', was intended as both an exploration of human desire, including infidelity, and a technical guide to pleasing a sexual partner within a marriage. It is not the only example of such a work in ancient India, but it is the most widely known in modern times. It was probably during this period that the text spread to ancient China, along with Buddhist scriptures, where Chinese versions were written.
It was also during the 10th to 12th centuries that some of India's most famous ancient works of art were produced, often freely depicting romantic themes and situations. Examples include the depiction of apsaras, roughly equivalent to nymphs or sirens in European and Arabic mythology, on some ancient temples. The best and most famous example of this can be seen at the Khajuraho complex in central India, built around the 9th to 12th centuries.
Colonial-era
British colonization of India marks a notable turning point for expressions and opinions of sexuality in India. Prior to the colonial era in India, sexuality as a concept had much more varied viewpoints and traditions surrounding it. Generally, there was acceptance of differing sexual orientations as well as gender identities. However, during the colonial era, there were significant changes to the notion of and expression of sexuality. These changes came as a result of both internal and external influences.
External influences came in the form of British colonial rule, with colonial authorities imposing Western values and ideas of sexuality on Indian society. This was not only because the British believed their societal standards and moral beliefs were correct and needed to be established in India, but also because doing so helped them establish control more effectively. At the time, British society was relatively conservative about sexuality: its expression was frowned upon, female sexuality was seen as particularly problematic and in need of control, and societal standards were critically focused on religious and moral ideas. In addition, there was a general view among the British that Indian society was inferior and needed to be changed to fit British standards. This paved the way for policies that criminalized practices that were not inherently sexual, such as the devadasi tradition of religious dancers, who became associated with temple prostitution at the time, or the existence of hijra communities, groups of intersex people, transgender people, or eunuchs who lived together and identified as a third gender. In addition, Section 377 of the Indian Penal Code, established in 1861 and later also applied against hijras, prohibited homosexuality, deeming it an "unnatural offense" that was "against the order of nature." The criminalization and stigmatization of such practices occurred not just because the British believed them immoral, but also because doing so made it easier for British authorities to manage and control the public. For example, criminalizing hijras made it simpler for British officers to categorize the Indian people, because classification was a key part of how Britain maintained control and governed.
However, British imposition of Victorian ideals and subsequent policies were not the only factors causing this shift in Indian sexuality. Changes in internal ideals also developed alongside British influence, creating internal factors that impacted these shifts. Most notable of these factors is regarding the concept of prostitution, and the way the term prostitute ended up being used in colonial India to describe almost all women outside of monogamous Hindu upper-caste marriages. In 1872, British authorities put out a survey in order to gain information about Indian women following the 1860 Indian Penal Code which outlawed trafficking of girls for prostitution. Through the survey, they aimed to define who prostitutes were in order to better control and manage their existence. However, responses showed that many colonial administrators—both British and Indian alike— believed basically all Indian women could be prostitutes. For example, A.H. Giles—the deputy commissioner of Calcutta's Police—argued that Indian women were more likely to partake in dangerous and illegal behavior and that as a result "the prostitute community is recruited in various ways from all classes and castes," describing the various ways women may begin engaging in prostitution such as "hereditary prostitutes [whose] mothers were prostitutes before them and they were reared into the profession from infancy" or those who "practice as prostitutes with the full knowledge and consent of their husbands...[to] drive a profitable trade." Similarly, Bengali Deputy Magistrate Bankim Chandra Chatterjee also categorized the different conditions of prostitutes, similarly claiming that while "Prostitutes in general are recruited from all classes of society and do not belong to any hereditary prostitute caste," they exist due to the sexual nature of the women themselves not being restrained.
These ideas of women and their uncontrollable sexuality that needed to be limited were in part due to the British administration's concerns that women who were not in typical monogamous upper-caste marriages were sexually deviant and therefore a threat to the order of society. However, these ideas were not solely created by the British. Upper-caste Bengali Hindu men who desired recognition as being key to the ruling of India also spread these ideas of deviant sexuality, alongside idealized concepts of Hindu women, for their own purposes. Through spreading these ideals, they hoped society would be restructured around them and that they would consequently gain authority. Female sexuality was a shared target for both these men and the British authorities to blame for various behaviors and then use to establish their control.
The colonial era and British policies had an immense impact on Indian sexuality—both legally and societally. The changes that occurred during this period have continued to impact various social movements and politics in India to this day.
A number of movements were set up by prominent citizens, such as the Brahmo Samaj in Bengal and the Prarthana Samaj in Bombay Presidency, to work for the 'reform' of Indian private and public life. While this new consciousness led to the promotion of education for women, (eventually) a rise in the age of consent, and a reluctant acceptance of remarriage for widows, it also produced a puritanical attitude to sex, even within marriage and the home.
Current issues
Conservative views of sexuality are now the norm in the modern republic of India, and South Asia in general. It is often argued that this is partly related to the effect of colonial influence, as well as to the puritanical elements of Islam in countries like Pakistan (e.g. the Islamic revivalist movements that have influenced many Muslims in Pakistan and Bangladesh). However, such views were also prevalent in the pre-colonial era, especially after the advent of Islam in India, which brought purdah as an ideal for Muslim women. Before the gradual spread of Islam, largely through the influence of Sufis, there is evidence of liberal attitudes towards sexuality and nudity in art. However, scholars debate the degree to which Islam, as a mass and varied phenomenon, was responsible for this shift.
While during the 1960s and 1970s many people in the West discovered the ancient culture of sexual liberalism in India as a source for Western free-love movements and neo-Tantric philosophy, India itself is currently the more prudish culture, embodying Victorian sensibilities that were abandoned decades ago in their country of origin.
Modern India
Modern issues that affect India, as part of the sexual revolution, have become points of argument between conservative and liberal forces, such as political parties and religious pressure groups. These issues are also matters of ethical importance in a nation where freedom and equality are guaranteed in the constitution.
Scholarship by the Indian sociologist Jyoti Puri calls attention to the social control of middle-class women's bodies in urban India and to the ways in which the politics of gender and sexuality are shaped by nationalist and transnational discourses and by the role of the nation-state. Her third book, Sexual States: Governance and the Struggle Against the Antisodomy Law in India's Present, tracks the efforts to decriminalize homosexuality in India.
On 6 September 2018, the Supreme Court unanimously ruled that Section 377 was unconstitutional insofar as it criminalised consensual sexual conduct between adults, holding that it infringed the fundamental rights of autonomy, intimacy, and identity, thus legalizing homosexuality in India.
Sexuality in popular entertainment
The entertainment industry is an important part of modern India and is expressive of Indian society in general. Historically, Indian television and film have lacked frank depictions of sex; until recently, even kissing scenes were considered taboo. On the other hand, rape scenes or scenes showing sexual assault were shown. Currently, some Indian states show soft-core sexual scenes and nudity in films, while others do not. Mainstream films are still largely tailored to the masses.
Some recent films, such as Ek Ladki Ko Dekha Toh Aisa Laga, Shubh Mangal Zyada Saavdhan and Badhaai Do, have helped bring the concepts of alternative sexualities and LGBT inclusion into popular culture.
Pornography
The distribution and production of pornography are both illegal in India; however, accessing pornography in private is not. Regardless, softcore films have been common since the late 1970s, and many directors have produced them. Magazines such as Debonair, Fantasy, Chastity, Royal Magazine, and Dafa 302 exist in India, and more than 50 million Indians are believed to view pornography on a daily basis.
Under Chapter XI, Section 67 of the Information Technology Act, the Government of India clearly considers the transmission of pornography through any electronic medium a punishable offence. The CEO of the Indian subsidiary of eBay was charged with various criminal offences for allowing the trading on the website of a CD that contained pornography.
Sex industry
While trade in sex was frowned upon in ancient India, it was tolerated and regulated so as to reduce the damage that it could do. However, the stigmatisation that has arisen in modern times has left many poor sex workers with problems of exploitation and rampant infection, including AIDS, and, worse, it has allowed a huge human-trafficking industry, like that of Eastern Europe, to take hold. Many poor, young women are kidnapped from villages and sold into sexual slavery. There have been some recent efforts to regulate the Indian sex industry.
A Supreme Court order in May 2022 recognised prostitution as a profession, ruling that sex workers have the same human rights as any other citizen of India and thus cannot be discriminated against or arrested for practising their profession.
Sexual health
Sexual dysfunction in both males and females has been reported in significant numbers in recent years. Many attribute the prevalence of sexual dysfunctions to ignorance about sexual health and to conservative attitudes toward sex. Sexual education is also an area of concern for many researchers; culture-bound sexual dysfunctions such as Dhat syndrome are rooted in erroneous ideas about human physiology, which could be dispelled by improved and easily accessible sexual education.
Studies of sexual dysfunction in India focus primarily on male sexual dysfunction. Dhat syndrome, a culture-bound psychosexual dysfunction in males is an area of study for many researchers in India. Males who experience Dhat syndrome usually come from rural areas and families with very conservative attitudes around sex. Patients with Dhat syndrome typically experience other sexual dysfunctions such as erectile dysfunction, premature ejaculation, in addition to psychiatric disorders such as depressive neurosis and anxiety neurosis. A study in 2015 showed one in five males in rural South India, and one in seven females, suffered from one or more sexual disorders. Prevalence of sexual dysfunction was two to three times higher in illiterate men than literate men in the study.
Research shows a greater prevalence of sexual dysfunction in women from higher socioeconomic classes. Women's lack of education about sex is an even greater problem in sexual health. In terms of education, knowledge about abortion is a key area for development, as unsafe abortions account for 8–9% of maternal deaths according to a bulletin from the Office of the Registrar General of India. Women's agency is also heavily considered in studies of female sexual health, along with sociocultural factors such as conservative attitudes toward sex and early marriage. Much like the men experiencing Dhat syndrome, most cases of female sexual dysfunction are concentrated in rural areas and are reinforced by the same social factors discussed for males.
See also
History of human sexuality
Hinduism and sexual orientation
Homosexuality in India
Homosexuality and Hinduism
Homosexuality and Sikhism
Kamashastra
Non-westernized concepts of male sexuality
References
Further reading
Alain Daniélou. The Complete Kama Sutra: The First Unabridged Modern Translation of the Classic Indian Text. Inner Traditions,1993 .
Doniger, Wendy. "The Mare's Trap, Nature and Culture in The Kama Sutra." Speaking Tiger, 2015. .
The Continent of Circe by Nirad C. Chaudhuri – this has a chapter devoted to the topic.
Ciotti, Manuela. "'The Bourgeois Woman and the Half-Naked One': Or the Indian Nation's Contradictions Personified." Modern Asian Studies, vol. 44, no. 4, 2010, pp. 785–815. .
Chanana, Karuna. "Hinduism and Female Sexuality: Social Control and Education of Girls in India." Sociological Bulletin, vol. 50, no. 1, Indian Sociological Society, 2001, pp. 37–63, .
Crane, Ralph, and Radhika Mohanram. "The Missionary's Position: Love and Passion in Anglo-India." In Imperialism as Diaspora: Race, Sexuality, and History in Anglo-India, NED-New edition, 1, 1., 13:83–107. Liverpool University Press, 2013. .
Rao, Vidya. "'Thumri' as Feminine Voice". Economic and Political Weekly, vol. 25, no. 17, 1990, pp. WS31–WS39. .
Tripathi, Laxminarayan. "Me Hijra, Me Laxmi." Oxford University Press, 2015. . Translated from the Marathi original by R. Raj Rao and P. G. Joshi.
Revathi, A. "The Truth About Me, A Hijra Life Story." Penguin Books, India, 2010. . Translated from Tamil by V. Geetha.
Vanita, Ruth. "love's rite, Same-Sex marriage in India and The West." Palgrave Macmillan, 2005. .
Narrain, Arvind. "Queer." Books for Change, 2004. .
Urban, Hugh B. "'From Sex To Superconsciousness': Sexuality, Tantra, and Liberation in 1970s India." Zorba the Buddha: Sex, Spirituality, and Capitalism in the Global Osho Movement, 1st ed., University of California Press, 2015, pp. 76–100, .
Parkinson, R.B.. "A Little Gay History." The British Museum Press, 2013. .
Edited by Sangari, Kumkum and Vaid, Sudesh. "Recasting Women, Essays in Colonial History." Zubaan Publishers Pvt. Ltd., 2003. .
Edited by Dasgupta, Rohit and Gokulsing, K. Moti. "Masculinity and Its Challenges in India." Mcfarland & Company, Inc., Publishers. 2041. .
George, Annie. "Embodying Identity through Heterosexual Sexuality: Newly Married Adolescent Women in India". Culture, Health & Sexuality, vol. 4, no. 2, Taylor & Francis, Ltd., 2002, pp. 207–22, .
Ahluwalia, Sanjam. "Demographic Rhetoric and Sexual Surveillance: Indian Middle-Class Advocates of Birth Control, 1877–1947". Reproductive Restraints: Birth Control in India, 1877–1947, University of Illinois Press, 2008, pp. 23–53, .
Dell, Heather S. "'Ordinary' Sex, Prostitutes, and Middle-Class Wives: Liberalization and National Identity in India". Sex in Development: Science, Sexuality, and Morality in Global Perspective, edited by Vincanne Adams and Stacy Leigh Pigg, Duke University Press, 2005, pp. 187–206, .
Pechilis, Karen. Review of Progress toward an Open Discussion of Sexuality in India and Asia, by Moni Nag, Geetanjali Misra, and Radhika Chandiramani. The Journal of Sex Research 44, no. 4 (2007): 401–4. .
Espinosa-Hernández G, Choukas-Bradley S, van de Bongardt D, Van Dulmen M. Romantic relationships and sexuality in diverse adolescent populations: Introduction to the special issue. J Adolesc. 2020 Aug;83:95–99. . Epub 2020 Aug 4. .
Govind N, Chowkhani K. Integrating concerns of gender, sexuality and marital status in the medical curriculum. Indian J Med Ethics. 2020 Apr-Jun; V(2):92–94. . .
Sharma MK, Rao GN, Benegal V, Thennarasu K, Oommen D. Use of pornography in India: Need to explore its implications. Natl Med J India. 2019 Sep-Oct; 32(5):282–284. . .
Gruskin S, Yadav V, Castellanos-Usigli A, Khizanishvili G, Kismödi E. Sexual health, sexual rights and sexual pleasure: meaningfully engaging the perfect triangle. Sex Reprod Health Matters. 2019 Dec; 27(1):1593787. . ; .
Setia MS, Brassard P, Jerajani HR, Bharat S, Gogate A, Kumta S, Row-Kavi A, Anand V, Boivin JF. Men who have sex with men in India: a systematic review of the literature. J LGBT Health Res. 2008;4(2–3):51–70. . .
Gupta C. Writing sex and sexuality: archives of colonial North India. J Women's Hist. 2011;23(4):12–35. . .
Vanita R. Lesbian studies and activism in India. J Lesbian Stud. 2007;11(3–4):243–53. . .
Bowling J, Blekfeld-Sztraky D, Simmons M, Dodge B, Sundarraman V, Lakshmi B, Dharuman SD, Herbenick D. Definitions of sex and intimacy among gender and sexual minoritised groups in urban India. Cult Health Sex. 2020 May;22(5):520-534. . Epub 2019 May 30. .
Bowling J, Dodge B, Bindra N, Dave B, Sharma R, Sundarraman V, Thirupathur Dharuman S, Herbenick D. Female condom acceptability in urban India: Examining the role of sexual pleasure. J Health Psychol. 2018 Feb; 23(2):218–228. . Epub 2017 Dec 18. ; .
Banik S, Dodge B, Schmidt-Sane M, Sivasubramanian M, Bowling J, Rawat SM, Dange A, Anand V. Humanizing an Invisible Population in India: Voices from Bisexual Men Concerning Identity, Life Experiences, and Sexual Health. Arch Sex Behav. 2019 Jan; 48(1):305–316. . Epub 2018 Dec 3. ; .
Sharma SK, Vishwakarma D. Transitions in adolescent boys and young Men's high-risk sexual behaviour in India. BMC Public Health. 2020 Jul 11; 20(1):1089. . ; .
Siddiqui M, Kataria I, Watson K, Chandra-Mouli V. A systematic review of the evidence on peer education programmes for promoting the sexual and reproductive health of young people in India. Sex Reprod Health Matters. 2020 Dec; 28(1):1741494. . ; .
Bibliography
Das, K. and Rao, T. S. S. (2019) 'A Chronicle of Sexuality in the Indian Subcontinent', Journal of Psychosexual Health, 1(1), pp. 20–25.
Mamta Gupta (1994) Sexuality in the Indian subcontinent, Sexual and Marital Therapy, 9:1, 57–69,
Abraham, Leena, and K. Anil Kumar. "Sexual Experiences and Their Correlates Among College Students in Mumbai City, India." International Family Planning Perspectives, vol. 25, no. 3, Guttmacher Institute, 1999, pp. 139–52,
Das, Vishnupriya. "Dating Applications, Intimacy, and Cosmopolitan Desire in India." Global Digital Cultures: Perspectives from South Asia, edited by ASWIN PUNATHAMBEKAR and SRIRAM MOHAN, University of Michigan Press, 2019, pp. 125–41,
External links
BBC Article on AIDS Awareness in India
Indian Sex Life Survey - Average Ages, Statistics on Foreplay, Intercourse & Frequency, Extramarital & Premarital Data, Homosexuality in Indian Society and other such related issues
History of Sex: Ancient India
IASSTD & AIDS - Indian Association for the Study of Sexually Transmitted Diseases & AIDS
India In World Sex Survey - Frequency of sex in India & other data
Sex
India
Reactionary feminism
Reactionary feminism is a form of feminism that rejects the progressivist belief that human history is an ongoing arc of moral advancement and seeks to ground a defence of women's interests in a contingent, materialist, and sex-realist position. The term originated in an article by the author Mary Harrington and was popularized in her book Feminism Against Progress. Louise Perry is also usually described as a reactionary feminist author.
Reactionary feminism views men and women as equal in dignity and capacity for excellence but physiologically different in ways that, at scale, are materially and politically significant. Reactionary feminism argues from a materialist analysis of feminist history that the claim that males and females are interchangeable is itself false, serves as a means of consolidating power by the managerial class, and is actively inimical to the interests of poorer women whose lives of necessity cannot be abstracted from the material.
Reactionary feminist arguments include a critique of modern abortion politics as serving to marginalise key issues raised by maternal feminism, such as women's embodiment and the importance of care; a rereading of the sexual revolution as primarily a technological transition whose externalities are under-counted; and an anti-capitalist framing of transgender politics as driven centrally by the post-1960s industrialization of the body via biotech. Though reactionary feminism is less hostile to religious faiths than liberal feminism, its adherents are by no means all religious. However, it has some points of overlap with Catholic social teaching.
See also
Anti-gender movement
Trans-exclusionary radical feminism
References
feminism
Feminism
Essentialism
Diagenesis
Diagenesis is the process of physical and chemical change in sediments, caused first by water-rock interactions, microbial activity, and compaction after their deposition. Increased pressure and temperature only start to play a role as sediments become buried much deeper in the Earth's crust. In the early stages, the transformation of poorly consolidated sediments into sedimentary rock (lithification) is simply accompanied by a reduction in porosity and water expulsion (in clay sediments), while their main mineralogical assemblages remain unaltered. As the rock is carried deeper by further deposition above, its organic content is progressively transformed into kerogens and bitumens.
The process of diagenesis excludes surface alteration (weathering) and deep metamorphism. There is no sharp boundary between diagenesis and metamorphism, but the latter occurs at higher temperatures and pressures. Hydrothermal solutions, meteoric groundwater, rock porosity, permeability, dissolution/precipitation reactions, and time are all influential factors.
After deposition, sediments are compacted as they are buried beneath successive layers of sediment and cemented by minerals that precipitate from solution. Grains of sediment, rock fragments and fossils can be replaced by other minerals (e.g. calcite, siderite, pyrite or marcasite) during diagenesis. Porosity usually decreases during diagenesis, except in rare cases such as dissolution of minerals and dolomitization.
The study of diagenesis in rocks is used to understand the geologic history they have undergone and the nature and type of fluids that have circulated through them. From a commercial standpoint, such studies aid in assessing the likelihood of finding various economically viable mineral and hydrocarbon deposits.
The process of diagenesis is also important in the decomposition of bone tissue.
Role in anthropology and paleontology
The term diagenesis, literally meaning "across generation", is extensively used in geology. However, this term has filtered into the field of anthropology, archaeology and paleontology to describe the changes and alterations that take place on skeletal (biological) material. Specifically, diagenesis "is the cumulative physical, chemical, and biological environment; these processes will modify an organic object's original chemical and/or structural properties and will govern its ultimate fate, in terms of preservation or destruction". In order to assess the potential impact of diagenesis on archaeological or fossil bones, many factors need to be assessed, beginning with elemental and mineralogical composition of bone and enveloping soil, as well as the local burial environment (geology, climatology, groundwater).
The composite nature of bone, comprising one-third organic (mainly protein collagen) and two thirds mineral (calcium phosphate mostly in the form of hydroxyapatite) renders its diagenesis more complex. Alteration occurs at all scales from molecular loss and substitution, through crystallite reorganization, porosity, and microstructural changes, and in many cases, to the disintegration of the complete unit. Three general pathways of the diagenesis of bone have been identified:
Chemical deterioration of the organic phase.
Chemical deterioration of the mineral phase.
(Micro) biological attack of the composite.
They are as follows:
The dissolution of collagen depends on time, temperature, and environmental pH. At high temperatures, the rate of collagen loss will be accelerated, and extreme pH can cause collagen swelling and accelerated hydrolysis. Due to the increase in porosity of bones through collagen loss, the bone becomes susceptible to hydrolytic infiltration where the hydroxyapatite, with its affinity for amino acids, permits charged species of endogenous and exogenous origin to take up residence.
The hydrolytic activity plays a key role in the mineral phase transformations that expose the collagen to accelerated chemical and biological degradation. Chemical changes affect crystallinity. Mechanisms of chemical change, such as the uptake of F− or other exogenous ions, may cause recrystallization in which hydroxyapatite is dissolved and re-precipitated, allowing for the incorporation or substitution of exogenous material.
Once an individual has been interred, microbial attack, the most common mechanism of bone deterioration, occurs rapidly. During this phase, most bone collagen is lost and porosity is increased. The dissolution of the mineral phase caused by low pH permits access to the collagen by extracellular microbial enzymes thus microbial attack.
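The strong temperature dependence of collagen loss described in the first pathway above is often approximated with simple rate laws. The sketch below is purely illustrative: it assumes hypothetical first-order loss with an Arrhenius rate constant, and the pre-exponential factor and activation energy are placeholder values chosen for demonstration, not measured constants for bone collagen (pH effects are ignored).

import math

R = 8.314        # gas constant, J/(mol*K)
A = 1.0e16       # pre-exponential factor, 1/yr (hypothetical placeholder)
EA = 1.2e5       # activation energy, J/mol (hypothetical placeholder)

def collagen_remaining(t_years, temp_c):
    """Fraction of collagen remaining after t_years at a constant burial
    temperature, assuming first-order loss with an Arrhenius rate constant."""
    k = A * math.exp(-EA / (R * (temp_c + 273.15)))   # rate constant, 1/yr
    return math.exp(-k * t_years)

# Warmer burial environments lose collagen noticeably faster than cooler ones.
for temp in (5, 15, 25):
    print(temp, "degC:", round(collagen_remaining(10_000, temp), 4))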
Role in hydrocarbon generation
When animal or plant matter is buried during sedimentation, the constituent organic molecules (lipids, proteins, carbohydrates and lignin-humic compounds) break down due to the increase in temperature and pressure. This transformation occurs in the first few hundred meters of burial and results in the creation of two primary products: kerogens and bitumens.
It is generally accepted that hydrocarbons are formed by the thermal alteration of these kerogens (the biogenic theory). In this way, given certain conditions (which are largely temperature-dependent) kerogens will break down to form hydrocarbons through a chemical process known as cracking, or catagenesis.
A kinetic model based on experimental data can capture most of the essential transformations of diagenesis, and a mathematical model of a compacting porous medium can describe the dissolution-precipitation mechanism. Such models have been studied intensively and applied to real geological problems.
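As a concrete illustration of the kind of kinetic description referred to above, the following sketch integrates a single hypothetical first-order cracking reaction (kerogen to hydrocarbons) over a simple linear heating history. The rate parameters are illustrative placeholders rather than calibrated experimental values, and real models typically track distributions of activation energies rather than a single reaction.

import math

R = 8.314                  # gas constant, J/(mol*K)
A = 1.0e14                 # frequency factor, 1/s (hypothetical placeholder)
EA = 2.1e5                 # activation energy, J/mol (hypothetical placeholder)
SECONDS_PER_MYR = 3.15e13

def converted_fraction(max_temp_c, heating_rate_c_per_myr=1.0):
    """Fraction of kerogen cracked to hydrocarbons while heating linearly
    from 20 degC up to max_temp_c, using stepwise first-order kinetics."""
    kerogen = 1.0
    temp = 20.0
    dt_myr = 1.0 / heating_rate_c_per_myr          # time spent per 1 degC step
    while temp < max_temp_c:
        k = A * math.exp(-EA / (R * (temp + 273.15))) * SECONDS_PER_MYR  # 1/Myr
        kerogen *= math.exp(-k * dt_myr)           # analytic step at constant T
        temp += 1.0
    return 1.0 - kerogen

# Conversion stays negligible at shallow, cool burial and rises sharply with temperature.
for t_max in (80, 110, 140):
    print(t_max, "degC:", round(converted_fraction(t_max), 3))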
Diagenesis has been divided, on the basis of hydrocarbon and coal genesis, into eodiagenesis (early), mesodiagenesis (middle) and telodiagenesis (late). During the early or eodiagenesis stage, shales lose pore water, little to no hydrocarbons are formed, and coal varies between lignite and sub-bituminous. During mesodiagenesis, dehydration of clay minerals occurs, the main development of oil genesis occurs, and high- to low-volatile bituminous coals are formed. During telodiagenesis, organic matter undergoes cracking and dry gas is produced; semi-anthracite coals develop.
Early diagenesis in newly formed aquatic sediments is mediated by microorganisms using different electron acceptors as part of their metabolism. Organic matter is mineralized, liberating gaseous carbon dioxide (CO2) in the porewater, which, depending on the conditions, can diffuse into the water column. The various processes of mineralization in this phase are nitrification and denitrification, manganese oxide reduction, iron hydroxide reduction, sulfate reduction, and fermentation.
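The mineralization pathways listed above are commonly described as proceeding in sequence, with successive electron acceptors used as the earlier ones are exhausted. A minimal bookkeeping sketch of that ordering is shown below; the list contents come from the paragraph above, while the data structure and function name are purely illustrative, not taken from any particular model.

# Early-diagenesis mineralization pathways, in the order given in the text.
REDOX_LADDER = [
    "nitrification and denitrification",
    "manganese oxide reduction",
    "iron hydroxide reduction",
    "sulfate reduction",
    "fermentation",
]

def active_pathway(still_available):
    """Return the first pathway whose electron acceptor has not yet been
    exhausted; `still_available` maps pathway name -> bool."""
    for pathway in REDOX_LADDER:
        if still_available.get(pathway, False):
            return pathway
    return None

# Example: nitrate already consumed near the sediment-water interface.
state = {p: True for p in REDOX_LADDER}
state["nitrification and denitrification"] = False
print(active_pathway(state))   # prints "manganese oxide reduction"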
Role in bone decomposition
Diagenesis alters the proportions of organic collagen and inorganic components (hydroxyapatite, calcium, magnesium) of bone exposed to environmental conditions, especially moisture. This is accomplished by the exchange of natural bone constituents, deposition in voids or defects, adsorption onto the bone surface and leaching from the bone.
See also
References
Geological processes
Fossil fuels
Sedimentology
| 0.766142 | 0.98864 | 0.757438 |
History of Christian thought on persecution and tolerance
|
The history of Christian thought has included concepts of both inclusivity and exclusivity from its beginnings, concepts which have been understood and applied differently in different ages and which have led to practices of both persecution and toleration. Early Christian thought established Christian identity, defined heresy, separated itself from polytheism and Judaism, and developed the theological conviction called supersessionism. In the centuries after Christianity became the official religion of Rome, some scholars say Christianity became a persecuting religion. Others say the change to Christian leadership did not cause a persecution of pagans, and that what little violence occurred was primarily directed at non-orthodox Christians.
After the fall of the Roman Empire, Christian thought focused more on preservation than origination. This era of thought is exemplified by Gregory the Great, Saint Benedict, Visigothic Spain, illustrated manuscripts, and progress in medical care through monks. Although the roots of supersessionism and deicide can be traced to some second century Christian thought, Jews of the Middle Ages lived mostly peacefully alongside their Christian neighbors because of Augustine of Hippo's teaching that they should be left alone. In the Early Middle Ages, Christian thought on the military and involvement in war shifted to accommodate the crusades by inventing chivalry and new monastic orders dedicated to it. There was no single thread of Christian thought throughout most of the Middle Ages as the church was largely democratic and each order had its own doctrine.
The High Middle Ages were pivotal in both European culture and Christian thought. Feudal kings began to lay the foundation of what would become their modern nations by centralizing power. They gained power through multiple means including persecution. Christian thought played a supportive role, as did the literati, a group of ambitious intellectuals who had contempt for those they thought beneath them, by verbally legitimizing those attitudes and actions. This contributed to a turning point in Judeo-Christian relations in the 1200s. Heresy became a religious, political, and social issue which led to civil disorder and the Medieval Inquisitions. The Albigensian Crusade is seen by many as evidence of Christianity's propensity for intolerance and persecution, while other scholars say it was conducted by the secular powers for their own ends.
The Late Middle Ages are marked by a decline of papal power and church influence with accommodation to secular power becoming more and more of an aspect of Christian thought. The modern Inquisitions were formed in the Late Middle Ages at the special request of the Spanish and Portuguese sovereigns. Where the medieval inquisitions had limited power and influence, the powers of the modern "Holy Tribunal" were taken over, extended and enlarged by the power of the state into "one of the most formidable engines of destruction which ever existed." During the Northern Crusades, Christian thought on conversion shifted to a pragmatic acceptance of conversion obtained through political pressure or military coercion even though theologians of the period continued to write that conversion must be voluntary.
By the time of the early Reformation (1400–1600), the early Protestants had developed the conviction that pioneering the concepts of religious freedom and religious toleration was necessary. Scholars say tolerance has never been an attitude broadly espoused by an entire society, not even western societies, and that only a few outstanding individuals, historically, have truly fought for it. In the West, Christian reformation figures, and later Enlightenment intellectuals, advocated for tolerance in the century preceding, during, and after the Reformation and into the Enlightenment. Contemporary Christians generally agree that tolerance is preferable to conflict, and that heresy and dissent are not deserving of punishment. Despite that, the systematized government-supported persecution of minorities invented in the West in the High Middle Ages for garnering power to the state has spread throughout the world. Sociology indicates tolerance and persecution are products of context and group identity more than ideology.
Early Christian thought from the first century to Constantine
Historical background
In its first three centuries, Christian thought was just beginning to define what it meant to be a Christian, distinct from paganism and Judaism, through its definitions of orthodoxy and heterodoxy. Early Christian writers worked to reconcile the Jewish founding story, the Christian gospel of the Apostles, and the Greek tradition of knowing the divine through reason, but the substance of Christian orthodoxy was increasingly found in the homogeneous canon of writings believed to be apostolic (written by the apostles), that had circulated widely as such, and the writings of the church fathers that were based on them.
Persecution and tolerance are both the result of alterity, the state of otherness, and the question of how to properly deal with those who are 'outside' the defined identity. Like the other Abrahamic religions, Christian thought has included, from its beginnings, two ideals which have affected Christian responses to alterity: inclusivity (also called universality) and exclusivity, or as David Nirenberg describes them, our "mutual capacities for coexistence and violence." There is an inherent tension in all the Abrahamic traditions between exclusivity and inclusivity which is theologically and practically dealt with by each in different ways.
Justo L. González traces three veins of Christian thought that began in the second century. Out of Carthage, Tertullian the lawyer (155–200 CE) wrote of Christianity as revelation of the law of God. From the pluralistic city of Alexandria, Origen wrote of the commonalities between philosophy and theology, reason and revelation, seeing Christianity as the intellectual pursuit of transcendent truth. In Asia Minor and Syria, Irenaeus saw Christianity as God working in human history through its pastoral work of reaching people with God's love. Each vein of thought has continued throughout Christian history, and each has impacted attitudes toward and practices of tolerance and persecution.
Inclusivity, exclusivity and heresy
Early Christian communities were highly inclusive in terms of social stratification and other social categories, much more so than were the Roman voluntary associations. Heterogeneity characterized the groups formed by Paul the Apostle, and the role of women was much greater than in either of the forms of Judaism or paganism in existence at the time. Early Christians were told to love others, even enemies, and Christians of all classes and sorts called each other "brother" and "sister". These concepts and practices were foundational to early Christian thought, have remained central, and can be seen as early precursors to modern concepts of tolerance.
Though tolerance was not a fully developed concept, and was held with some ambivalence, Guy Stroumsa says Christian thought of this era promotes inclusivity, yet invents the concept of heresy at the same time. Tertullian, a second-century Christian intellectual and lawyer from Carthage, advocated for religious tolerance primarily in an effort to convince pagan readers that Christianity should be allowed into the religious "market-place" that historian John North proposes second century Rome had become. On the other hand, Stroumsa argues that Tertullian knew co-existence meant competition, so he attempted to undermine the legitimacy of the pagan religions by comparing them to Christianity at the same time he advocated for tolerance from them. Justin Martyr (100–165 CE) wrote his First Apology (155–157 CE) against heretics, and is generally credited with inventing the concept of heresy in Christian thought. Historian Geoffrey S. Smith argues that Justin writes only to answer objections his friends are facing and to defend these friends from ill treatment and even death. He quotes Justin in a letter to the emperor as saying he is writing: "On behalf of those from every race of men who are unjustly hated and ill-treated, being one of them myself." However, Alain Le Boulluec argues it is within this period that use of the term "heretic" in Christian thought and writings changes from neutral to derogatory.
Supersessionism
Supersessionist thought is defined by "two core beliefs: (1) that the nation of Israel has forfeited its status as the people of God through disobedience; and (2) the New Testament church has therefore become the true Israel and inheritor of the promises made to the nation of Israel." It has three forms: punitive, economic, and/or structural supersessionism. Punitive supersessionism is the 'hard' form of supersessionism, and is seen as punishment from God. Economic supersessionism is a moderate form concerning God's economy: His plan in history to transfer the role of the "people of God" from an ethnic group to a universal group. The third form involves the New Testament having priority over the Old Testament by ignoring or replacing the original meaning of Old Testament passages. For example, within the early church, the rise of the use of Greek philosophical interpretation and allegory allowed inferences to be drawn such as the one Tertullian drew when he allegorically interpreted the statement "the older will serve the younger", concerning the twin sons of Isaac and Rebekah (Genesis 25.23), to mean that Israel would serve the church.
There is no agreement on when supersessionism began. Michael J. Vlach says that some claim it began in the New Testament, some say it began with the church fathers, others place its beginnings after the Bar Kokhba revolt in 135 CE. The destruction of Jerusalem by the Romans in 70 CE and again in 135 CE had a profound impact on Jewish–Christian relations. Many saw the Jewish–Christians as traitors for not supporting their brethren, and Vlach says supersessionism grew out of those events. Scholars such as W. C. Kaiser Jr. see the fourth century, after Constantine, as supersessionism's true beginning, because that is when a shift in Christian thought on eschatology took place. The church took its universally held traditional interpretation of Revelation 20:4-6 (Millennialism) and its hope of the thousand-year reign of the Messiah on earth, centered in Jerusalem, ruling with the redeemed Israel, and replaced it with a "historicized and allegorized version, that set up the church" as the metaphorical Israel instead.
Tracing the roots of supersessionism to the New Testament is problematic since "there is no consensus" that supersessionism is a biblical doctrine at all. Vlach says one's position on this is determined more by one's beginning assumptions than it is by any biblical hermeneutic. Arguments in favor of supersessionism have traditionally been based on implications and inferences rather than biblical texts. Vlach asserts that the church has also "always had compelling scriptural reasons, in both Testaments, to believe in a future salvation and restoration of the nation Israel." Therefore, supersessionism has never been an official doctrine and has never been universally held. Supersessionism's alternative is chiliasm, also known as Millennialism. These are both the belief that Christ will return to earth in visible form and establish a kingdom to last 1000 years. This was the traditional and more universally held view of the first two centuries, and has remained an aspect of Christian thought throughout its history. Steven D. Aguzzi says supersessionism was still considered a "normative view" in the writings of the early church fathers, such as Justin, Barnabas and Origen, and has also been a part of Christian thought for much of the church's history.
Evaluation
Supersessionism is significant in Christian thought because "It is undeniable that anti-Jewish bias has often gone hand-in-hand with the supersessionist view." Many Jewish writers trace anti-semitism, and the consequences of it in World War II, to this particular doctrine among Christians. Twentieth-century Jewish civil rights leader Leonard P. Zakim asserts that, despite the many possible destructive consequences of supersessionism, as theology professor Padraic O'Hare writes: supersessionism alone is not yet anti-semitism. John Gager makes a distinction between nineteenth century anti-Semitism and second century anti-Judaism, and many scholars agree, yet there are those who see early anti-Judaism and later anti-Semitism as the same. Gerdmar sees the development of anti-semitism as part of the paradigm shift that occurred in early modernity. Gerdmar argues the shift resulted from the new scientific focus on the Bible and history that replaced the primacy of theology and tradition. Christopher Leighton associates anti-Judaism with the origins of Christianity, and anti-semitism with "modern nationalism and racial theories".
Deicide
Deicide as the prime accusation against the Jews appears, for the first time, in a highly rhetorical second century poem by Melito, of which only a few fragments have survived. In the fourth century, Augustine refuted the accusation, saying the Jews could not be guilty of deicide as they did not believe Christ was God. Melito's writings were not themselves influential, and the idea did not take hold immediately, but the accusation returned in fourth century thinking and sixth century actions, and again in the Middle Ages.
Constantine
Christian thought was still in its infancy in 313 when, following the Battle of the Milvian Bridge, Constantine I, (together with his co-emperor Licinius), issued the Edict of Milan granting religious toleration to the Christian faith. The Edict did not only protect Christians from religious persecution, but all religions, allowing anyone to worship whichever deity they chose. After 320, Constantine supported the Christian church with his patronage, had a number of basilicas built for the Christian church, and endowed it with land and other wealth. He outlawed the gladiatorial shows, destroyed temples and plundered more, and used forceful rhetoric against non-Christians. But he never engaged in a purge. "He did not punish pagans for being pagans, or Jews for being Jews, and did not adopt a policy of forced conversion."
While not making a direct personal contribution to Christian thought, the first Christian Roman emperor had a powerful impact on it through the example of his own conversion, his policies, and the various councils he called. Christian thought at the time of Constantine believed that victory over the "false gods" had begun with Jesus and ended with the conversion of Constantine as the final fulfillment of heavenly victory—even though Christians were only about fifteen to eighteen percent of the empire's population.
After Constantine, Christianity gradually became the dominant religion in the Roman Empire. In the view of many historians, the Constantinian shift turned Christianity from a persecuted religion into a persecuting religion. However, the claim that there was a Constantinian shift has been disputed. Theologian Peter Leithart argues that there was a "brief, ambiguous 'Constantinian moment' in the early fourth century", but that there was "no permanent, epochal 'Constantinian shift'". According to Michele R. Salzman, fourth century Rome featured sociological, political, economic and religious competition, producing tensions and hostilities between various groups, but Christians focused on heresy more than on pagans.
Antiquity: from Constantine to the fall of empire
Historical background
Historians and theologians refer to the fourth century as the "golden age" of Christian thought. Figures such as John Chrysostom, Ambrose, Jerome, Basil, Gregory of Nazianus, Gregory of Nyssa, and the prolific Augustine, all made a permanent mark on Christian thought and history. They were primarily defenders of orthodoxy. They wrote philosophy and theology as well as apologetics and polemics. Some had a long-term effect on tolerance and persecution in Christian thought.
Fourth century Christian thought
Fourth century Christian thought was dominated by its many conflicts defining orthodoxy versus heterodoxy and heresy. In what remained of the Eastern Roman empire, known as Byzantium, the Arian controversy began with its debate of Trinitarian formulas which lasted 56 years. It gradually trickled over into the Latin West so that by the fourth century, the center of the controversy was the "champion of orthodoxy", Athanasius. Arianism was the reason for calling the Council of Nicea. Athanasius was ousted from his bishopric in Alexandria in 336 by the Arians, forced into exile, and lived much of the remainder of his life in a cycle of forced movement. The controversy became political after Constantine's death. Athanasius died in 373, while an Arian emperor ruled, but his orthodox teaching was a major influence in the West, and on Theodosius, who became emperor in 381. Also in the East, John Chrysostom, Bishop of Constantinople, who is best known for his brilliant oratory and his exegetical works on moral goodness and social responsibility, also wrote Discourses Against the Jews which is almost pure polemic, using replacement theology that is now known as supersessionism. However, Chrysostom did not advocate for killing heretics, even though he did advocate censoring them; he writes, "He [Christ] doth not therefore forbid our checking heretics, and stopping their mouths, and taking away their freedom of speech, and breaking up their assemblies and confederacies, but our killing and slaying them".
By 305, after the Diocletian persecution of the third century, many of those who had recanted during the persecution, wanted to return to the church. The North African Donatists refused to accept them back as clergy and remained resentful toward the Roman government. Catholics wanted to wipe the slate clean and accommodate the new government. The Donatists withdrew and began setting up their own churches. For decades, Donatists fomented protests and street violence, refused compromise, attacked random Catholics without warning, doing serious and unprovoked bodily harm such as beating people with clubs, cutting off their hands and feet, and gouging out eyes. By the time Augustine became coadjutor Bishop of Hippo in 395, the Donatists had been a multi-level problem for many years. Augustine held that belief cannot be compelled, so he appealed to them verbally, using popular propaganda, debate, personal appeal, General Councils, and political pressure. All attempts failed.
The empire responded to civil unrest with force, and in 408, in his Letter 93, Augustine began defending persecution of the Donatists by the imperial authorities, saying that "if the kings of this world could legislate against pagans and poisoners, they could do so against heretics as well." He continued saying that belief cannot be compelled; however, he also included the idea that, while "coercion cannot transmit the truth to the heretic, it can prepare them to hear and receive the truth." Augustine did not advocate religious violence as such, but he supported the power of the state to use coercion against those he saw as behaving as enemies. His authority on this question was undisputed for over a millennium in Western Christianity, and according to Brown "it provided the theological foundation for the justification of medieval persecution."
Augustine had advocated fines, imprisonment, banishment, and moderate floggings; when the state's persecution of individual Donatists became extreme, he attempted to mitigate the punishments, and he always opposed the execution of heretics. According to Henry Chadwick, Augustine "would have been horrified by the burning of heretics."
In 385, Priscillian, a bishop in Spain, was the first Christian to be executed for heresy, though this sentence was roundly condemned by prominent church leaders like Ambrose. Priscillian was also accused of gross sexual immorality and acceptance of magic, but politics may have been involved in his sentencing.
Anti-paganism in late antique Roman empire
Polytheism began declining by the second century, long before there were Christian emperors, but after Constantine made Christianity officially accepted, it declined even more rapidly, and there are two views on why. According to the Oxford Handbook of Late Antiquity, scholars of Antiquity fall into two categories, holding either the "catastrophic" view, or the "long and slow" view of polytheism's decline and end. The traditional "catastrophic" view has been the established view for 200 years; it says polytheism declined rapidly in the fourth century, with a violent death in the fifth, as a result of determined anti-pagan opposition from Christians, particularly Christian emperors. Contemporary scholarship espouses the "long slow" view, which says anti-paganism was not a primary concern of Christians in antiquity because Christians believed the conversion of Constantine showed Christianity had already triumphed. Michele R. Salzman indicates that, as a result of this "triumphalism", heresy was a higher priority for Christians in the fourth and fifth centuries than was paganism. This produced less real conflict between Christians and pagans than was previously thought. Archaeologists Luke Lavan and Michael Mulryan indicate that contemporary archaeological evidence of religious conflict exists, as the catastrophists assert, but not to the degree or intensity previously thought.
Laws such as the Theodosian decrees attest to Christian thought of the period, giving a "dramatic view of radical Christian ambition". Peter Brown says the language is uniformly vehement and the penalties are harsh and frequently horrifying. Salzman says the law was intended as a means of conversion through the "carrot and the stick", but that it is necessary to look beyond the law to see what people actually did. Authorities, who were still mostly pagan, were lax in imposing them, and Christian bishops frequently obstructed their application. Anti-paganism existed, but according to scholars such as Michele Salzman and Marianne Sághy, who quote Alan Cameron, the idea of religious conflict as the cause of a swift demise of paganism is pure historiographical construction. Lavan says Christian writers gave the narrative of victory high visibility, but that it does not necessarily correlate to actual conversion rates. There are many signs that a healthy paganism continued into the fifth century, and in some places, into the sixth and beyond.
According to Brown, Christians objected to anything that called the triumphal narrative into question, and that included the mistreatment of non-Christians. Temple destructions and conversions are attested, but in small numbers. Archaeology indicates that in most regions away from the imperial court, the end of paganism was both gradual and untraumatic. The Oxford Handbook of Late Antiquity says that "Torture and murder were not the inevitable result of the rise of Christianity." Instead, there was fluidity in the boundaries between the communities and "coexistence with a competitive spirit." Brown says that "In most areas, polytheists were not molested, and, apart from a few ugly incidents of local violence, Jewish communities also enjoyed a century of stable, even privileged, existence." Having, in 423, been declared by the emperor Theodosius II not to exist, large bodies of polytheists all over the Roman empire were not murdered or converted under duress so much as they were simply left out of the histories the Christians wrote of themselves as victorious.
Early Medieval West (c. 500 – c. 800)
Historical background
After the Fall of the Western Roman Empire, life in the West returned to an agrarian subsistence style of living, becoming somewhat settled sometime in the 500s. Christian writers of the period were more concerned with preserving the past than with composing original works. The Germanic tribes which had overthrown Rome became the new rulers, dividing the empire between them. Gregory the Great became Pope in 590 AD, and he sent out multiple missionaries who peacefully converted Britain, Ireland, Scotland and more. Learning was kept alive in the monasteries they built, which became the sole source of education for the next few centuries. Patrick Wormald indicates that the Irish and English missionaries sent out to those territories that would become the Holy Roman empire, and then Germany, thought of the pagans on the continental mainland with "interest, sympathy and occasionally even admiration."
In most of history, victors of war imposed their religion on the newly subjugated people, however, the Germanic tribes gradually adopted Christianity, the religion of defeated Rome, instead. This brought, in its wake, a broad process of cultural change that lasted for the next 500 years. What had been formed by the unity of the classical world and Christianity, was now transplanted into Germanic tribal culture, thereby forming a new synthesis that became western European Christendom. The church had immense influence during this time due to the endless commitment and work of the clergy and the "powerful effect of the Christian belief system" amongst the people.
Eriugena was not a major theologian, but in 870 he wrote On the Division of Nature, which foresaw the modern view of predestination in denying that God has foreordained anyone to sin and damnation. His mixture of rationalism and Neo-Platonic mysticism would prove influential on later Christian thought, though his books were banned by the Roman Catholic church in 1681.
Partial inclusivity of the Jews
According to Anna Sapir Abulafia, "Most scholars would agree that, with the marked exception of Visigothic Spain (in the seventh century), Jews in Latin Christendom lived relatively peacefully with their Christian neighbors through most of the Middle Ages." Scattered violence toward Jews occasionally took place during riots led by mobs, local leaders, and lower level clergy without the support of church leaders or Christian thought. Jeremy Cohen says historians generally agree this is because Catholic thought on the Jews before the 1200s was guided by the teachings of Augustine. Augustine's position on the Jews, with its accompanying argument for their "immunity from religious coercion enjoyed by virtually no other community in post-Theodosian antiquity" was preceded by a positive evaluation of the Jewish past, and its relationship to divine justice and human free will. Augustine rejected those who argued that the Jews should be killed, or forcibly converted, by saying that Jews should be allowed to live in Christian societies and practice Judaism without interference because they preserved the teachings of the Old Testament and were living witnesses of the truths of the New Testament.
Gregory the Great is generally seen as an important Pope in relation to the Jews. He denigrated Judaism but followed Roman Law and Augustinian thought with regard to how the Jews should be treated. He wrote against forced baptism. In 828, Gregory IV wrote a letter to the Bishops in Gaul and the Holy Roman empire warning that Jews must not be baptized by force. Gregory X repeated the ban. Even Pope Innocent III, who generally found the behavior of Jews in Christian society to be "intolerable", still agreed that the Jews should not be killed or forcibly converted when he called for a crusade.
Jews and their communities were always vulnerable. Random ill treatment, and occasionally real persecution, did occur. However, their legal status, while it was inferior, was not insecure as it became later in the High Middle Ages. They could appeal to the authorities, and did, even on occasion appealing to the Pope himself. While the difficulties were not negligible, they were also not general enough to fundamentally impact the nature of Jewish life.
Inclusive Benedict
St. Benedict (480–547) was another major figure who impacted pre-modern ideals of tolerance in Christian thought. Considered the father of western monasticism, he wrote his Rule around three values: community, prayer, and hospitality. This hospitality was extended to anyone without discrimination. "Pilgrims and visitors from every rank of society from crowned heads to poorest peasants, came in search of prayers or alms, protection and hospitality."
Exclusive Spain
Visigothic leaders in Spain subjected the Jews to persecution and efforts to convert them forcibly for a century after 613. Norman Roth says Byzantine legal codes were the method used to reinforce anti-Jewish attitudes. The Breviarium of Alaric summarizes the most significant anti-Jewish legislation of the Byzantine codes, and it was written in the sixth century.
Early Middle Ages (c. 800 – c. 1000)
Historical background
Christian thought from its early days had generally frowned upon participation in the military, but that became increasingly difficult to maintain in the Middle Ages. A new ideal formed: chivalry, the religious warrior who fought for justice, defended truth, and protected the weak and the innocent. Such a knight was ordained only after proving his spiritual and martial worth: robed in white, he would swear an oath before a cleric to uphold these values and defend the faith.
Massacre of Verden
While contemporary definitions of religious persecution typically do not include actions taken during war, the Massacre of Verden represents an event that is still often seen as persecution by Christians. The massacre took place in 782 at Verden in Saxony, in what is now northern Germany.
Charlemagne had become King of the Franks in 771, and ruled most of western Europe of the time. He advocated Christian principles, including education, openly supported Christian missions, and had at least one Christian advisor. But he also spent his entire life fighting to defend his empire and his faith. The Franks had been fighting the Saxons since the time of Charlemagne's grandfather. Charlemagne himself began to fight the Saxons in earnest in 772, defeating them and taking hostages in a battle on the upper Weser. "Time and again the Saxon chiefs, worn down by war, sued for peace, offered hostages, accepted baptism and agreed to allow missionaries to go about their work without hindrance. But vigilance slackened, Charles was engaged on some other front, rebellions broke out, Frankish garrisons were attacked and massacred, and monasteries were pillaged". Repeatedly, Saxons rose, pillaged and looted and killed, were defeated, and rose again, until after 779, Charlemagne felt he had pacified the region and gained genuine oaths of loyalty from the Saxon leaders. In 782, Charles and the Saxons assembled at Lippe, where he appointed "several Saxon nobles as Counts as a reward for their loyalty".
Shortly thereafter, in that same year of 782, Widukind the Saxon leader, persuaded a group of Saxons who had submitted to Charlemagne, to break their oaths and rebel. Charlemagne was once again elsewhere, so the Saxons went to battle with the part of the Frankish army that had been left behind and the "Franks were killed almost to a man". They killed two of the King's chief lieutenants as well as some of his closest companions and counsellors. "In great anger at this breach of the treaty just made", Charlemagne gathered his forces, returned to Saxony, conquered the Saxon rebels, again, giving them the option to convert or die. The Saxons largely refused, and though no one knows the number for sure, it is said 4,500 unarmed prisoners were murdered in what is called the Massacre of Verden. Massive deportations followed, and death was decreed as the penalty for any Saxon who refused baptism thereafter. After this, Charlemagne transported ten thousand families from the most turbulent district into the heart of his own territory, and the Saxons were finally settled.
Historian Matthias Becher asserts that the number 4,500 is exaggerated, and that these events demonstrate the brutality of war of the period. Yet it is clear something untoward occurred, since Alcuin of York, Charlemagne's Christian advisor who was not present in Verden, later wrote the king a rebuke concerning them, saying that: "Faith must be voluntary not coerced. Converts must be drawn to the faith not forced. A person can be compelled to be baptized yet not believe. An adult convert should answer what he truly believes and feels, and if he lies, then he will not have true salvation."
Crusades
From the beginning, the crusades have been seen from different points of view. Darius von Güttner-Sporzyński explains that scholars continue to debate crusading and its impact so scholarship in this field is continually undergoing revision and reconsideration. Many early crusade scholars saw crusade histories as simple recitations of how events actually transpired, but by the eighteenth and nineteenth centuries, scholarship was increasingly critical and skeptical of that perspective. Simon John writes that Christopher Tyerman is in the forefront of contemporary scholarship when he says that the "earliest of crusade histories can not be regarded by scholars even in part as 'mere recitation of events.' Instead, they should be treated in their entirety as 'essays in interpretation'."
At the time of the First Crusade, there was no clear concept in Christian thought of what a crusade was beyond that of a pilgrimage. Hugh S. Pyper says the crusades are representative of the "powerful sense in Christian thought of the time of the importance of the concreteness of Jesus' human existence... The city [of Jerusalem's] importance is reflected in the fact that early medieval maps place [Jerusalem] at the center of the world."
In 1935, Carl Erdmann published Die Entstehung des Kreuzzugsgedankens (The Origin of the Idea of Crusade), stressing that the crusades were essentially defensive acts on behalf of fellow Christians and pilgrims in the East who were being attacked, killed, enslaved or forcibly converted. Crusade historian Jonathan Riley-Smith says the crusades were products of the renewed spirituality of the central Middle Ages. Senior churchmen of this time presented the concept of Christian love for those in need as the reason to take up arms. The people had a concern for living the vita apostolica and expressing Christian ideals in active works of charity, exemplified by the new hospitals, the pastoral work of the Augustinians and Premonstratensians, and the service of the friars. Riley-Smith concludes, "The charity of St. Francis may now appeal to us more than that of the crusaders, but both sprang from the same roots." Constable adds that those "scholars who see the crusades as the beginning of European colonialism and expansionism would have surprised people at the time. [Crusaders] would not have denied some selfish aspects... but the predominant emphasis was on the defense and recovery of lands that had once been Christian and on the self-sacrifice rather than the self-seeking of the participants."
At the opposite end is the view voiced by Steven Runciman in 1951 that the "Holy War was nothing more than a long act of intolerance in the name of God..." Giles Constable says this view is common among the populace. According to political science professor Andrew R. Murphy, concepts of tolerance and intolerance were not starting points for thoughts about relations for any of the various groups involved in or affected by the crusades. Instead, concepts of tolerance began to grow during the crusades from efforts to define legal limits and the nature of co-existence. Angeliki Laiou says that "many scholars today reject [Runciman's type of] hostile judgment and emphasize the defensive nature of the crusades" instead.
The crusades made a powerful contribution to Christian thought through the concept of Christian chivalry, "imbuing their Christian participants with what they believed to be a noble cause, for which they fought in a spirit of self-sacrifice. However, in another sense, they marked a qualitative degeneration in behavior for those involved, for they engendered and strengthened hostile attitudes..." Ideas such as Holy War and Christian chivalry, in both Christian thought and culture, continued to evolve gradually from the eleventh to the thirteenth centuries. This can be traced in expressions of law, traditions, tales, prophecy, and historical narratives, in letters, bulls and poems written during the crusading period. "The greatest of all crusader historians, William, archbishop of Tyre wrote his Chronicon from the point of view of a Latin Christian born and living in the East". Like others of his day, he did not start with a notion of tolerance, but he did advocate for, and contribute to, concepts that led to its development.
High Middle Ages (c. 1000–1200)
Historical background
In the pivotal twelfth century, Europe began laying the foundation for its gradual transformation from the medieval to the modern. Feudal lords slowly lost power to the feudal kings as kings began centralizing power into themselves and their nation. Kings built their own armies, instead of relying on their vassals, thereby taking power from the nobility. They started taking over legal processes that had traditionally belonged to local nobles and local church officials; and they began using these new legal powers to target minorities. According to R.I. Moore and other contemporary scholars such as John D. Cotts and Peter D. Diehl, "the growth of secular power and the pursuit of secular interests, constituted the essential context of the developments that led to a persecuting society." Some of these developments, such as centralization and secularization, also took place within the church whose leaders bent Christian thought to aid the state in the production of new rhetoric, patterns, and procedures of exclusion and persecution. According to Moore, the church "played a significant role in the formation of the persecuting society but not the leading one."
By the 1200s, both civil and canon law had become a major aspect of ecclesiastical culture, dominating Christian thought. Most bishops and Popes were trained lawyers rather than theologians, and much of the Christian thought of this period became little more than an extension of law. According to the Oxford Companion to Christian Thought, by the High Middle Ages, the religion that had begun by decrying the power of law had developed the most complex religious law the world has ever seen, a system in which equity and universality were largely overlooked.
Mendicant orders
New religious orders, that were founded during this time, each represent a different branch of Christian thought with its own distinct theology. Three of those new orders would have a separate but distinct impact on Christian thought on tolerance and persecution: the Dominicans, the Franciscans, and the Augustinians.
Dominican thought reached beyond a simple anti-heretical discourse into a broader and deeper ideology of sin, evil, justice, and punishment. They conceived themselves as fighting for truth against heterodoxy and heresy. St. Thomas Aquinas, perhaps the most illustrious of Dominicans, supported tolerance as a general principle. He taught that governing well included tolerating some evil in order to foster good or prevent worse evil. However, in his Summa Theologica II-II qu. 11, art. 3, he adds that heretics—after two fruitless admonitions—deserve only excommunication and death.
The Christian thought of St. Francis was pastoral. He is recognized for his commitment to issues of social justice and his embrace of the natural world but, during his lifetime, he was also a strong advocate of conversion of the Muslims, though he believed he would likely die for it. Francis was motivated by an intense devotion to the humanity of Christ, a regard for his sufferings, and by identifying the sufferings of ordinary people with the sufferings of Christ. Through the teachings of the Franciscans, this thinking emerged from the cloister, reoriented much Christian thought toward love and compassion, and became a central theme for the ordinary Christian.
Although the debate over defining the Augustinianism of the High Middle Ages has been ongoing for three quarters of a century, there is agreement that the Order of the Hermits of St. Augustine supported the development of church hierarchy and embraced concepts such as the primacy of the Pope and his perfection. The question of church authority in the West had remained unsettled until the eleventh century when the church hierarchy worked to centralize power into the Pope. Although centralization of power was never fully achieved within the church, the era of "papal monarchy" began, and the church gradually began to resemble its secular counterparts in its conduct, thought, and objectives.
Inquisitions, authority and exclusion
The medieval inquisitions were a series of separate inquisitions beginning from around 1184. The label Inquisition is problematic because it implies "an institutional coherence and an official unity that never existed in the Middle Ages." The inquisitions were formed in response to the breakdown of social order associated with heresy. Heresy was a religious, political, and social issue, so "the first stirrings of violence against dissidents were usually the result of popular resentment." There are many examples of this popular resentment involving mobs murdering heretics. Leaders reasoned that both lay and church authority had an obligation to step in when sedition, peace, or the general stability of society was part of the issue. In the Late Roman Empire, an inquisitorial system of justice had developed, and that system was revived in the Middle Ages using a combined panel (a tribunal) of both civil and ecclesiastical representatives with a bishop, his representative, or sometimes a local judge, as inquisitor. Essentially, the church reintroduced Roman law in Europe in the form of the Inquisition when it seemed that Germanic law had failed.
The revival of Roman law made it possible for Pope Innocent III (1198–1216) to make heresy a political question when he took Roman law's doctrine of lèse-majesté, and combined it with his view of heresy as laid out in the 1199 decretal Vergentis in senium, thereby equating heresy with treason against God.
Much of the papal reform of the eleventh century was not moral or theological reform so much as it was an attempt to impose this kind of Roman authority over the vast variety of local legal traditions that had existed up through the early Middle Ages. However, no pope ever succeeded in establishing complete control of the inquisitions. The institution reached its apex in the second half of the thirteenth century. During this period, the tribunals were almost entirely free from any higher authority, including that of the pope, and it became almost impossible to prevent abuse.
New persecution of minorities
The process of centralizing power included the development of a new kind of persecution aimed at minorities. R. I. Moore says the European nation-states had not exhibited a "habit" of persecuting minorities before the twelfth and thirteenth centuries. Jews, lepers, heretics and gays were the first minorities to be persecuted, and they were followed in the next few centuries by Gypsies, beggars, spendthrifts, prostitutes, and discharged soldiers. They were all vulnerable to whatever degree they existed "outside" the community. Religious persecution had certainly been familiar in the Roman empire, and remained so throughout the history of the Byzantine Empire, but it had largely faded away in the West before reappearing in the eleventh century. The various persecutions of minorities became established over the next hundred years. In this it was "determined, not only over whom, but also by whom, the [increasing] power of government was to be exercised."
For example, Peter Comestor (d. 1197) was the first influential scholar to interpret biblical injunctions against sodomy as injunctions against homosexual intercourse. The Third Lateran council of 1179 then became the first ecclesiastical council to rule that men who engaged in homosexual activity should be deprived of office or excommunicated. However, "the real impetus of the attack on homosexuality did not come from the church." The Fourth Lateran council reduced those penalties, and though Gregory IX (1145–1241) ordered the Dominicans to root out homosexuality from the territory that later became the nation of Germany, the kingdom of Jerusalem had, a century earlier, spread a legal code ordaining death for "sodomites". From the 1250s onwards, a series of similar legal codes in the nation-states of Spain, France, Italy and Germany followed this example. "By 1300, places where male sodomy was not a capital offense had become the exception rather than the rule."
Centralization of power led all of Europe of the High Middle Ages to become a persecuting culture. Christian thought, along with the intellectuals of the day who published their pejorative views of minorities in writing, helped make persecution a tool of the process of centralization as well as its inevitable result. Together, secular rulers and writers, along with Christian leadership and thought, created a new rhetoric of exclusion, legitimizing persecution based on new attitudes of stereotyping, stigmatization and even demonization of the accused. Moore says this contributed to "deliberate and socially sanctioned violence ... directed, through established governmental, judicial and social institutions, against groups of people defined by general characteristics such as race, religion or way of life. Membership in such groups in itself came to be regarded as justifying these attacks."
Instead of having to face one's accuser, new laws allowed the state to be the defendant and bring charges on its own behalf. The Assize of Arms of 1252 appointed constables to police breaches of the peace, and deliver offenders to the sheriff. In France, the constabulary was regularized in 1337 as a military body used to enforce the new laws. There were new funds to pay them as cities introduced several direct taxes: head taxes for the poor, and net-worth taxes or, occasionally, crude income taxes for the rich. New gold coins, trade and the new banks also made private policing possible. The inquisitions were a new legal method that allowed the judge to investigate on his own initiative without requiring a victim (other than the state) to press charges. Together, these enabled secular leaders to gain power by making others powerless.
During the fourteenth century, the kings in France and England were successful at centralizing power in their nations, and many other countries wanted to imitate them and their governing style. Those countries were not alone in that ambition: the church wanted to imitate the secular kings as well. The primary success of the fourteenth century popes was in amassing power into the papal position, making any pope similar to a secular king. This is often called the papal monarchy or the papal-monarchial idea. As part of that process, popes in this century reorganized the financial system of the church. The poor had previously been allowed to offer their tithes 'in kind', in goods and services instead of cash, but these popes revamped the system to only accept money. The popes then had a steady cash flow, along with papal states: property the church owned that was ruled only by the pope and not a king. This gave them almost as much power as any king. They governed as the secular powers governed: with "royal [papal] secretaries, efficient treasuries, national [papal] judiciaries, and representative assemblies". The pope became a pseudo-monarch, and the church became secular, but the popes were so greedy, worldly, and politically corrupt, that pious Christians became disgusted, thereby undermining the papal authority that centralization was supposed to establish.
Persecution of the Jews
Historians agree that the period which spanned the eleventh, twelfth and thirteenth centuries was a turning point in Jewish-Christian relations. Bernard of Clairvaux (1090–1153), pillar of European monasticism and powerful twelfth century preacher, provides a perfect example of a Christian thinker who was balancing on a precipice, preaching hateful images of Jews but sounding Scripture-based admonitions that they must be protected despite their nature. Low level discussions of religious thought had long existed between Jews and Christians. These interchanges attest to neighborly relations as Jews and Christians both struggled to fit the "other" into their sense of the demands of their respective faiths, and to balance the human opponents who were facing them with the traditions which they had inherited. By the thirteenth century, that changed in both tone and quality, growing more polemical.
In 1215, the Fourth Lateran Council, known as the Great Council, met and accepted 70 canons (laws). It hammered out a working definition of Christian community, stating the essentials of membership in it, thereby defining the "other" within Christian thought for the next three centuries. The last three canons required Jews to distinguish themselves from Christians in their dress, prohibited them from holding public office, and prohibited Jewish converts from continuing to practice Jewish rituals. As Berger has articulated it: "The other side of the coin of unique toleration was unique persecution." There was an increased and focused effort to convert and baptize Jews rather than tolerate them.
Trial of the Talmud
As their situation deteriorated, many Jews became enraged and polemics between the two faiths sunk to new depths. As Inquisitors learned how the central figures in Christianity were mocked, they went after the Talmud, and other Jewish writings. The Fourth Lateran council, in its 68th canon, placed on the secular authorities the responsibility for obtaining an answer from the Jews to the charge of blasphemy. For the first time in their history, Jews had to answer in a public trial the charges against them. There is no consensus in the sources as to who instigated the trial against the Talmud, but in June 1239, Gregory IX (1237–1241) issued letters to various archbishops and kings across Europe in which he ordered them to seize all Jewish books and take them to the Dominicans for examination. The order was only heeded in Paris where, on June 25, the royal court was opened to hear the case. Eventually, each side claimed victory; a final verdict of guilt and condemnation was not announced until May 1248, but the books had been burned six years before.
One result of the trial was that the people of Europe thought that, even if they had once had an obligation to preserve the Jews for the sake of the Old Testament, Talmudic Judaism was so different from its biblical sources that the old obligations no longer applied. In the words of Hebrew University historian Ben-Zion Dinur, from 1244 on the state and the church would "consider the Jews to be a people with no religion (benei bli dat) who have no place in the Christian world."
Expulsions of Jews in Europe from 1100 to 1600
The situation of the Jews differed from that of other victims of persecution because of their relationship with civic authorities and money. They often filled the role of financial agent or manager for the lords; they and their possessions were considered the property of the king in England; and they were often exempted from taxes and other laws because of the importance of their usury. This made them unpopular and attracted jealousy and resentment from non-Jews.
As feudal lords lost power, the Jews became a focus of the lords' opponents. As J. H. Mundy has put it: "The opponents of princes hated the Jews" and "almost every medieval movement against princely or seignorial power began by attacking Jews." Opposition to the barons in England led to the Jewish expulsion in 1290, and the expulsion from France in 1315 coincided with the formation of the league against arbitrary royal government.
As princes consolidated power to themselves with the institution of general taxation, they were able to be less monetarily dependent on the Jews. They were then less inclined to protect them, and were instead more inclined to expel them and confiscate their property for themselves.
Townspeople also attacked Jews. "Otto of Friesing reports that Bernard of Clairvaux in 1146 silenced a wandering monk at Mainz who stirred up popular revolt by attacking the Jews, but as the people gained a measure of political power around 1300, they became one of Jewry's greatest enemies."
Local anti-Jewish movements were often headed by local clergy, especially their radical members. The Fourth Lateran Council of 1215 required Jews to restore "grave and immoderate usuries". Thomas Aquinas spoke against allowing the Jews to continue practicing usury. In 1283, the Archbishop of Canterbury spearheaded a petition demanding restitution of usury and urging expulsion of the Jews, which followed in 1290.
Emicho of Leiningen, who was probably mentally unbalanced, massacred Jews in Germany in search of supplies, loot, and protection money for a poorly provisioned army. The York massacre of 1190 also appears to have had its origins in a conspiracy by local leaders to liquidate their debts along with their creditors. In the early fourteenth century, systematic popular and judicial attack left the European Jewish community impoverished by the next century.
Although subordinate to religious, economic and social themes, racist concepts also reinforced hostility.
Anti-semitism
The term anti-semitism was coined in the nineteenth century; however, many Jewish intellectuals have insisted that modern anti-semitism, which is based on race, and the religiously based anti-Judaism of the past are two different forms of a single historical phenomenon. Other scholars, such as John Gager, make a clear distinction between anti-Judaism and anti-semitism. Craig Evans defines anti-Judaism as opposition to Judaism as a religion, while anti-semitism is opposition to the Jewish people themselves. Langmuir insists that anti-semitism did not become widespread in popular culture until the eleventh century, when it took root among people who were being buffeted by rapid social and economic changes. Still others see the development of anti-semitism as part of the paradigm shift of early modernity that replaced the primacy of theology, and the tradition of Augustine, with the primacy of human reason.
Some have linked anti-semitism to Christian thought on supersessionism. Perhaps the greatest Christian thinker of the Middle Ages was Thomas Aquinas, who continues to be highly influential in Catholicism. There is disagreement over where exactly Aquinas stood on the question of supersessionism. He did not teach punitive supersessionism, but did speak of Judaism as fulfilled and obsolete. Aquinas does appear to believe the Jews had been cast into spiritual exile for their rejection of Christ, but he also says Jewish observance of Law continues to have positive theological significance. For all the destructive consequences of supersessionism, Padraic O'Hare writes that supersessionism alone is not yet anti-semitism. He cites Christopher Leighton who associates anti-Judaism with the origins of Christianity, and anti-semitism with "modern nationalism and racial theories".
The Latin word deicidae was a translation of a Greek term that first appeared in the writings of Melito in the second century. Augustine had long ago rejected the concept, but the accusation began to flourish in the altered situation of the High Middle Ages, when it was used to legitimize crimes against the Jews. The debate within Christian thought over the transubstantiation of the communion host helped foster the legend that Jews desecrated it. The ritual murder legend can also be tied to the accusation of Jewish deicide. By 1255, when Jews were charged with the ritual murder of Hugh of Lincoln, it was not the first time they had been accused of such a crime. At other times, such allegations were rejected after full investigations had been conducted.
Heresy
There is a vast array of scholarly opinion on heresy, including on whether it actually existed. Russell says that, as the church became more centralized and hierarchical, it was able to define orthodoxy more clearly than ever before, and concepts of heresy developed alongside it as a result. Mitchell Merback speaks of three groups involved in the persecution of heresy: the civil authorities, the church and the people. Historian R. I. Moore says the part the church played in turning dissent into heresy has been overestimated. According to Moore, the increased significance of heresy in the High Middle Ages reflects the secular powers' recognition of the devastating nature of the heretic's political message: that heretics were independent of the structures of power. James A. Brundage writes that the formal prosecution of heresy was codified in civil law and was generally left to the civil authorities before this period. Russell adds that heresy became common only after the Third Lateran Council in 1179.
The dissemination of popular heresy to the laity (non-clergy) was a new problem for the bishops of the eleventh and twelfth centuries; heresy had previously been an accusation made solely toward bishops and other church leaders. The collection of ecclesiastical law compiled by Burchard of Worms around 1002 did not include the concept of popular heresy. While secular powers undertook acts of violence in response to heresy for their own reasons, Christian thought on the problem at the beginning of the High Middle Ages still tended to coincide with the view of Wazo of Liège, who said reports of heresy should be investigated, true heretics excommunicated, and their teachings publicly rebuked.
By the end of the eleventh century, Christian thought had evolved a definition of heresy as the "deliberate rejection of the truth". This shifted attitudes concerning the church's appropriate response. The Council of Montpellier in 1062, and the Council of Toulouse in 1119, both demanded that heretics be handed over to secular powers for coercive punishment. As most bishops thought this would be participation in shedding blood, the church refused until 1148 when the notorious and violent Eon de l'Etoile was so delivered. Eon was found mad, but a number of his followers were burned.
Albigensian Crusade
The Cathars, also known as Albigensians, were the largest of the heretical groups of the late 1100s and early 1200s. Catharism may reach back to the age of Constantine in the East, but most modern scholars agree that Catharism as an identifiable historical movement did not emerge in Europe until around 1143, when the first confirmed report of a group at Cologne was made by the cleric Eberwin of Steinfeld Abbey. From 1125 to 1229, Cistercian monks left their isolation and served as itinerant preachers, traversing town and country in anti-heretical campaigns aimed increasingly at the Albigensians. The Dominicans, founded in 1206, followed this practice and approach. In 1209, after decades of calling upon secular rulers for aid in dealing with the Cathars and getting no response, Pope Innocent III and the king of France, Philip Augustus, began a military campaign against them. Scholars disagree, along two distinct lines of reasoning, on whether the brutal nature of the war that followed was determined more by the Pope or by King Philip and his proxies.
According to historian Elaine Graham-Leigh, Pope Innocent believed the tactical, as well as policy and strategic decisions, should be solely "the papal preserve". J. Sumption and Stephen O'Shea paint Innocent III as "the mastermind of the crusade".
Markale suggests the true architect of the campaign was the French king Philip Augustus, stating that "it was Phillip who actually petitioned Innocent for permission to conduct the Crusade." Historian Laurence W. Marvin says the Pope exercised "little real control over events in Occitania." Konrad Repgen writes: "The Albigensian war was indisputably a case of the interlinking of religion and politics."
Massacre at Béziers
On 22 July 1209, in the first battle of the Albigensian Crusade, mercenaries rampaged through the streets of Béziers, killing and plundering. Those citizens who could, sought refuge in the churches and cathedrals, but there was no safety from the raging mob. The doors of the churches were broken open, and all inside were slaughtered.
Some twenty years later, a story that historian Laurence W. Marvin calls apocryphal arose about this event, claiming that the papal legate Arnaud Amaury, the leader of the crusaders, had responded: "Kill them all, let God sort them out." Marvin says it is unlikely the legate ever said anything at all: "The speed and spontaneity of the attack indicates that the legate probably did not know what was going on until it was over." Marvin adds that, at any rate, they did not kill them all: "clearly most of Bezier's population and buildings survived" and the city "continued to function as a major population center" after the campaign.
Other scholars say the legate probably did say it, that the statement is not inconsistent with what was recorded by the contemporaries of other church leaders, or with what is known of Arnaud Amaury's character and attitudes toward heresy. Religious toleration was not considered a virtue by the people or the church of the High Middle Ages. Historians W A Sibly and M D Sibly point out that: "contemporary accounts suggest that, at this stage, the crusaders did not intend to spare those who resisted them, and the slaughter at Béziers was consistent with this."
The Pope's response was not prompt, but four years after the massacre at Béziers, in a 1213 letter to Amaury, he rebuked the legate for his "greedy" conduct in the war. He also canceled crusade indulgences for Languedoc and called for an end to the campaign. The campaign continued anyway. The Pope's decision was not reversed until the Fourth Lateran Council re-instituted crusade status two years later, in 1215; afterwards, the Pope revoked it yet again. Still, the campaign did not end for another 16 years. It was completed in what Marvin refers to as "an increasingly murky moral atmosphere", since there was technically no longer any crusade, there were no dispensational rewards for fighting it, the papal legates exceeded their orders from the Pope, and the army occupied the lands of nobles who were in the good graces of the church.
Late Middle Ages (c. 1200 – c. 1400)
Historical background
"People living during what a modern historian has termed the 'calamitous' fourteenth century were thrown into confusion and despair". Plague, famine and war ravaged most of the continent. Add to this, social unrest, urban riots, peasant revolts and renegade feudal armies. From its pinnacle of power in the 1200s, the church entered a period of decline, internal conflict, and corruption and was unable to provide moral leadership. In 1302, Pope Boniface VIII (1294–1303) issued Unam sanctam, a papal bull proclaiming the superiority of the Pope over all secular rulers. Philip IV of France responded by sending an army to arrest the Pope. Boniface fled for his life and died shortly thereafter. "This episode revealed that the popes were no match for the feudal kings" and showed there had been a marked decline in papal prestige. George Garnett says the implementation of the papal monarchial idea had led to the loss of prestige, as the more efficient the papal bureaucratic machine became, the further it alienated the people, and the further it declined. Theologian Roger Olson says the church reached its nadir from 1309 to 1377 when there were three different men claiming to be the rightful Pope. "What the observer of the papacy witnessed in the second half of the thirteenth century was a gradual, though clearly perceptible, decomposition of Europe as a single ecclesiastical unit, and the fragmentation of Europe into independent, autonomous entities which were soon to be called national monarchies or states. This fragmentation heralded the withering away of the papacy as a governing institution operating on a universal scale." ...The [later] Reformation only administered the coup de grâce."
According to Walter Ullmann, the church lost "the moral, spiritual and authoritative leadership it had built up in Europe over the centuries of minute, consistent, detailed, dynamic forward-looking work. ... The papacy was now forced to pursue policies which, in substance, aimed at appeasement and were no longer directive, orientating and determinative." Ullmann goes on to explain that Christian thought of this age lost its objective standpoint, which had been based on Christianity's view of an objective world order and the Pope's place in that order. This was now replaced by the subjective point of view with the man taking precedence over the office. In the turmoil of nationalism and ecclesiastical confusion, some theologians began aligning themselves more with their kings than with the church. Devoted and virtuous nuns and monks became increasingly rare. Monastic reform had been a major force in the High Middle Ages but is largely unknown in the Late Middle Ages.
This led to the development in Christian thought of lay piety—the Devotio Moderna—the new devotion, which worked toward the ideal of a pious society of ordinary non-ordained people and, ultimately, to the Reformation and the development of the concepts of tolerance and religious freedom.
Response to reform
Advocates of lay piety who called for church reform met strong resistance from the Popes. John Wycliffe (1320–1384) urged the church to give up ownership of property, which produced much of its wealth, and to once again embrace poverty and simplicity. He urged the church to stop being subservient to the state and its politics, and he denied papal authority. Wycliffe died of a stroke, but his followers, called Lollards, were declared heretics, and after the Oldcastle rebellion many of them were killed.
Jan Hus (1369–1415) accepted some of Wycliffe's views and aligned himself with the Bohemian Reform movement, which was also rooted in popular piety and owed much to the evangelical preachers of fourteenth-century Prague. In 1415, Hus was called to the Council of Constance, where his ideas were condemned as heretical and he was handed over to the state and burned at the stake. It was at the same Council of Constance that Paulus Vladimiri presented his treatise arguing that Christian and pagan nations could co-exist in peace.
The Fraticelli, who were also known as the "Little Brethren" or "Spiritual Franciscans", were dedicated followers of Saint Francis of Assisi. These Franciscans honored their vow of poverty and saw the wealth of the church as a contributor to corruption and injustice when so many lived in poverty. They criticized the worldly behavior of many churchmen. Thus, the Brethren were declared heretical by John XXII (1316–1334) who was called "the banker of Avignon".
The leader of the Brethren, Bernard Délicieux (born c. 1260–1270, died 1320), was well known, as he had spent much of his life battling the Dominican-run inquisitions. After torture and the threat of excommunication, he confessed to the charge of interfering with the inquisition, and was defrocked and sentenced to life in prison, in chains, in solitary confinement, with nothing but bread and water. The judges attempted to ameliorate the harshness of this sentence because of his age and frailty, but Pope John XXII countermanded them and delivered the friar to the inquisitor Jean de Beaune. Délicieux died shortly thereafter, in early 1320.
Modern inquisitions
Although inquisitions had always included a political aspect, the Inquisitions of the Late Middle Ages became more political and highly notorious. "The long history of the Inquisition divides easily into two major parts: its creation by the medieval papacy in the early thirteenth century, and its transformation between 1478 and 1542 into permanent secular governmental bureaucracies: the Spanish, Portuguese, and Roman Inquisitions... all of which endured into the nineteenth century."
Historian Helen Rawlings says, "the Spanish Inquisition was different [from earlier inquisitions] in one fundamental respect: it was responsible to the crown rather than the Pope and was used to consolidate state interest." It was authorized by the Pope, yet the initial inquisitors proved so severe that the Pope almost immediately opposed it, to no avail. Early in 1483, the king and queen established a council, the Consejo de la Suprema y General Inquisición, to govern the inquisition and chose Tomas de Torquemada to head it as inquisitor general. In October 1483, a papal bull conceded control to the crown. According to José Cassanova, the Spanish Inquisition became the first truly national, unified and centralized state institution. After the 1400s, few Spanish inquisitors were from the religious orders.
The Portuguese Inquisition was also fully controlled by the crown from its beginnings. The crown established a government board, known as the General Council, to oversee it. The Grand Inquisitor, who was chosen by the king, was always a member of the royal family. The first statute of limpieza de sangre (purity of blood) appeared in Toledo in 1449 and was later adopted in Portugal as well. Initially, these statutes were condemned by the church, but in 1555, the highly corrupt Pope Alexander VI approved a "blood purity" statute for one of the religious orders. In his history of the Portuguese Inquisition, Giuseppe Marcocci says there is a deep connection between the rise of the Felipes in Portugal, the growth of the inquisition, and the adoption of the statutes of purity of blood which spread and increased and were more concerned with ethnic ancestry than religion.
Historian T. F. Mayer writes that "the Roman Inquisition operated to serve the papacy's long standing political aims in Naples, Venice and Florence." Under Paul III and his successor Julius III, and under most of the popes thereafter, the Roman Inquisition's activity was relatively restrained and its command structure was considerably more bureaucratic than those of other inquisitions. Where the medieval Inquisition had focused on heresy and the disturbance of public order, the Roman Inquisition was concerned with orthodoxy of a more intellectual, academic nature. The Roman Inquisition is probably best known for its condemnation of the difficult and cantankerous Galileo which was more about "bringing Florence to heel" than about heresy.
Northern (Baltic) crusades
The Northern (or Baltic) Crusades went on intermittently from 1147 to 1316 and, according to Eric Christiansen, had multiple causes. Christiansen writes that, from the days of Charlemagne, the free pagan peoples living around the Baltic Sea in northern Europe raided the countries that surrounded them: Denmark, Prussia, Germany and Poland. In the eleventh century, various German and Danish nobles responded militarily to put a stop to the raids and make peace. They did achieve peace for a time, but it did not last; there was insurrection, which created a desire for further military response in the twelfth century.
Another factor adding to the desire for military action was the longstanding German tradition of sending Christian missionaries to the area northeast of Germany, known as the Wendish (that is, Slavic) frontier, which often resulted in the untimely death of those missionaries.
Dragnea and Christiansen indicate that the primary motive for war was the nobles' desire for territorial expansion and material wealth in the form of land, furs, amber, slaves, and tribute. The princes wanted to subdue these pagan peoples through conquest and conversion, but ultimately they wanted wealth. Iben Fonnesberg-Schmidt says the princes were motivated by a desire to extend their power and prestige, and conversion was not always an element of their plans. When it was, conversion by these princes almost always came as a result of conquest, either through the direct use of force or indirectly, when a leader converted and required it of his followers as well. There were often severe consequences for populations that chose to resist. For example, the conquest and conversion of Old Prussia resulted in the death of much of the native population, whose language subsequently became extinct.
According to Mihai Dragnea, these wars were part of the political reality of the twelfth century.
The Popes became involved when Pope Eugenius III (1145–1153) called for a Second Crusade in response to the fall of Edessa in 1144 and the Saxon nobles refused to go to the Levant. In 1147, with Eugenius' Divini dispensatione, the German and Saxon nobles were granted full crusade indulgences to go to the Baltic area instead of the Levant. Eugenius' involvement did not, however, lead to continuous papal support of these campaigns. For the rest of the period after Eugenius, papal policy varied considerably. For example, Pope Alexander III, who was Pope from 1159 to 1181, did not issue a full indulgence or put the Baltic campaigns on an equal footing with the crusades to the Levant. According to Iben Fonnesberg-Schmidt, after the Second Crusade the campaigns were planned, financed and carried out by princes, local bishops and local archbishops rather than by the Popes, until the arrival of the Teutonic Order. The idea of employing crusaders seems to have originated with the local bishops. The nature of the campaigns changed when the Teutonic Order arrived in the region in 1230: the Danes regained influence in Estonia, the papacy became more involved, and the campaigns intensified and expanded.
Forced conversion and Christian thought
The Wendish crusade offers insights into new developments in Christian thought, particularly with respect to forced conversions. Ideas of peaceful conversion were rarely realized in these crusades because the monks and priests had to work with the secular rulers on their terms, and the military leaders seldom cared about taking the time for peaceful conversion. "While the theologians maintained that conversion should be voluntary, there was a widespread pragmatic acceptance of conversion obtained through political pressure or military coercion." The church's acceptance of this led some commentators of the time to endorse and approve it, something Christian thought had not done previously. Dominican friars helped with this ideological justification. By portraying the pagans as possessed by evil spirits, they could assert the pagans were in need of conquest, persecution and force to free them; then they would become peacefully converted. Another example of how the use of forced conversion was justified so as to make it compatible with previous Church doctrine on the subject can be found in a statement by Pope Innocent III in 1201:
[T]hose who are immersed even though reluctant, do belong to ecclesiastical jurisdiction at least by reason of the sacrament, and might therefore be reasonably compelled to observe the rules of the Christian Faith. It is, to be sure, contrary to the Christian Faith that anyone who is unwilling and wholly opposed to it should be compelled to adopt and observe Christianity. For this reason a valid distinction is made by some between kinds of unwilling ones and kinds of compelled ones. Thus one who is drawn to Christianity by violence, through fear and through torture, and receives the sacrament of Baptism in order to avoid loss, he (like one who comes to Baptism in dissimulation) does receive the impress of Christianity, and may be forced to observe the Christian Faith as one who expressed a conditional willingness though, absolutely speaking, he was unwilling ...
Eric Christiansen writes that "These crusades can only be properly understood in light of the Cistercian movement, the rise of papal monarchy, the mission of the friars, the coming of the Mongol hordes, the growth of the Muscovite and Lithuanian empires, and the aims of the Conciliar movement in the fifteenth century." The Conciliar movement arose out of the profound malaise within western Christendom over schism and corruption in the church. It asked: where did ultimate authority in the church reside? Did it reside in the Pope, the body of cardinals who elected him, the bishops, or did it reside in the Christian community at large?
Conditional toleration and segregation
Conditional toleration that included discrimination was common everywhere in Europe during the Late Middle Ages and the Renaissance era. Prior to the Thirty Years' War, there was conditional toleration between Catholics and Protestants. While Frankfurt's Jews flourished between 1453 and 1613, their success came despite significant discrimination. They were restricted to one street, were subject to rules concerning when they could leave it, and had to wear a yellow ring as a sign of their identity while outside. But within their community they also had some self-governance: their own laws, their own elected leaders, and a Rabbinical school that became a religious and cultural center. "Officially, the medieval Catholic church never advocated the expulsion of all the Jews from Christendom, or repudiated Augustine's doctrine of Jewish witness... Still, late medieval Christendom frequently ignored its mandates..."
Political authorities of the day maintained order by keeping groups separated both legally and physically in what would be referred to in contemporary society as segregation. By the Late Middle Ages: "The maintenance of civil order through legislated separation and discrimination was part of the institutional structure of all European states ingrained in law, politics, and the economy."
Early Modern Era (1500–1715)
Early Reformation (1500–1600)
Protestant Christians pioneered the concept of religious toleration. There was a concerted campaign for tolerance in mid-sixteenth century northwestern Switzerland in the town of Basle. Sebastian Castellio (1515–1563), who was among the earliest of the reformers to advocate both religious and political tolerance, had moved to Basle after he was exiled from France. Castellio's argument for toleration was essentially theological: "By casting judgment on the belief of others, don't you take the place of God?" However, since he also pled for social stability and peaceful co-existence, his argument was also political. Making similar arguments were Anabaptist David Joris (1501–1556) from the Netherlands and the Italian reformer Jacobus Acontius (1520–1566) who also gathered with Castellio in Basle. Other advocates of religious tolerance, Mino Celsi (1514–1576) and Bernardino Ochino (1487–1564), joined them, publishing their works on toleration in that city. By the end of the seventeenth and beginning of the eighteenth centuries, persecutions of unsanctioned beliefs had been reduced in most European countries.
One of the leading secular skeptics of tolerance in the sixteenth century was the Leiden professor Justus Lipsius (1547–1606). In 1589 he published Politicorum libri sex, which argued in favor of the persecution of religious dissenters. Lipsius believed that plurality would lead to civil strife and instability, saying: "it is better to sacrifice one than to risk the collapse of the whole Commonwealth." Dirck Coornhert responded by eloquently defending religious liberty, arguing that free access to what he saw as the ultimate truth in the scriptures would bring about harmony and stability.
Historians indicate that Lipsius was not out of step with religious leaders in recognizing the problematic nature of reconciling religious tolerance with political reality. Luther saw this as well. He was fully in favor of religious toleration in 1523, writing that secular authorities should never fight heresy with the sword. Yet after the Peasants' War in Germany in 1524, Luther determined that lay authorities had an obligation to step in when sedition threatened the peace or stability of society, thus unintentionally echoing Augustine and Aquinas.
Geoffrey Elton says that the English reformer John Foxe (1517–1587) demonstrated his deep faith in religious toleration when he attempted to stop the execution of the English Catholic Edmund Campion and the five Dutch Anabaptists who had been sentenced to be burned in 1575.
Toleration from the Reformation to the Early Modern Era (1500–1715)
While the Protestant Reformation changed the face of Western Christianity forever, it still embraced Augustine's acceptance of coercion, and many regarded the death penalty for heresy as legitimate. Martin Luther had written against persecution in the 1520s, and had demonstrated genuine sympathy towards the Jews in his earlier writings, especially in Das Jesus ein geborener Jude sei (That Jesus was born as a Jew) from 1523, but after 1525 his position hardened. In Wider die Sabbather an einen guten Freund (Against the Sabbather to a Good Friend), 1538, he still considered a conversion of the Jews to Christianity as possible, but in 1543 he published On the Jews and their Lies, a "violent anti-semitic tract". John Calvin helped to secure the execution of Michael Servetus for heresy, although he unsuccessfully requested that Servetus be beheaded rather than burned at the stake.
In England, John Foxe, John Hales, Richard Perrinchief, Herbert Thorndike and Jonas Proast saw only mild forms of persecution against the English Dissenters as legitimate. Most dissenters disagreed with the Anglican Church only on secondary matters of worship and ecclesiology, and although this was considered a serious sin, only a few seventeenth-century Anglican writers thought that this 'crime' deserved the death penalty. The English Act of Supremacy significantly complicated the matter by securely welding Church and state.
The Elizabethan bishop Thomas Bilson was of the opinion that men ought to be "corrected, not murdered", but he did not condemn the Christian Emperors for executing the Manichaeans for "monstrous blasphemies". The Lutheran theologian Georgius Calixtus argued for the reconciliation of Christendom by removing all unimportant differences between Catholicism and Protestantism, and Rupertus Meldenius advocated in necessariis unitas, in dubiis libertas, in omnibus caritas (in necessary things unity; in uncertain things freedom; in everything compassion) in 1626.
The English Protestant "call for toleration"
In his book on the English Reformation, the late A. G. Dickens argued that from the beginning of the Reformation there had "existed in Protestant thought in Zwingli, Melanchthon and Bucer, as well as among the Anabaptists a more liberal tradition, which John Frith was perhaps the first to echo in England". Condemned for heresy, Frith was burnt at the stake in 1533. In his own mind, he died not because of the denial of the doctrines on purgatory and transubstantiation but "for the principle that a particular doctrine on either point was not a necessary part of a Christian's faith". In other words, there was an important distinction to be made between a genuine article of faith and other matters where a variety of very different conclusions should be tolerated within the church. This stand against unreasonable and profligate dogmatism meant that Frith, "to a greater extent than any other of our early Protestants", upheld "a certain degree of religious freedom".
Frith was not alone. John Foxe, for example, "strove hard to save Anabaptists from the fire, and he enunciated a sweeping doctrine of tolerance even towards Catholics, whose doctrines he detested with every fibre of his being".
In the early seventeenth century, Thomas Helwys was a principal formulator of that distinctively Baptist request: that the church and the state be kept separate in matters of law, so that individuals might have a freedom of religious conscience. Helwys said the King "is a mortal man, and not God, therefore he hath no power over the mortal soul of his subjects to make laws and ordinances for them and to set spiritual Lords over them". King James I had Helwys thrown into Newgate prison, where he had died by 1616 at about the age of forty.
By the time of the English Revolution, Helwys' stance on religious toleration was more commonplace. While accepting their zeal in desiring a "godly society", some contemporary historians doubt whether the English Puritans during the English Revolution were as committed to religious liberty and pluralism as traditional histories have suggested. However, historian John Coffey's recent work emphasizes the contribution of a minority of radical Protestants who steadfastly sought toleration for heresy, blasphemy, Catholicism, non-Christian religions, and even atheism. This minority included the Seekers, as well as the General Baptists and the Levellers. Together, these groups demanded that the church be an entirely voluntary, non-coercive community able to evangelize in a pluralistic society governed by a purely civil state.
In 1644 the "Augustinian consensus concerning persecution was irreparably fractured." This year can be identified quite exactly, because 1644 saw the publication of John Milton's Areopagitica, William Walwyn's The Compassionate Samaritane, Henry Robinson's Liberty of Conscience and Roger Williams' The Bloudy Tenent of Persecution. These authors were Puritans or had dissented from the Church of England, and their radical Protestantism led them to condemn religious persecution, which they saw as a popish corruption of primitive Christianity. Other non-Anglican writers advocating toleration were Richard Overton, John Wildman and John Goodwin, the Baptists Samuel Richardson and Thomas Collier and the Quakers Samuel Fisher and William Penn. Anglicans who argued against persecution were: John Locke, Anthony Ashley-Cooper, 1st Earl of Shaftesbury, James Harrington, Jeremy Taylor, Henry More, John Tillotson and Gilbert Burnet.
All of these individuals considered themselves Christians or were actual churchmen. John Milton and John Locke are the predecessors of modern liberalism. Although Milton was a Puritan and Locke an Anglican, Areopagitica and A Letter concerning Toleration are canonical liberal texts. Only from the 1690s onwards did the philosophy of Deism emerge, and with it a third group that advocated religious toleration. But, unlike the radical Protestants and the Anglicans, the deists also rejected biblical authority; this group prominently includes Voltaire, Frederick II of Prussia, Joseph II, Holy Roman Emperor, Thomas Jefferson and the English-Irish philosopher John Toland. When Toland published the writings of Milton, Edmund Ludlow and Algernon Sidney, he tried to downplay the Puritan divinity in these works.
In 1781, the Holy Roman Emperor, Joseph II, issued the Patent of Toleration which guaranteed the practice of religion by the Evangelical Lutheran and the Reformed Church in Austria. For the first time after the Counter-Reformation, the political and legal process of religious equality officially began.
Following the debates that started in the 1640s, the Church of England was the first Christian church to grant adherents of other Christian denominations freedom of worship with the Act of Toleration 1689, which nevertheless still retained some forms of religious discrimination and did not include toleration for Catholics. Even today, only individuals who are members of the Church of England at the time of the succession may become the British monarch.
Witches (1450–1750)
The Renaissance, the Reformation and the witch hunts occurred in the same centuries. Stuart Clark indicates that this is no coincidence: these different aspects of a single age are representative of a world in the process of revolutionizing its way of thinking and understanding. Clark says that understanding one aspect of the age, such as the witch hunts, can lead to a greater understanding of another, such as the development of tolerance.
Until the 1300s, the official position of the Roman Catholic Church was that witches did not exist. In medieval canon law, Christian thought on this subject is represented by a passage called the Canon Episcopi. Alan Charles Kors explains that the Canon is skeptical that witches exist while still allowing the existence of demons and the devil. By the mid-fifteenth century, popular conceptions of witches changed dramatically, and Christian thought denying witches and witchcraft was being challenged by the Dominicans and being debated within the church. While historians have been unable to pinpoint a single cause of what became known as the "witch frenzy", all have acknowledged that a new but common stream of thought developed in society, as well as in some parts of the church, that witches were both real and malevolent.
Scholarly views on what caused this change fall into three categories: those who say the learned in the church spread it, those who say popular tradition did so, and those who say witchcraft was actually being practiced. Of these three possibilities, Ankarloo and Clark indicate the main pressure to prosecute witches came from the common people, and trials were mostly civil trials. Everywhere in Europe, the higher up in either the ecclesiastical or the secular court system a case went, the more reluctance and reservations there were, with most cases ending up dismissed. In regions that were the most centralized, appellate jurisdictions acted in a restraining capacity, but areas of weak regimes, lacking strong legal or political control, were a disaster for witches. Witch trials were more prevalent in regions where the Catholic church was weakest (Germany, Switzerland and France), while in areas with a strong church presence (Spain, Poland and Eastern Europe) the witch craze was negligible.
Eventually, Christian thought solidified behind Cautio Criminalis (Precautions for Prosecutors), written by Friedrich Spee in 1631. As a Jesuit priest, Spee personally witnessed witch trials in Westphalia. Driven by his priestly charge of enacting Christian charity, he described the inhumane torture of the rack with the graphic language of the truly horrified, saying "it makes my blood boil." As a professor, Spee sought to expose the flawed arguments and methods used by the Dominican witch-hunters, along with any authority who allowed them, including the emperor. His primary methods for doing so were sarcasm, ridicule and piercing logic. The moral impression of his book was great, and it brought about the abolition of witch trials in a number of places and their gradual decline in others. Witch trials became scant in the second half of the seventeenth century and eventually simply subsided. No one can explain definitively why they ended, any more than why they began.
Modern era
Roman Catholic policy
In 1892, Pope Leo XIII (1810–1903) confirmed Aquinas' view of tolerance as a necessary aspect of governing well in Acta Leonis XIII 205.
On 7 December 1965 the Catholic Church's Vatican II council issued the decree "Dignitatis humanae" which dealt with the rights of the person and communities to social and civil liberty in religious matters. The Vatican II document Nostra Aetate absolved the Jewish people of any charge of deicide and affirmed that God has always remained faithful to his covenant with Israel.
In 1987, Pope John Paul II appealed to the world to recognize religious freedom as a fundamental human right. He was quoted by the Los Angeles Times as saying: "Religious freedom, an essential requirement of the dignity of every person, is a cornerstone of the structure of human rights, and for this reason, an irreplaceable factor in the good of individuals and of the whole of society as well as of the personal fulfillment of each individual." On 12 March 2000, he prayed for forgiveness because "Christians have often denied the Gospel; yielding to a mentality of power, they have violated the rights of ethnic groups and peoples, and shown contempt for their cultures and religious traditions."
Protestant Christian thought
After World War II and the Holocaust, many Protestant theologians began to reassess Christian theology's negative attitudes towards the Jews, and as a result, felt compelled to reject the doctrine of supersessionism. Numerous leading Christian thinkers continue to find "keys to truth" in ancient writings such as Augustine's Confessions, and Aquinas' Summa. Modern discussions of the Kingdom of God are still influenced by the nineteenth century view of the eschatological Jesus.
Colin Gunton and Richard Swinburne use traditional motifs to creatively reinterpret atonement theories in ways that do not rely on beliefs rejected by most contemporary Christians, such as demonology or the belief in witches. They do not employ the morally objectionable notion of a transfer of liability, yet they still effectively convey their belief that Christ's death is more than just a moral example.
Today's debates over inclusivity reach to the heart of what it means to be a Christian both theologically and practically. Bruce L. McCormack says that is why Karl Barth's theology of neo-orthodoxy remains popular even in the "post-modern" twenty-first century. Though Barth advocates the exclusive Christ-centered discipleship of orthodoxy, his view is also inherently inclusive, since, in his view, every human is among those God has set apart for that discipleship.
Contemporary global persecution and sociology
"The exceptional character of persecution in the Latin west since the twelfth century has lain not in the scale or savagery of particular persecutions, ... but in its capacity for sustained long-term growth. The patterns, procedures and rhetoric of persecution, which were established in the twelfth century, have given it the power of infinite and indefinite self-generation and self-renewal."
Tolerance, as a value, has grown out of humanity's experiences with social conflict and persecution and is part of the legacy garnered from them. But there are also ideals similar to the modern concept of tolerance throughout the history of Christian thought (and of philosophy and other religious thought) that can be seen as the long and somewhat torturous "prehistory" of tolerance. The Peace of Westphalia in 1648 included the first statement of freedom of religion in modern history. In the twenty-first century, nearly all societies include religious freedom in their constitutions or other national proclamations in support of human rights. However, at a 2014 symposium on law and religion, Michelle Mack said: "Despite what appears to be near-universal expression of commitment to religious human rights, ... violations of freedom of religion and belief, including acts of severe persecution, occur with fearful frequency." In 1981, Israeli scholar Yoram Dinstein wrote that freedom of religion is "the most persistently violated human right in the annals of the species". In 2018, the U.S. Department of State, which releases annual reports documenting the restrictions imposed on religious freedom around the world, detailed, country by country, violations of religious freedom taking place in approximately 75% of the world's 195 countries.
R.I. Moore says that persecution during the Middle Ages "provides a striking illustration of classic deviance theory as it was propounded by the father of sociology, Emile Durkheim". Strong social-group identities, with attitudes of group loyalty, solidarity, and highly perceived benefits of belonging, make it likely that an individual or a group will become intolerant when identity is threatened. This indicates intolerance is more of a social process, tied to social identity, than an ideological one.
Contemporary persecution is often part of a larger conflict involving emerging states, and established states in the process of redefining their national identity. For example, Christianity in Iraq dates from the Apostolic era in what was then Persia; the U.S. Department of State identified 1.4 million Christians in Iraq in 1991 when the Gulf War began. By 2010, the number of Christians dropped to 700,000 and it is currently estimated there are between 200,000 and 450,000 Christians left in Iraq. During that period, actions against Christians included the burning and bombing of churches, the bombing of Christian owned businesses and homes, kidnapping, murder, demands for protection money, and anti-Christian rhetoric in the media with those responsible saying they wanted to rid the country of its Christians.
Serbia has been Christian since the Christianization of the Serbs by Clement of Ohrid and Saint Naum in the ninth century. Within a relatively peaceful Serbia, the province of Kosovo has long been a site of ethnic and religious tension. In the 1990s, it drew attention for frequent discrimination and acts of violence toward Albanians: 90 percent of Kosovo's Albanian population is Muslim. Eventually, Kosovo erupted in full-scale ethnic cleansing, resulting in armed intervention by NATO in 1999. Serbs attacked Albanian villages, killed and brutalized the inhabitants, burned down houses and forced the survivors to leave. By the end of 1998, approximately 3,000 Muslim Albanians had been killed and more than 300,000 expelled. By the end of the "action", around 800,000 of the roughly two million Albanians had fled.
See also
Criticism of Christianity
History of Christianity
History of Christian theology
Christianity in the 4th century
Christianity and violence
Role of Christianity in civilization
Violence against Christians in India
Notes
References
Further reading
Chris Beneke (2006): Beyond Toleration: The Religious Origins of American Pluralism, Oxford University Press
Alexandra Walsham (2006): Charitable Hatred: Tolerance and Intolerance in England, 1500–1700, Manchester University Press
| 0.778671 | 0.972731 | 0.757438 |
Sex differences in psychology
Sex differences in psychology are differences in the mental functions and behaviors of the sexes and are due to a complex interplay of biological, developmental, and cultural factors. Differences have been found in a variety of fields such as mental health, cognitive abilities, personality, emotion, sexuality, friendship, and tendency towards aggression. Such variation may be innate, learned, or both. Modern research attempts to distinguish between these causes and to analyze any ethical concerns raised. Since behavior is a result of interactions between nature and nurture, researchers are interested in investigating how biology and environment interact to produce such differences, although this is often not possible.
A number of factors combine to influence the development of sex differences, including genetics and epigenetics, differences in brain structure and function, hormones, and socialization.
The formation of gender is controversial in many scientific fields, including psychology. Specifically, researchers and theorists take different perspectives on how much of gender is due to biological, neurochemical, and evolutionary factors (nature), or is the result of culture and socialization (nurture). This is known as the nature versus nurture debate.
Definition
Psychological sex differences refer to emotional, motivational, or cognitive differences between the sexes. Examples include greater male tendencies toward violence, or greater female empathy.
The terms "sex differences" and "gender differences" are sometimes used interchangeably; they can refer to differences in male and female behaviors as either biological ("sex differences") or environmental/cultural ("gender differences"). This distinction is often difficult to make, due to challenges in determining whether a difference is biological or environmental/cultural. It's important to note, however, that many individuals will use "sex" to refer to the biological and "gender" as a social construct.
Gender is generally conceived as a set of characteristics or traits that are associated with a certain biological sex (male or female). The characteristics that generally define gender are referred to as masculine or feminine. In some cultures, gender is not conceived as binary, or as strictly linked to biological sex. As a result, some cultures recognize third, fourth, or further genders.
History
Beliefs about sex differences have likely existed throughout history.
In his 1859 book On the Origin of Species, Charles Darwin proposed that, like physical traits, psychological traits evolve through the process of sexual selection.
Two of his later books, The Descent of Man, and Selection in Relation to Sex (1871) and The Expression of the Emotions in Man and Animals (1872), explore the subject of psychological differences between the sexes. The Descent of Man and Selection in Relation to Sex includes 70 pages on sexual selection in human evolution, some of which concern psychological traits.
The study of gender took off in the 1970s. During this period, academic works were published reflecting researchers' changing views of gender studies. Some of these works were textbooks, an important means by which information in the new field was compiled and made sense of. In 1978 Women and Sex Roles: A Social Psychological Perspective was published, one of the first textbooks on the psychology of women and sex roles. Another textbook, Gender and Communication, was the first devoted to its subject.
Other influential academic works focused on the development of gender. In 1966, The Development of Sex Differences was published by Eleanor E. Maccoby. The book examined the factors that influence a child's gender development, with contributors discussing the effects of hormones, social learning, and cognitive development in separate chapters. Man and Woman, Boy and Girl, by John Money, was published in 1972, reporting findings of research done with intersex subjects. The book proposed that the social environment a child grows up in is more important in determining gender than the genetic factors he or she inherits. The majority of Money's theories regarding the importance of socialization in the determination of gender have come under intense criticism, especially in connection with the inaccurate reporting of success in the infant sex reassignment of David Reimer.
In 1974, Eleanor Maccoby and Carol Jacklin published The Psychology of Sex Differences, which argued that men and women behave more similarly than had previously been supposed. The authors also proposed that children have considerable power over which gender role they grow into, whether by choosing which parent to imitate or through activities such as playing with action figures or dolls. These works added new knowledge to the field of gender psychology.
Psychological traits
Personality traits
Cross-cultural research has shown population-level gender differences on tests measuring sociability and emotionality. For example, on scales measuring the Big Five personality traits, women consistently report higher neuroticism, agreeableness, warmth and openness to feelings, while men often report higher assertiveness and openness to ideas. Nevertheless, there is significant overlap between the sexes on all these traits, so an individual woman may, for example, have lower neuroticism than the majority of men. The size of the differences varies between cultures.
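The practical meaning of such overlap can be illustrated with a short calculation. The sketch below is not drawn from the studies cited here; it simply assumes a hypothetical standardized mean difference (Cohen's d = 0.4, an invented illustrative value) between two normally distributed groups with equal variance and computes how much the distributions overlap and how often an individual comparison runs against the group-level trend.

from scipy.stats import norm

d = 0.4  # assumed standardized mean difference (hypothetical, not from the text)

# Overlap coefficient: shared area under two unit-variance normal curves
# whose means are d standard deviations apart.
overlap = 2 * norm.cdf(-d / 2)

# Probability that a randomly chosen member of the lower-scoring group
# nevertheless scores higher than a randomly chosen member of the
# higher-scoring group.
p_reversal = norm.cdf(-d / 2 ** 0.5)

print(f"distribution overlap: {overlap:.0%}")      # about 84%
print(f"individual 'reversals': {p_reversal:.0%}")  # about 39%

Under this assumed effect size, roughly four individual comparisons in ten go against the group-level trend, which is what the overlap claim in the paragraph above expresses.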
Across cultures, gender differences in personality traits are largest in prosperous, healthy, and egalitarian cultures in which women have opportunities more nearly equal to those of men. However, variation in the magnitude of sex differences between more and less developed world regions was due to differences among men, not women, in these regions. That is, men in highly developed world regions were less neurotic, extroverted, conscientious and agreeable than men in less developed world regions, whereas women tended not to differ significantly in personality traits across regions.
A personality trait directly linked to emotion and empathy in which gender differences exist (see below) is Machiavellianism, measured by the Machiavellianism scale. Individuals who score high on this dimension are emotionally cool; this allows them to detach from others as well as from values, and to act egoistically rather than being driven by affect, empathy or morality. In large samples of US college students, males are on average more Machiavellian than females; in particular, males are over-represented among very high Machiavellians, while females are over-represented among low Machiavellians. A 2014 meta-analysis by researchers Rebecca Friesdorf and Paul Conway found that men score significantly higher on narcissism than women, a finding that is robust across the past literature. The meta-analysis included 355 studies measuring narcissism across participants from the US, Germany, China, the Netherlands, Italy, the UK, Hong Kong, Singapore, Switzerland, Norway, Sweden, Australia and Belgium, as well as measuring latent factors from 124 additional studies. The researchers noted that gender differences in narcissism are not just a measurement artifact but also represent true differences in latent personality traits, such as men's heightened sense of entitlement and authority.
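As a rough illustration of how a meta-analysis of this kind pools results, the sketch below applies standard inverse-variance (fixed-effect) weighting to a handful of invented per-study effect sizes; the numbers are hypothetical and are not taken from the analysis described above.

import numpy as np

effect_sizes = np.array([0.30, 0.22, 0.41, 0.18])      # hypothetical per-study Cohen's d
variances    = np.array([0.020, 0.045, 0.035, 0.050])  # hypothetical sampling variances

weights = 1.0 / variances                   # more precise studies count more
pooled  = np.sum(weights * effect_sizes) / np.sum(weights)
se      = np.sqrt(1.0 / np.sum(weights))    # standard error of the pooled estimate

print(f"pooled d = {pooled:.2f}, "
      f"95% CI = [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")

A real analysis of several hundred studies would typically use a random-effects model to allow true effects to vary across samples, but the weighting principle is the same.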
Males on average are more assertive and have higher self-esteem. Females were on average higher than males in extraversion, anxiety, trust, and, especially, tender-mindedness (e.g., nurturance).
When interests were classified using Holland Codes (RIASEC types: Realistic, Investigative, Artistic, Social, Enterprising, Conventional), men were found to prefer working with things, while women preferred working with people. Men also showed stronger Realistic and Investigative interests, and women showed stronger Artistic, Social, and Conventional interests. Sex differences favoring men were also found for more specific measures of engineering, science, and mathematics interests.
Emotion
When measured with an affect intensity measure, women reported greater intensity of both positive and negative affect than men. Women also reported more intense and more frequent experiences of affect, joy, and love, but also more embarrassment, guilt, shame, sadness, anger, fear, and distress. Experiencing pride was more frequent and intense for men than for women. In imagined frightening situations, such as being home alone and witnessing a stranger walking towards the house, women reported greater fear. Women also reported more fear in situations that involved "a male's hostile and aggressive behavior". Emotional contagion refers to the phenomenon of a person's emotions becoming similar to those of surrounding people; women have been reported to be more responsive to it. One study of anger found that men had stronger emotional experiences while women had stronger emotional expressivity, and an earlier study likewise reported that men had a higher physiological response to stimuli intended to induce anger. Because emotional experience and expressivity are two different things, another study examined both and found that "the emotional responses elicited by emotional videos were inconsistent between emotional experience and emotional expressivity. Men had stronger emotional experiences, whereas women had stronger emotional expressivity", where emotional experience refers to the physiological arousal produced by an external stimulus and emotional expressivity to the "external expression of subjective experience."
There are documented differences in socialization that could contribute to sex differences in emotion and to differences in patterns of brain activity.
Context also shapes men's and women's emotional behavior. Context-based emotion norms, such as feeling rules or display rules, "prescribe emotional experience and expressions in specific situations like a wedding or a funeral" and may be independent of the person's gender. In situations like a wedding or a funeral, the activated emotion norms apply to and constrain every person in the situation. Gender differences are more pronounced when situational demands are very small or non-existent, as well as in ambiguous situations. During these situations, gender norms "are the default option that prescribes emotional behavior".
Professor of Psychology Ann Kring said, "It is incorrect to make a blanket statement that women are more emotional than men, it is correct to say that women show their emotions more than men." In two studies by Kring, women were found to be more facially expressive than men when it came to both positive and negative emotions. These researchers concluded that women and men experience the same amount of emotion, but that women are more likely to express their emotions.
Women are known to have anatomically differently shaped tear glands than men, as well as higher adult levels of the hormone prolactin, which is present in tear glands. While girls and boys cry roughly the same amount at age 12, by age 18 women generally cry four times more than men, which could be explained by higher levels of prolactin.
Empathy
Current literature finds that women demonstrate more empathy across studies. Women perform better than men in tests involving emotional interpretation, such as understanding facial expressions, and empathy.
Some studies argue that this is related to the subject's perceived gender identity and gendered expectations influencing the subject's implicit gender stereotypes. Additionally, culture impacts gender differences in the expression of emotions. This may be explained by the different social roles women and men have in different cultures, and by the status and power men and women hold in different societies, as well as the different cultural values various societies hold. Some studies have found no differences in empathy between women and men, and suggest that perceived gender differences are the result of motivational differences. Some researchers argue that because differences in empathy disappear on tests where it is not clear that empathy is being studied, men and women do not differ in ability, but instead in how empathetic they would like to appear to themselves and others.
Women are better at recognizing facial affect, expression processing, and emotions in general, while men are better at recognizing specific behaviour, including anger, aggression, and threatening cues. Small but statistically significant sex differences favor females on the "Reading of the mind" test, an ability measure of theory of mind or cognitive empathy. Overall, females have an advantage in non-verbal emotional recognition.
There are some sex differences in empathy from birth, which remain consistent and stable across the lifespan. Females were found to have higher empathy than males, while children with higher empathy, regardless of gender, continue to be higher in empathy throughout development. Further analysis using brain measures such as event-related potentials (ERPs) found that females who saw human suffering had higher ERP waveforms than males. Another investigation using similar measures found higher N400 amplitudes in females in response to social situations, which positively correlated with self-reported empathy. Structural MRI studies found that females have larger grey matter volumes in the posterior inferior frontal and anterior inferior parietal cortex, areas that are correlated with mirror neurons in the fMRI literature. Females were also found to have a stronger link between emotional and cognitive empathy. The researchers found that the stability of these sex differences across development is unlikely to be explained by environmental influences and might instead have some roots in human evolution and inheritance.
An evolutionary explanation for the difference is that understanding and tracking relationships and reading others' emotional states was particularly important for women in prehistoric societies for tasks such as caring for children and social networking. Throughout prehistory, females nurtured and were the primary caretakers of children, so this might have led to an evolved neurological adaptation for women to be more aware of and responsive to non-verbal expressions. According to the Primary Caretaker Hypothesis, prehistoric males did not face the same selective pressure as primary caretakers, which might explain modern-day sex differences in emotion recognition and empathy.
Aggression
Although research on sex differences in aggression shows that males are generally more likely to display aggression than females, how much of this is due to social factors and gender expectations is unclear. Aggression is closely linked with cultural definitions of "masculine" and "feminine". In some situations, women show equal or more aggression than men, although it is less physical; for example, women are more likely to use direct aggression in private, where other people cannot see them, and are more likely to use indirect aggression in public. Men are more likely than women to be the targets of displays of aggression and provocation. Studies by Bettencourt and Miller show that when provocation is controlled for, sex differences in aggression are greatly reduced. They argue that this shows that gender-role norms play a large part in the differences in aggressive behavior between men and women.
Sex differences in aggression are one of the most robust and oldest findings in psychology. Males regardless of age engage in more physical and verbal aggression, while there is a small effect for females engaging in more indirect aggression, such as rumor spreading or gossiping. Males tend to engage in more unprovoked aggression, and at higher frequency, than females. This greater male aggression is also present in childhood and adolescence. The difference is greater for physical aggression than for verbal aggression. Males are more likely to cyberbully than females. Females reported more cyberbullying behaviour during mid-adolescence, while males showed more cyberbullying behaviour in late adolescence.
In humans, males engage in crime, and especially violent crime, more than females. The relationship between testosterone and aggression is highly debated in the scientific community, and evidence for a causal link between the two has resulted in conflicting conclusions. Some studies indicate that testosterone levels may be affected by environmental and social influences. In the biological paradigm, the relationship between testosterone and the brain is studied primarily through two approaches: lumbar puncture, which is mostly used to clinically diagnose disorders of the nervous system and is usually not performed for research purposes, and blood sampling, which is in widespread use across scientific academia. Most research papers rely on blood sampling to estimate active testosterone levels in behavior-related brain regions while the androgen is being administered, or to observe testosterone increases in (mostly) men during physical activity. Involvement in crime usually rises in the early to mid teens, which happens at the same time as testosterone levels rise. Most studies support a link between adult criminality and testosterone, although the relationship is modest if examined separately for each sex. However, nearly all studies of juvenile delinquency and testosterone are not significant. Most studies have also found testosterone to be associated with behaviors or personality traits linked with criminality, such as antisocial behavior and alcoholism.
In species that have high levels of male physical competition and aggression over females, males tend to be larger and stronger than females. Humans have modest general body sexual dimorphism on characteristics such as height and body mass. However, this may understate the sexual dimorphism regarding characteristics related to aggression, since females have large fat stores. The sex differences are greater for muscle mass and especially for upper-body muscle mass. Men's skeletons, especially the vulnerable facial bones, are more robust. Another possible explanation for this sexual dimorphism, instead of intra-species aggression, is that it is an adaptation for a sexual division of labor, with males doing the hunting. However, the hunting theory may have difficulty explaining differences in features such as the stronger protective skeleton, beards (not helpful in hunting, but they increase the perceived size of the jaws and perceived dominance, which may be helpful in intra-species male competition), and greater male ability at interception (greater targeting ability, by contrast, can be explained by hunting).
Ethics and morality
Research on sex differences in moral orientation finds that women tend towards a more care-based morality while men tend towards a more justice-based morality. This is usually based on the finding that men show slightly more utilitarian reasoning while women show more deontological reasoning, largely because of greater female affective response to, and rejection of, harm-based behaviours (based on dual process theory). Women tend to have greater moral sensitivity than men. Using the five moral principles of care, fairness, loyalty, authority, and purity (based on moral foundations theory), women consistently score higher on care, fairness, and purity across 67 cultures. On the other hand, sex differences in loyalty and authority were small in size and highly variable across cultural contexts. Examining country-level sex differences in all moral foundations in relation to cultural, socioeconomic, and gender-related indicators reveals that global sex differences in moral foundations are larger in individualistic, Western, and gender-equal cultures.
Cognitive traits
Sex-related differences in cognitive functioning are investigated in research on perception, attention, reasoning, thinking, problem solving, memory, learning, language and emotion. Cognitive testing on the sexes typically involves written, time-limited tests, the most common form being a standardized test such as the SAT or ACT. These tests measure basic individual abilities rather than the complex combination of abilities needed to solve real-life problems. Analysis of the research has found a lack of credibility when relying on published studies about cognition, because most contain findings of cognitive differences between males and females but overlook those that do not show any differences, creating a pool of biased information. The differences that are found are attributed to both social and biological factors.
It was once thought that sex differences in cognitive tasks and problem solving did not occur until puberty. However, as of 2000, evidence suggested that cognitive and skill differences are present earlier in development. For example, researchers have found that three- and four-year-old boys were better at targeting and at mentally rotating figures within a clock face than girls of the same age. Prepubescent girls, however, excelled at recalling lists of words. These sex differences in cognition correspond to patterns of ability rather than overall intelligence. Laboratory settings are used to systematically study the sexual dimorphism in problem-solving tasks performed by adults.
On average, females excel relative to males on tests that measure recollection. They have an advantage on processing speed involving letters, digits and rapid naming tasks. Females tend to have better object location memory and verbal memory. They also perform better at verbal learning. Females have better performance at matching items and precision tasks, such as placing pegs into designated holes. In maze and path completion tasks, males learn the goal route in fewer trials than females, but females remember more of the landmarks presented. This suggests that females use landmarks in everyday situations to orient themselves more than males. Females were better at remembering whether objects had switched places or not.
On average, males excel relative to females at certain spatial tasks. Specifically, males have an advantage in tests that require the mental rotation or manipulation of an object. In a computer simulation of a maze task, males completed the task faster and with fewer errors than their female counterparts. Additionally, males have displayed higher accuracy in tests of targeted motor skills, such as guiding projectiles. Males are also faster on reaction time and finger tapping tests.
Doreen Kimura, a psychobiologist, has published books and articles specifically on the subject of sex and cognition. In studying gender differences in cognition, Kimura has supported generalizations made from research data collected in the field of cognitive psychology. These findings have not been generalized cross-culturally. Females have been shown to have a higher ability to read facial and body cues than their male counterparts. Though studies have found females to have more advanced verbal skills, men and women in adulthood do not differ in vocabulary size. Women tend to have better spelling capabilities and verbal memory.
Intelligence
An article published in the Review of Educational Research summarizes the history of the controversy around sex differences in variability of intelligence. Through modern research, the main idea has held that males have a much wider range in IQ test performance. The study also analyzes data concerning differences in central tendencies through environmental and biological theories. Males were found to have much wider variation than females in quantitative reasoning, spatial visualization, spelling, and general knowledge. The study concludes that, to form an accurate summary, both the variability of sex differences and the central tendencies must be examined to generalize the cognitive variances of males and females.
Empirical studies of g, or general intelligence, in men and women have given inconsistent results, showing either no differences or advantages for either sex. The differences in average IQ between women and men are small in magnitude and inconsistent in direction.
Many studies have examined this issue. Scientists have found that the belief that the sexes differ in intelligence remains prominent in many cultures. Databases such as ProQuest Central, PsycINFO, and Web of Science were searched for more information on this topic, yielding a total of 71 studies that show a variety of gender inequalities across the world.
According to the 1995 report Intelligence: Knowns and Unknowns by the American Psychological Association, "Most standard tests of intelligence have been constructed so that there are no overall score differences between females and males." Arthur Jensen in 1998 conducted studies on sex differences in intelligence through tests that were "loaded heavily on g" but were not normalized to eliminate sex differences. He concluded, "No evidence was found for sex differences in the mean level of g. Males, on average, excel on some factors; females on others". Jensen's conclusion that no overall sex difference exists for g has been strengthened by researchers who assessed the issue with a battery of 42 mental ability tests and found no overall sex difference.
Although most of the tests showed no difference, some did. For example, some tests found that females performed better on verbal abilities while males performed better on visuospatial abilities. One female advantage is in verbal fluency, where they have been found to perform better in vocabulary, reading comprehension, speech production and essay writing. Males have been specifically found to perform better on spatial visualization, spatial perception, and mental rotation. Researchers have therefore recommended that general models such as fluid and crystallized intelligence be divided into verbal, perceptual and visuospatial domains of g, because when this model is applied, females excel at verbal and perceptual tasks while males excel at visuospatial tasks.
There are, however, also differences in the capacity of males and females to perform certain tasks, such as rotation of objects in space, often categorized as spatial ability. These differences are more pronounced when people are exposed to a stereotype threat to their gender, which can be as subtle as being asked for their gender before being tested. Differences in mental rotation have also been seen to correlate with computer experience and video game practice, with as little as 10 hours of video game training reducing the disparity. Other traditional male advantages, such as in the field of mathematics, are less clear; again, differences may be caused by stereotype threats to women, and several recent studies show no difference whatsoever. In some regions, especially in Arab countries, observed sex differences in math ability favor girls and women, and in gender-equal countries the traditional difference is eliminated, highlighting the importance of societal influences. Although females show lower performance in spatial abilities on average, they perform better in processing speed involving letters, digits and rapid naming tasks, object location memory, verbal memory, and verbal learning.
Memory
The results from research on sex differences in memory are mixed and inconsistent: some studies show no difference, and others show a female or male advantage. Females tend to perform better in episodic memory tasks, access their memories faster than males, and use more emotional terms when describing memories. Females also outperform men in random word recall, semantic memory and autobiographical memory. Men are more likely to get the gist of events rather than be aware of specific details. Men also recall factual information, such as childhood memories, better than females, and have better spatially based memories. Men use strategies involving mental spatial maps and are better at knowing absolute directions, like north and south, whereas women use landmarks and directional cues for spatial navigation. Also, estradiol, a hormone found in women, affects learning and memory: it maintains cognitive function by increasing nervous tissue growth in the brain, which helps maintain memory. Though women experience brain fog when they go through menopause, it has been attributed to stress and processes in frontal neural networks instead.
Cognitive control of behavior
A 2011 meta-analysis found that women have small, but persistent, advantages in punishment sensitivity and effortful control across cultures. A 2014 review found that in humans, women discount delayed rewards more steeply than men, but sex differences on measures of impulsive action depend on the tasks and subject samples used.
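As a rough illustration of what "discounting more steeply" means, the following minimal Python sketch applies the hyperbolic delay-discounting form V = A / (1 + kD) commonly used in this literature; the discount rates shown are hypothetical and are not taken from the cited review.

def discounted_value(amount, delay_days, k):
    # Subjective present value of a reward available after a delay (hyperbolic form).
    return amount / (1 + k * delay_days)

# Illustrative comparison: a steeper discounter (higher k) values the same
# delayed reward less than a shallower discounter (lower k).
for label, k in [("shallow discounter, k=0.01", 0.01), ("steep discounter, k=0.05", 0.05)]:
    print(label, round(discounted_value(100.0, 30.0, k), 2))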
Behavior
Childhood play
The differences between males and females in the context of childhood play are linked to differences in gender roles. Research on the "acquisition of fundamental movement skills" found that even though the level of mastery of certain skills was about the same for both boys and girls, after a certain age boys have better object control skills than girls do.
Some of the gender differences in childhood play are suggested to be biological. A study by Alexander, Wilcox, and Woods concluded that toy preferences are innate, because the infants in the study visually discriminated between dolls and trucks: the girls preferred the dolls over the trucks, whereas the boys preferred the trucks.
Hines and Kaufman hypothesized that girls with Congenital Adrenal Hyperplasia, who are exposed to high androgen levels prenatally, might be more physically forceful and rougher in play, as boys are observed to be. The results of Hines and Kaufman's research led them to conclude that androgen did not cause girls with Congenital Adrenal Hyperplasia to be rougher than unaffected girls during play. The study suggested that socialization also influenced the type of play children participated in.
Sexual behavior
Psychological theories exist regarding the development and expression of gender differences in human sexuality. A number of these theories are consistent in predicting that men should be more approving of casual sex (sex happening outside a stable, committed relationship such as marriage) and should also be more promiscuous (have a higher number of sexual partners) than women.
A sociobiological approach applies evolutionary biology to human sexuality, emphasizing reproductive success in shaping patterns of sexual behavior. According to sociobiologists, since women's parental investment in reproduction is greater than men's, owing to human sperm being much more plentiful than eggs, and the fact that women must devote considerable energy to gestating their offspring, women will tend to be much more selective in their choice of mates than men. It may not be possible to accurately test sociobiological theories in relation to promiscuity and casual sex in contemporary (U.S.) society, which is quite different from the ancestral human societies in which most natural selection for sexual traits has occurred.
Neoanalytic theories are based on the observation that mothers, as opposed to fathers, bear the major responsibility for childcare in most families and cultures; both male and female infants, therefore, form an intense emotional attachment to their mother, a woman. According to feminist psychoanalytic theorist Nancy Chodorow, girls tend to preserve this attachment throughout life and define their identities in relational terms, whereas boys must reject this maternal attachment in order to develop a masculine identity. In addition, this theory predicts that women's economic dependence on men in a male-dominated society will tend to cause women to approve of sex more in committed relationships providing economic security, and less so in casual relationships.
The sexual strategies theory by David Buss and David P. Schmitt is an evolutionary psychology theory regarding female and male short-term and long-term mating strategies which they argued are dependent on several different goals and vary depending on the environment.
According to social learning theory, sexuality is influenced by people's social environment. This theory suggests that sexual attitudes and behaviors are learned through observation of role models such as parents and media figures, as well as through positive or negative reinforcements for behaviors that match or defy established gender roles. It predicts that gender differences in sexuality can change over time as a function of changing social norms, and also that a societal double standard in punishing women more severely than men (who may in fact be rewarded) for engaging in promiscuous or casual sex will lead to significant gender differences in attitudes and behaviors regarding sexuality.
Such a societal double standard also figures in social role theory, which suggests that sexual attitudes and behaviors are shaped by the roles that men and women are expected to fill in society, and script theory, which focuses on the symbolic meaning of behaviors; this theory suggests that social conventions influence the meaning of specific acts, such as male sexuality being tied more to individual pleasure and macho stereotypes (therefore predicting a high number of casual sexual encounters) and female sexuality being tied more to the quality of a committed relationship.
The ovulatory shift hypothesis is the contested theory that female behaviour and preferences relating to mate selection change throughout the ovulation cycle. A meta-analysis of 58 studies concluded that there was no evidence to support this theory. Another meta-analysis found that the hypothesis was supported only with regard to short-term attraction. Additionally, a 2016 paper suggested that any possible changes in preferences during ovulation would be moderated by relationship quality itself, even to the point of inversion in favor of the female's current partner.
A recent study sought to test the connection between current fertility status and sociosexual attitudes and desires; the researchers concluded that their hypothesis was not supported, as they found no connection between women's fertility status and sociosexual desires or attitudes.
Mental health
Childhood conduct disorder and adult antisocial personality disorder as well as substance use disorders are more common in men. Many mood disorders, anxiety disorders, and eating disorders are more common in women. One explanation is that men tend to externalize stress while women tend to internalize it. Gender differences vary to some degree for different cultures.
Men and women do not differ on their overall rates of psychopathology; however, certain disorders are more prevalent in women, and vice versa. Women have higher rates of anxiety and depression (internalizing disorders) and men have higher rates of substance abuse and antisocial disorders (externalizing disorders). It is believed that divisions of power and the responsibilities set upon each sex are critical to this predisposition. Namely, women earn less money than men do, they tend to have jobs with less power and autonomy, and women are more responsive to problems of people in their social networks. These three differences can contribute to women's predisposition to anxiety and depression. It is suggested that socializing practices that encourage high self-regard and mastery would benefit the mental health of both women and men.
Anxiety and depression
One study interviewed 18,572 respondents, aged 18 and over, about 15 phobic symptoms. These symptoms would yield diagnoses based on criteria for agoraphobia, social phobia, and simple phobia. Women had significantly higher prevalence rates of agoraphobia and simple phobia; however, there were no differences found between men and women in social phobia. The most common phobias for both women and men involved spiders, bugs, mice, snakes, and heights. The biggest differences between men and women in these disorders were found on the agoraphobic symptoms of "going out of the house alone" and "being alone", and on two simple phobic symptoms, involving the fear of "any harmless or dangerous animal" and "storms", with relatively more women having both phobias. There were no differences in the age of onset, reporting a fear on the phobic level, telling a doctor about symptoms, or the recall of past symptoms.
Women are more likely than men to have depression. One 1987 study found little empirical support for several proposed explanations, including biological ones, and argued that, when depressed, women tend to ruminate, which may lower their mood further, while men tend to distract themselves with activities. This difference may develop from women and men being raised differently.
Suicide
Although females have more suicidal thoughts and attempts, and are diagnosed with depression more often than men, males are much more likely to die from suicide. Suicide among males occurs about four times more often than among females. Although females attempt suicide more often, men choose more violent methods, such as guns, while women are more likely to use methods such as drug overdose or poison. One proposed cause for these disparities is socialization: men are expected to be independent and are discouraged from showing weakness or emotion, while women are encouraged to share emotions and rely on support from others. Other suggested factors are societal expectations linking men's worth to their ability to provide and men's higher rate of alcoholism.
Schizophrenia
Women and men are equally likely to develop symptoms of schizophrenia, but the onset occurs earlier for men. It has been suggested that sexually dimorphic brain anatomy, the differential effects of estrogens and androgens, and the heavy exposure of male adolescents to alcohol and other toxic substances can lead to this earlier onset in men. Various neurodevelopmental theories suggest reasons for the earlier onset in men. One theory suggests that male fetal brains are more vulnerable to prenatal complications. Another argues that the gender differentiation in schizophrenia onset is due to excessive pruning of synaptic nerves during male adolescence. "The estrogen hypothesis" proposes that higher levels of estrogen in women have a protective effect against the prenatal and adolescent complications that may be associated with earlier schizophrenia onset in men. Estrogen can alter post-synaptic signal transduction and inhibit psychotic symptoms. Thus, as women experience lower levels of estrogen during menopause or the menstrual cycle, they can experience greater amounts of psychotic symptoms. In addition, estrogen treatment has yielded beneficial effects in patients with schizophrenia.
Autism Spectrum Disorder
The epidemiology of autism spectrum disorder (ASD) varies between males and females. Data are not available for every country, but a worldwide review of epidemiological surveys found a median of 62 out of 10,000 people with ASD. Among 8-year-olds in the United States, 1 in 44 children have been identified with autism spectrum disorder, but it is "4 times more common among males than females." According to research looking at the disparity between the actual prevalence of ASD and what gets diagnosed, there is a 2:1 ratio of males to females who are undiagnosed. This statistic suggests that females are at a disadvantage when it comes to being diagnosed and are underrepresented.
The "extreme male brain" or empathizing–systemizing theory views the autism spectrum as an extreme version of male-female differences regarding systemizing and empathizing abilities. It's used to explain the possible reason why males with ASD score higher on systemizing tests than females with ASD.
Symptom presentation in females with ASD is not as noticeable as it is in males. Females are better able to cope with the symptoms and often camouflage to be able to fit in socially and form relationships. Camouflaging has been suggested to be the cause of females with ASD having more emotional distress, while male counterparts usually had more external social problems.
The imprinted brain hypothesis argues that autism and psychosis are contrasting disorders on a number of different variables and that this is caused by an unbalanced genomic imprinting favoring paternal genes (autism) or maternal genes (psychosis). According to the Female Protective Effect Hypothesis, for females to develop autism they need to have acquired a wider range of genetic mutations than their male counterpart.
Possible causes
Both biological and social/environmental factors have been studied for their impact on sex differences. Separating biological from environmental effects is difficult, and advocates for biological influences generally accept that social factors are also important.
Biological
Biological differentiation is a fundamental part of human reproduction. Generally, males have two different sex chromosomes, an X and a Y; females have two X chromosomes. The Y chromosome, or more precisely the SRY gene located on it, is what generally determines sexual differentiation. If a Y chromosome with an SRY gene is present, growth proceeds along male lines, resulting in the production of testes, which in turn produce testosterone. In addition to physical effects, this prenatal testosterone increases the likelihood of certain "male" patterns of behavior after birth, though the exact impact and mechanism are not well understood. Parts of the SRY gene and specific parts of the Y chromosome may also possibly influence different gender behaviors, but if so, these impacts have not yet been identified.
Biological perspectives on psychological differentiation often draw parallels with the physical nature of sexual differentiation. These parallels include genetic and hormonal factors that create different individuals, with the main difference being the reproductive function. The brain controls the behavior of individuals, but it is influenced by genes, hormones and evolution. Evidence has shown that the ways that male and female children become adults are different, and that there are variations among the individuals of each sex.
Sex linkage
Certain psychological traits may be related to the chromosomal sex of the individual. In contrast, there are also "sex-influenced" (or sex-conditioned) traits, in which the same gene may produce different phenotypes depending on sex. For example, two siblings might share the same gene for aggressiveness, but one might be more docile than the other due to differences in sex. Even in a homozygous dominant or recessive female the condition may not be expressed fully. "Sex-limited" traits are characteristics expressed in only one sex. They may be caused by genes on either autosomal or sex chromosomes. Evidence exists that there are sex-linked differences between the male and female brain.
Epigenetics
Epigenetic changes have also been found to cause sex-based differentiation in the brain. The extent and nature of these differences are not fully characterised. Differences in socialization of males and females may decrease or increase the size of sex differences.
Neuroscience
A 2021 meta-synthesis of existing literature found that sex accounted for 1% of the brain's structure or laterality, finding large group-level differences only in total brain volume. This partially contradicts a review from 2006 and a meta-analysis from 2014 which found that some evidence from brain morphology and function studies indicates that male and female brains cannot always be assumed to be identical from either a structural or functional perspective, and some brain structures are sexually dimorphic.
Culture
Socialization
Differences in socialization of males and females are known to cause, decrease, or increase the magnitude of various sex differences.
In most cultures, humans are subject from infancy to gender socialization. For example, infant girls typically wear pink and infant boys typically wear blue. Gender schemas, or gendered cultural ideals which determine a person's preferences, are also instilled in behavior beginning in infancy.
As people get older, gender stereotypes become more strongly applied. Social role theory primarily deals with such stereotypes, more specifically the division of labor and a gender hierarchy. When this theory is applied in social settings, such as the workplace, it can often lead to sexism. The theory also applies to certain personality traits, such as the expectation that men are typically more assertive and women more passive. According to this theory, ideally, in most cultures, the woman is to stay and tend to the house and home while the man works to both better the house itself and increase finances.
Gender roles vary significantly by culture and time period. Such differences include political rights as well as employment and education opportunities solely available to females. Homosexual people are also subject to various societal expectations. Sexual inversion was one theory of homosexuality, positing that homosexuality was due to an innate reversal of gender traits.
Evolutionary product
Donald Symons has argued that fundamental sex differences in genetics, hormones and brain structure and function may manifest as distal cultural phenomena (e.g., males as primary combatants in warfare, the primarily female readership of romance novels, etc.). There has been significant feminist critique of these and other evolutionary psychology arguments, from both within and outside of the scientific community.
See also
Feminization (sociology)
Feminine psychology
Male warrior hypothesis
References
Sources
External links
List of full text books and articles on the topic of psychology of gender
Gender psychology
Moral psychology
Evolutionary mismatch
Evolutionary mismatch (also "mismatch theory" or "evolutionary trap") is the evolutionary biology concept that a previously advantageous trait may become maladaptive due to change in the environment, especially when change is rapid. This can take place in humans as well as in other animals.
Environmental change leading to evolutionary mismatch can be broken down into two major categories: temporal (change of the existing environment over time, e.g. a climate change) or spatial (placing organisms into a new environment, e.g. a population migrating). Since environmental change occurs naturally and constantly, there will certainly be examples of evolutionary mismatch over time. However, because large-scale natural environmental change – like a natural disaster – is often rare, it is less often observed. Another more prevalent kind of environmental change is anthropogenic (human-caused). In recent times, humans have had a large, rapid, and trackable impact on the environment, thus creating scenarios where it is easier to observe evolutionary mismatch.
Because of the mechanism of evolution by natural selection, the environment ("nature") determines ("selects") which traits will persist in a population. Therefore, there will be a gradual weeding out of disadvantageous traits over several generations as the population becomes more adapted to its environment. Any significant change in a population's traits that cannot be attributed to other factors (such as genetic drift and mutation) will be responsive to a change in that population's environment; in other words, natural selection is inherently reactive. Shortly following an environmental change, traits that evolved in the previous environment, whether they were advantageous or neutral, are persistent for several generations in the new environment. Because evolution is gradual and environmental changes often occur very quickly on a geological scale, there is always a period of "catching-up" as the population evolves to become adapted to the environment. It is this temporary period of "disequilibrium" that is referred to as mismatch. Mismatched traits are ultimately addressed in one of several possible ways: the organism may evolve such that the maladaptive trait is no longer expressed, the organism may decline and/or become extinct as a result of the disadvantageous trait, or the environment may change such that the trait is no longer selected against.
History
As evolutionary thought became more prevalent, scientists studied and attempted to explain the existence of disadvantageous traits, known as maladaptations, that are the basis of evolutionary mismatch.
The theory of evolutionary mismatch began under the term evolutionary trap as early as the 1940s. In his 1942 book, evolutionary biologist Ernst Mayr described evolutionary traps as the phenomenon that occurs when a genetically uniform population suited for a single set of environmental conditions is susceptible to extinction from sudden environment changes. Since then, key scientists such as Warren J. Gross and Edward O. Wilson have studied and identified numerous examples of evolutionary traps.
The first occurrence of the term "evolutionary mismatch" may have been in a paper by Jack E. Riggs published in the Journal of Clinical Epidemiology in 1993. In the years that followed, the term evolutionary mismatch became widely used to describe biological maladaptations in a wide range of disciplines. A coalition of modern scientists and community organizers assembled to found the Evolution Institute in 2008, which in 2011 published a more recent culmination of information on evolutionary mismatch theory in an article by Elisabeth Lloyd, David Sloan Wilson, and Elliott Sober. In 2018, evolutionary psychologists published a popular science book on evolutionary mismatch and its implications for humans.
Mismatch in human evolution
Neolithic Revolution: transitional context
The Neolithic Revolution brought about significant evolutionary changes in humans; namely the transition from a hunter-gatherer lifestyle, in which humans foraged for food, to an agricultural lifestyle. This change occurred approximately 10,000–12,000 years ago. Humans began to domesticate both plants and animals, allowing for the maintenance of constant food resources. This transition quickly and dramatically changed the way that humans interact with the environment, with societies taking up practices of farming and animal husbandry. However, human bodies had evolved to be adapted to their previous foraging lifestyle. The slow pace of evolution in comparison with the very fast pace of human advancement allowed for the persistence of these adaptations in an environment where they are no longer necessary. In some human societies that now function in a vastly different way from the hunter-gatherer lifestyle, these outdated adaptations now lead to the presence of maladaptive, or mismatched, traits.
Obesity and diabetes
Human bodies are predisposed to maintain homeostasis, especially when storing energy as fat. This trait serves as the main basis for the "thrifty gene hypothesis", the idea that "feast-or-famine conditions during human evolutionary development naturally selected for people whose bodies were efficient in their use of food calories". Hunter-gatherers, who lived under environmental stress, benefited from this trait; there was uncertainty about when the next meal would come, and they spent most of their time performing high levels of physical activity. Therefore, those who consumed many calories could store the extra energy as fat and draw upon it in times of hunger.
However, modern humans now live in a world of more sedentary lifestyles and convenience foods. People sit more throughout the day, whether in their cars during rush hour or in cubicles during full-time jobs. Less physical activity in general means fewer calories burned throughout the day. Human diets have also changed considerably over the 10,000 years since the advent of agriculture, containing more processed foods that lack nutritional value and lead people to consume more sodium, sugar, and fat. These high-calorie, nutrient-deficient foods cause people to consume more calories than they burn. Fast food combined with decreased physical activity means that the "thrifty gene" that once benefited human predecessors now works against them, causing their bodies to store more fat and leading to higher levels of obesity in the population.
Obesity is one consequence of these mismatched genes. Known as "metabolic syndrome", this condition is also associated with other health concerns, including insulin resistance, in which the body no longer responds to insulin secretion, so blood glucose levels cannot be lowered, which can lead to type 2 diabetes.
Osteoporosis
Another human disorder that can be explained by mismatch theory is the rise of osteoporosis in modern humans. In advanced societies, many people, especially women, are remarkably susceptible to osteoporosis during aging. Fossil evidence has suggested that this was not always the case, with bones from elderly hunter-gatherer women often showing no evidence of osteoporosis. Evolutionary biologists have posited that the increase in osteoporosis in modern Western populations is likely due to considerably more sedentary lifestyles. Women in hunter-gatherer societies were physically active both from a young age and well into their late-adult lives. This constant physical activity likely led to peak bone mass being considerably higher in hunter-gatherers than in modern-day humans. While the pattern of bone mass degradation during aging is purportedly the same for both hunter-gatherers and modern humans, the higher peak bone mass associated with more physical activity may have allowed hunter-gatherers to avoid osteoporosis during aging.
Hygiene hypothesis
The hygiene hypothesis, a concept initially theorized by immunologists and epidemiologists, has been shown in recent studies to have a strong connection with evolutionary mismatch. The hygiene hypothesis states that the profound increase in allergies, autoimmune diseases, and some other chronic inflammatory diseases is related to the reduced exposure of the immune system to antigens. Such reduced exposure is more common in industrialized countries and especially urban areas, where inflammatory chronic diseases are also more frequently seen. Recent analyses and studies have tied the hygiene hypothesis and evolutionary mismatch together. Some researchers suggest that the overly sterilized urban environment changes or depletes the composition and diversity of the microbiota. Such environmental conditions favor the development of inflammatory chronic diseases because human bodies have been selected to adapt to a pathogen-rich environment over evolutionary history. For example, studies have shown that changes in our symbiont community can lead to disorders of immune homeostasis, which can be used to explain why antibiotic use in early childhood can result in higher asthma risk. Because the change or depletion of the microbiome is often associated with the hygiene hypothesis, the hypothesis is sometimes also called the "biome depletion theory".
Human behavior
Behavioral examples of evolutionary mismatch theory include the abuse of dopaminergic pathways and the reward system. An action or behavior that stimulates the release of dopamine, a neurotransmitter known for generating a sense of pleasure, will likely be repeated since the brain is programmed to continually seek such pleasure. In hunter-gatherer societies, this reward system was beneficial for survival and reproductive success. But now, when there are fewer challenges to survival and reproducing, certain activities in the present environment (gambling, drug use, eating) exploit this system, leading to addictive behaviors.
Anxiety
Anxiety is another example of a modern manifestation of evolutionary mismatch in humans. An immediate-return environment is one in which decisions made in the present produce immediate results. Prehistoric human brains evolved to suit this particular environment, producing reactions such as anxiety to solve short-term problems. For example, fear of a stalking predator causes a human to run away, immediately ensuring the human's safety as the distance from the predator increases. However, humans currently live in a different, delayed-return environment, in which current decisions do not produce immediate results. The advancement of society has reduced the threat of external factors such as predators and lack of food or shelter, so human problems that once revolved around current survival have shifted to how the present will affect the quality of future survival. In summation, traits like anxiety have become outdated as the advancement of society has allowed humans to no longer be under constant threat and instead worry about the future.
Work stress
Examples of evolutionary mismatch also occur in the modern workplace. Unlike our hunter-gatherer ancestors, who lived in small egalitarian societies, modern workers operate in workplaces that are large, complex, and hierarchical. Humans spend significant amounts of time interacting with strangers in conditions that are very different from those of our ancestral past. Hunter-gatherers do not separate work from their private lives, have no bosses to be accountable to, and have no deadlines to adhere to. Our stress system reacts to immediate threats and opportunities. The modern workplace exploits evolved psychological mechanisms that are aimed at immediate survival or longer-term reproduction. These basic instincts misfire in the modern workplace, causing conflicts at work, burnout, job alienation and poor management practices.
Gambling
There are two aspects of gambling that make it an addictive activity: chance and risk. Chance gives gambling its novelty. When humans had to forage and hunt for food, novelty-seeking was advantageous, particularly for their diet. However, with the development of casinos, this trait of pursuing novelty has become disadvantageous. Risk assessment, the other behavioral trait applicable to gambling, was also beneficial to hunter-gatherers in the face of danger, but the types of risks hunter-gatherers had to assess were significantly different and more life-threatening than the risks people face now. The attraction to gambling stems from the attraction to risk- and reward-related activity.
Drug addiction
Herbivores have created selective pressure for plants to possess specific molecules that deter plant consumption, such as nicotine, morphine, and cocaine. Plant-based drugs, however, have reinforcing and rewarding effects on the human neurological system, suggesting a "paradox of drug reward" in humans. Human behavioral evolutionary mismatch explains the contradiction between plant evolution and human drug use. In the last 10,000 years, humans found the dopaminergic system, or reward system, particularly useful in optimizing Darwinian fitness. While drug use has been a common characteristic of past human populations, drug use involving potent substances and diverse intake methods is a relatively contemporary feature of society. Human ancestors lived in an environment that lacked drug use of this nature, so the reward system was primarily used in maximizing survival and reproductive success. In contrast, present-day humans live in a world where the current nature of drugs renders the reward system maladaptive. This class of drugs falsely triggers a fitness benefit in the reward system, leaving people susceptible to drug addiction. The modern-day dopaminergic system presents vulnerabilities to differences in the accessibility and social perception of drugs.
Eating
In the era of foraging for food, hunter-gatherers rarely knew where their next meal would come from. This food scarcity rewarded consumption of high energy meals in order to save excess energy as fat. Now that food is readily available, the neurological system that once helped people recognize the survival advantages of essential eating has now become disadvantageous as it promotes overeating. This has become especially dangerous after the rise of processed foods, as the popularity of foods that have unnaturally high levels of sugar and fat has significantly increased.
Non-human examples
Evolutionary mismatch can occur any time an organism is exposed to an environment that does not resemble the typical environment the organism adapted in. Due to human influences, such as global warming and habitat destruction, the environment is changing very rapidly for many organisms, leading to numerous cases of evolutionary mismatch.
Examples with human influence
Sea turtles and light pollution
Female sea turtles create nests to lay their eggs by digging a pit on the beach, typically between the high tide line and the dune, using their rear flippers. Within the first seven days after hatching, hatchling sea turtles must then make the journey from the nest to the ocean. This trip occurs predominantly at night in order to avoid predators and overheating.
In order to orient themselves towards the ocean, the hatchlings depend on their eyes to turn towards the brightest direction. This is because the open horizon of the ocean, illuminated by celestial light, tends to be much brighter in a natural undeveloped beach than the dunes and vegetation. Studies propose two mechanisms of the eye for this phenomenon. Referred to as the "raster system", the theory is that sea turtles' eyes contain numerous light sensors which take in the overall brightness information of a general area and make a "measurement" of where the light is most intense. If the light sensors detect the most intense light on a hatchling's left side, the sea turtle would turn left. A similar proposal called the complex phototropotaxis system theorizes that the eyes contain light intensity comparators that take in detailed information of the intensity of light from all directions. Sea turtles are able to "know" that they are facing the brightest direction when the light intensity is balanced between both eyes.
This method of finding the ocean is successful in natural beaches, but in developed beaches, the intense artificial lights from buildings, light houses, and even abandoned fires overwhelm the sea turtles and cause them to head towards the artificial light instead of the ocean. Scientists call this misorientation. Sea turtles can also become disoriented and circle around in the same place. Numerous cases show that misoriented hatchling sea turtles either die from dehydration, get consumed by a predator, or even burn to death in an abandoned fire. The direct impact of light pollution on the number of sea turtles has been too difficult to measure. However, this problem is exacerbated because all species of sea turtles are endangered. Other animals, including migratory birds and insects, are also victims to light pollution because they also depend on light intensity at night to properly orient themselves.
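A minimal Python sketch of the comparison logic described above, assuming a simple two-eye brightness comparator; the sensor readings and tolerance are hypothetical values for illustration only.

def turn_direction(left_intensity, right_intensity, tolerance=0.05):
    # Turn toward the brighter side; go straight once brightness is balanced,
    # which on a natural beach means facing the ocean horizon.
    if abs(left_intensity - right_intensity) <= tolerance:
        return "go straight"
    return "turn left" if left_intensity > right_intensity else "turn right"

# Natural beach: the seaward horizon (right) is brighter than the dunes (left).
print(turn_direction(0.2, 0.8))  # turn right, toward the ocean
# Developed beach: artificial lights inland make the landward side brighter.
print(turn_direction(0.9, 0.4))  # turn left, away from the ocean (misorientation)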
Dodo bird and hunting
The Dodo bird lived on the remote island of Mauritius in the absence of predators. There, the Dodo evolved to lose its instinct for fear and its ability to fly. This allowed it to be easily hunted by Dutch sailors who arrived on the island in the late 16th century. The Dutch sailors also brought foreign animals to the island, such as monkeys and pigs, that ate the Dodo's eggs, which was detrimental to the population growth of the slow-breeding bird. Their fearlessness made them easy targets, and their inability to fly gave them no opportunity to evade danger. Thus, they were easily driven to extinction within a century of their discovery.
The Dodo's inability to fly was once beneficial for the bird because it conserved energy. The Dodo conserved more energy relative to birds with the ability to fly, due to its smaller pectoral muscles. Smaller muscle size is linked to a lower rate of maintenance metabolism, which in turn conserves energy. Lacking an instinct for fear was another mechanism through which the Dodo conserved energy, because it never had to expend energy on a stress response. Both mechanisms of energy conservation were once advantageous because they enabled the Dodo to carry out activities with minimal energy expenditure. However, they proved disadvantageous when the island was invaded, rendering the birds defenseless against the new dangers that humans brought.
Peppered moths during the English Industrial Revolution
Before the English Industrial Revolution of the late 18th and early 19th centuries the most common phenotypic color of the peppered moth was white with black speckles. When higher air pollution in urban regions killed the lichens adhering to trees and exposed their darker bark, the light-colored moths stood out more to predators. Natural selection began favoring a previously rare darker variety of the peppered moth referred to as "carbonaria" because the lighter phenotype had become mismatched to its environment.
Carbonaria frequencies rose above 90% in some areas of England until efforts in the late 1900s to reduce air pollution caused a resurgence of epiphytes, including lichens, to again lighten the color of trees. Under these conditions the coloring of the carbonaria reverted from an advantage to a disadvantage and that phenotype became mismatched to its environment.
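A minimal Python sketch of how a morph frequency shifts once the environment changes, using a simple one-locus (haploid-style) selection recursion; the relative fitnesses below are illustrative assumptions, not empirical estimates for the peppered moth.

def next_frequency(p_dark, w_dark, w_light):
    # One generation of selection on the frequency of the dark (carbonaria) morph.
    mean_fitness = p_dark * w_dark + (1 - p_dark) * w_light
    return p_dark * w_dark / mean_fitness

def simulate(p_dark, w_dark, w_light, generations):
    for _ in range(generations):
        p_dark = next_frequency(p_dark, w_dark, w_light)
    return p_dark

# Sooty trees: the light morph is at a disadvantage, so carbonaria spreads.
p = simulate(0.01, 1.0, 0.7, 50)
print(round(p, 3))  # close to 1.0
# Cleaner air: selection reverses and the dark morph declines again.
print(round(simulate(p, 0.7, 1.0, 50), 3))  # close to 0.0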
Giant jewel beetle and beer bottles
Evolutionary mismatch can also be seen among insects. One such example is the case of the giant jewel beetle (Julodimorpha bakewelli). The male jewel beetle has evolved to be attracted to features of the female jewel beetle that allow the male to identify a female as it flies across the desert. These features include size, color, and texture. However, these physical traits are also manifested in some beer bottles. As a result, males often find beer bottles more attractive than female jewel beetles, due to the bottles' large size and attractive coloring. Beer bottles are often discarded by humans in the Australian desert in which the jewel beetle thrives, creating an environment where male jewel beetles prefer to mate with beer bottles instead of females. This situation is extremely disadvantageous because it reduces the reproductive output of the jewel beetle, as fewer beetles are mating. It can be considered an evolutionary mismatch, as a habit that evolved to aid in reproduction has become disadvantageous due to the littering of beer bottles, an anthropogenic cause.
Examples without human influence
Information cascades between birds
Normally, gaining information from watching other organisms allows the observer to make good decisions without spending effort. More specifically, birds often observe the behavior of other organisms to gain valuable information, such as the presence of predators, good breeding sites, and optimal feeding spots. Although this allows the observer to spend less effort gathering information, it can also lead to bad decisions if the information gained from observing is unreliable. In the case of the nutmeg mannikin, the observer can minimize the time spent looking for an optimal feeder and maximize its feeding time by watching where other nutmeg mannikins feed. However, this relies on the assumption that the observed mannikins also had reliable information indicating that the feeding spot was an ideal one. The behavior can become maladaptive when prioritizing information gained from watching others leads to information cascades, in which birds follow the rest of the crowd even though prior experience may have suggested that the decision of the crowd is a poor one. For instance, if a nutmeg mannikin sees enough other mannikins feeding at a feeder, it has been shown to choose that feeder even if its personal experience indicates that the feeder is a poor one.
House finches and the introduction of the MG disease
Evolutionary mismatch occurs in house finches when they are exposed to infectious individuals. Male house finches tend to feed in close proximity to other finches that are sick or diseased, because sick individuals are less competitive than usual, which makes the healthy male more likely to win an aggressive interaction if one occurs. To make it less likely to lose a social confrontation, healthy finches are inclined to forage near individuals that are lethargic or listless due to disease. However, this disposition has created an evolutionary trap for the finches since the introduction of the MG disease (Mycoplasma gallisepticum) in 1994. Because this disease is infectious, healthy finches risk contracting it if they are in the vicinity of individuals that have already developed the disease. The disease was introduced too recently for the finches to have adapted to avoid approaching sick individuals, which ultimately results in the mismatch between their behavior and the changing environment.
Exploitation of earthworm's reaction to vibrations
Worm charming is a practice used by people to attract earthworms out of the ground, typically by driving in a wooden stake to vibrate the soil. This activity is commonly performed to collect fishing bait and as a competitive sport. Worms that sense the vibrations rise to the surface. Research shows that humans are actually exploiting a trait that worms evolved to avoid hungry burrowing moles, which prey on the worms. This type of evolutionary trap, in which an originally beneficial trait is exploited in order to catch prey, was termed the "rare enemy effect" by Richard Dawkins, an English evolutionary biologist. This trait of worms has been exploited not only by humans but also by other animals: herring gulls and wood turtles have been observed to stamp on the ground to drive the worms up to the surface and consume them.
See also
Evolution
Evolutionary biology
Evolutionary trap
Fisher's geometric model
Human impact on the environment
Natural environment
Person–environment fit
Rate of evolution
Evolutionary anachronism
References
Evolutionary biology
Theories about religion
Sociological, psychological, and anthropological theories about religion generally attempt to explain the origin and function of religion. These theories define what they present as universal characteristics of religious belief and practice.
History
From presocratic times, ancient authors advanced prescientific theories about religion. Herodotus (484–425 BCE) saw the gods of Greece as the same as the gods of Egypt. Euhemerus (about 330–264 BCE) regarded gods as excellent historical persons whom admirers eventually came to worship.
Scientific theories, inferred and tested by the comparative method, emerged after data from tribes and peoples all over the world became available in the 18th and 19th centuries. Max Müller (1823–1900) has the reputation of having founded the scientific study of religion; he advocated a comparative method that developed into comparative religion.
Subsequently, Clifford Geertz (1926–2006) and others questioned the validity of abstracting a general theory of all religions.
Classification
Theories of religion can be classified into:
Substantive (or essentialist) theories that focus on the contents of religions and the meaning the contents have for people. This approach asserts that people have faith because beliefs make sense insofar as they hold value and are comprehensible. The theories by Tylor and Frazer (focusing on the explanatory value of religion for its adherents), by Rudolf Otto (focusing on the importance of religious experience, more specifically experiences that are both fascinating and terrifying) and by Mircea Eliade (focusing on the longing for otherworldly perfection, the quest for meaning, and the search for patterns in mythology in various religions) offer examples of substantive theories.
Functional theories that focus on the social or psychological functions that religion has for a group or a person. In simple terms, the functional approach sees religion as "performing certain functions for society". Theories by Karl Marx (role of religion in capitalist and pre-capitalist societies), Sigmund Freud (psychological origin of religious beliefs), Émile Durkheim (social function of religions), and the theory by Stark and Bainbridge exemplify functional theories. This approach tends to be static, with the exception of Marx' theory, unlike, e.g., Weber's approach, which treats the interaction and dynamic processes between religions and the rest of their societies.
Social relational theories of religion that focus on the nature or social form of the beliefs and practices. Here, Charles Taylor's book The Secular Age is exemplary, as is the work of Clifford Geertz. The approach is expressed in Paul James's argument that religion is a "relatively bounded system of beliefs, symbols and practices that addresses the nature of existence through communion with others and Otherness, lived as both taking in and spiritually transcending socially grounded ontologies of time, space, embodiment and knowing". This avoids the dichotomy between the immanent and transcendental.
Other dichotomies according to which theories or descriptions of religions can be classified include:
"insider" versus "outsider" perspectives (roughly corresponding to emic versus etic descriptions)
individualist versus social views
evolutionist versus relativist views
Methodologies
Early essentialists, such as Tylor and Frazer, looked for similar beliefs and practices in all societies, especially the more primitive ones, more or less regardless of time and place. They relied heavily on reports made by missionaries, discoverers, and colonial civil servants. These were all investigators who had a religious background themselves; thus they looked at religion from the inside. Typically they did not practice investigative fieldwork, but used the incidental reports of others. This method left them open to criticism for lack of universality, which many freely admitted. The theories could be updated, however, by considering new reports, which Robert Ranulph Marett (1866–1943) did for Tylor's theory of the evolution of religion.
Field workers deliberately sent out by universities and other institutions to collect specific cultural data made available a much greater database than random reports. For example, the anthropologist E. E. Evans-Pritchard (1902–1973) preferred detailed ethnographical study of tribal religion as more reliable. He criticised the work of his predecessors, Müller, Tylor, and Durkheim, as untestable speculation. He called them "armchair anthropologists".
A second methodology, functionalism, seeks explanations of religion that are outside of religion; i.e., the theorists are generally (but not necessarily) atheists or agnostics themselves. As did the essentialists, the functionalists proceeded from reports to investigative studies. Their fundamental assumptions, however, are quite different; notably, they apply methodological naturalism. When explaining religion they reject divine or supernatural explanations for the status or origins of religions because they are not scientifically testable. In fact, theorists such as Marett (an Anglican) excluded scientific results altogether, defining religion as the domain of the unpredictable and unexplainable; that is, comparative religion is the rational (and scientific) study of the irrational. The dichotomy between the two classifications is not bridgeable, even though they have the same methods, because each excludes the data of the other.
The functionalists and some of the later essentialists (among others E. E. Evans-Pritchard) have criticized the substantive view for neglecting the social aspects of religion. Such critics go so far as to brand Tylor's and Frazer's views on the origin of religion as unverifiable speculation. The view of monotheism as more evolved than polytheism represents a mere preconception, they assert, and there is evidence that monotheism is more prevalent in hunter societies than in agricultural societies. The view of a uniform progression in folkways is criticized as unverifiable, as the writer Andrew Lang (1844–1912) and E. E. Evans-Pritchard asserted. The latter criticism presumes that the evolutionary views of the early cultural anthropologists envisaged a uniform cultural evolution. Another criticism holds that Tylor and Frazer were individualists and therefore unscientific, though others, among them the anthropologist Robin Horton, defend that approach as worthwhile. The dichotomy between the two fundamental presumptions, and the question of what data can be considered valid, continues.
Substantive theories
Evolutionary theories
Evolutionary theories view religion as either an adaptation or a byproduct. Adaptationist theories view religion as being of adaptive value to the survival of Pleistocene humans. Byproduct theories view religion as a spandrel.
Edward Burnett Tylor
The anthropologist Edward Burnett Tylor (1832–1917) defined religion as belief in spiritual beings and stated that this belief originated as explanations of natural phenomena. Belief in spirits grew out of attempts to explain life and death. Primitive people used human dreams, in which spirits seemed to appear, as an indication that the human mind could exist independent of a body. They used this by extension to explain life and death, and belief in the afterlife. Myths and deities to explain natural phenomena originated by analogy and an extension of these explanations. His theory assumed that the psyches of all peoples of all times are more or less the same and that explanations in cultures and religions tend to grow more sophisticated over time, via monotheist religions such as Christianity and eventually science. Tylor saw practices and beliefs in modern societies that were similar to those of primitive societies as survivals, but he did not explain why they survived.
James George Frazer
James George Frazer (1854–1941) followed Tylor's theories to a great extent in his book The Golden Bough, but he distinguished between magic and religion. Magic is used to influence the natural world in the primitive man's struggle for survival. He asserted that magic relied on an uncritical belief of primitive people in contact and imitation. For example, precipitation may be invoked by the primitive man by sprinkling water on the ground. He asserted that, according to them, magic worked through laws. In contrast, religion is faith that the natural world is ruled not by laws but by one or more deities with personal characteristics with whom one can plead.
Rudolf Otto
The theologian Rudolf Otto (1869–1937) focused on religious experience, more specifically on moments that he called numinous, which means "Wholly Other". He described it as mysterium tremendum (terrifying mystery) and mysterium fascinans (awe-inspiring, fascinating mystery). He saw religion as emerging from these experiences.
He asserted that these experiences arise from a special, non-rational faculty of the human mind, largely unrelated to other faculties, so religion cannot be reduced to culture or society. Some of his views, among others that the experience of the numinous was caused by a transcendental reality, are untestable and hence unscientific.
His ideas strongly influenced phenomenologists and Mircea Eliade.
Mircea Eliade
Mircea Eliade's (1907–1986) approach grew out of the phenomenology of religion. Like Otto, he saw religion as something special and autonomous, which cannot be reduced to the social, economic or psychological alone. Like Durkheim, he saw the sacred as central to religion, but, differing from Durkheim, he viewed the sacred as often dealing with the supernatural, not with the clan or society. The daily life of an ordinary person is connected to the sacred by the appearance of the sacred, called hierophany. Theophany (an appearance of a god) is a special case of it. In The Myth of the Eternal Return Eliade wrote that archaic men wish to participate in the sacred, and that they long to return to a lost paradise outside historic time in order to escape meaninglessness. The primitive man could not endure that his struggle to survive had no meaning. According to Eliade, man had a nostalgia (longing) for an otherworldly perfection. Archaic man wished to escape the terror of time and saw time as cyclic. Historical religions like Christianity and Judaism revolted against this older concept of cyclic time. They provided meaning and contact with the sacred in history through the god of Israel.
Eliade sought and found patterns in myth in various cultures, e.g. sky gods such as Zeus.
Eliade's methodology was to study the comparative religion of various cultures and societies more or less regardless of other aspects of those societies, often relying on second-hand reports. He also used some personal knowledge of other societies and cultures for his theories, among others his knowledge of Hindu folk religion.
He has been criticized for vagueness in defining his key concepts. Like Frazer and Tylor he has also been accused of out-of-context comparisons of religious beliefs of very different societies and cultures.
He has also been accused of having a pro-religious bias (Christian and Hindu), though this bias does not seem essential for his theory.
E. E. Evans-Pritchard
The anthropologist Edward Evan Evans-Pritchard (1902–1973) did extensive ethnographic studies among the Azande and Nuer peoples who were considered "primitive" by society and earlier scholars. Evans-Pritchard saw these people as different, but not primitive.
Unlike the previous scholars, Evans-Pritchard did not propose a grand universal theory. Instead he carried out extensive long-term fieldwork among "primitive" peoples, among others the Azande, studying their culture and religion in depth rather than through the passing contact characteristic of Eliade.
He argued that the religion of the Azande (witchcraft and oracles) cannot be understood without its social context and social function. Witchcraft and oracles played a great role in solving disputes among the Azande. In this respect he agreed with Durkheim, though he acknowledged that Frazer and Tylor were right that their religion also had an intellectual, explanatory aspect. The Azande's faith in witchcraft and oracles was quite logical and consistent once some fundamental tenets were accepted. Loss of faith in the fundamental tenets could not be endured because of their social importance, and hence the Azande had an elaborate system of explanations (or excuses) against disproving evidence. Besides, no alternative system of terms or school of thought existed.
He was heavily critical of earlier theorists of primitive religion, with the exception of Lucien Lévy-Bruhl, asserting that they made statements about primitive people without having enough inside knowledge to make more than a guess. In spite of his praise of Lévy-Bruhl's works, Evans-Pritchard disagreed with Lévy-Bruhl's claim that a member of a "primitive" tribe saying "I am the moon" is prelogical, arguing instead that this statement makes perfect sense within their culture if understood metaphorically.
Apart from the Azande, Evans-Pritchard also studied the neighbouring, but very different, Nuer people. The Nuer had an abstract monotheistic faith, somewhat similar to Christianity and Judaism, though it included lesser spirits. They also had totemism, but this was a minor aspect of their religion, and hence a corrective to Durkheim's generalizations should be made. Evans-Pritchard did not propose a theory of religions, but only a theory of the Nuer religion.
Clifford Geertz
The anthropologist Clifford Geertz (1926–2006) made several studies in Javanese villages. He avoided the subjective and vague concept of group attitude as used by Ruth Benedict by using the analysis of society as proposed by Talcott Parsons who in turn had adapted it from Max Weber. Parsons' adaptation distinguished all human groups on three levels i.e. 1. an individual level that is controlled by 2. a social system that is in turn controlled by 3. a cultural system. Geertz followed Weber when he wrote that "man is an animal suspended in webs of significance he himself has spun and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning". Geertz held the view that mere explanations to describe religions and cultures are not sufficient: interpretations are needed too. He advocated what he called thick descriptions to interpret symbols by observing them in use, and for this work, he was known as a founder of symbolic anthropology.
Geertz saw religion as one of the cultural systems of a society. He defined religion as:
(1) a system of symbols
(2) which acts to establish powerful, pervasive and long-lasting moods and motivations in men
(3) by formulating conceptions of a general order of existence and
(4) clothing these conceptions with such an aura of factuality that
(5) the moods and motivations seem uniquely realistic.
With symbols Geertz meant a carrier that embodies a conception, because he saw religion and culture as systems of communication.
This definition emphasizes the mutual reinforcement between world view and ethos.
Though he used more or less the same methodology as Evans-Pritchard, he did not share Evans-Pritchard's hope that a theory of religion could ever be found. Geertz's proposed methodology was not the scientific method of the natural sciences, but the method of historians studying history.
Functional theories
Karl Marx
The social philosopher Karl Marx (1818–1883) held a materialist worldview.
According to Marx, the dynamics of society were determined by the relations of production, that is, the relations that its members needed to enter into to produce their means of survival.
Developing the ideas of Ludwig Feuerbach, he saw religion as a product of alienation that served to relieve people's immediate suffering, and as an ideology that masked the real nature of social relations.
He deemed it a contingent part of human culture that would disappear after the abolition of class society.
These claims were limited, however, to his analysis of the historical relationship between European cultures, political institutions, and their Christian religious traditions.
Marxist views strongly influenced individuals' comprehension and conclusions about society, among others the anthropological school of cultural materialism.
Marx' explanation of all religions, always, in all forms, and everywhere has never been taken seriously by many experts in the field, though a substantial fraction accept that Marx' views may explain some aspects of religions.
Some recent work has suggested that, while the standard account of Marx's analysis of religion is true, it is also only one side of a dialectical account, which takes seriously the disruptive as well as the pacifying moments of religion.
Sigmund Freud
Sigmund Freud (1856–1939) saw religion as an illusion, a belief that people very much wanted to be true. Unlike Tylor and Frazer, Freud attempted to explain why religion persists in spite of the lack of evidence for its tenets. Freud asserted that religion is a largely unconscious neurotic response to repression. By repression Freud meant that civilized society demands that we not fulfill all our desires immediately, but that they have to be repressed. Rational arguments will not change the neurotic response of a person holding a religious conviction. This is in contrast to Tylor and Frazer, who saw religion as a rational and conscious, though primitive and mistaken, attempt to explain the natural world.
In his 1913 book Totem and Taboo he developed a speculative account of how all monotheist religions originated and developed. In the book he asserted that monotheistic religions grew out of the murder of a father by his sons within a clan. This incident was subconsciously remembered in human societies.
In Moses and Monotheism, Freud proposed that Moses had been a priest of Akhenaten who fled Egypt after the pharaoh's death and perpetuated monotheism through a different religion.
Freud's view on religion was embedded in his larger theory of psychoanalysis, which has been criticized as unscientific. Although Freud's attempt to explain the historical origins of religions has not been accepted, his generalized view that all religions originate from unfulfilled psychological needs is still seen as offering a credible explanation in some cases.
Émile Durkheim
Émile Durkheim (1858–1917) saw the concept of the sacred as the defining characteristic of religion, not faith in the supernatural. He saw religion as a reflection of the concern for society. He based his view on then-recent research regarding totemism among the Australian aboriginals. With totemism he meant that each of the many clans had a different object, plant, or animal that they held sacred and that symbolized the clan. Durkheim saw totemism as the original and simplest form of religion. According to Durkheim, the analysis of this simple form of religion could provide the building blocks for more complex religions. He asserted that moralism cannot be separated from religion. The sacred, i.e. religion, reinforces group interests that very often clash with individual interests. Durkheim held the view that the function of religion is group cohesion, often performed by collectively attended rituals. He asserted that these group meetings provided a special kind of energy, which he called effervescence, that made group members lose their individuality and feel united with the gods and thus with the group. Differing from Tylor and Frazer, he saw magic not as religious, but as an individual instrument to achieve something.
Durkheim's proposed method for progress and refinement is first to carefully study religion in its simplest form in one contemporary society, then to do the same in another society, and to compare religions only between societies that are similar.
The empirical basis for Durkheim's view has been severely criticized since more detailed studies of the Australian aboriginals surfaced. More specifically, the definition of religion as dealing with the sacred only, regardless of the supernatural, is not supported by studies of these aboriginals. On the other hand, the view that religion has at the very least a social aspect, introduced by Durkheim in a generalized and very strong form, has become influential and uncontested.
Durkheim's approach gave rise to the functionalist school in sociology and anthropology. Functionalism is a sociological paradigm that originally attempted to explain social institutions as collective means to fill individual biological needs, focusing on the ways in which social institutions fill social needs, especially social stability. Thus, because Durkheim viewed society as an "organismic analogy of the body, wherein all the parts work together to maintain the equilibrium of the whole", religion was understood to be the glue that held society together.
Bronisław Malinowski
The anthropologist Bronisław Malinowski (1884–1942) was strongly influenced by the functionalist school and argued that religion originated from coping with death. He saw science as practical knowledge that every society needs abundantly to survive and magic as related to this practical knowledge, but generally dealing with phenomena that humans cannot control.
Max Weber
Max Weber (1864–1920) thought that the truth claims of religious movements were irrelevant for the scientific study of the movements. He portrayed each religion as rational and consistent in its respective society.
Weber acknowledged that religion had a strong social component, but diverged from Durkheim by arguing, for example in his book The Protestant Ethic and the Spirit of Capitalism, that religion can be a force of change in society. In the book Weber wrote that modern capitalism spread quickly partly due to the Protestant ethic of worldly asceticism. Weber's main focus was not on developing a theory of religion but on the interaction between society and religion, while introducing concepts that are still widely used in the sociology of religion. These concepts include:
Church-sect typology: Weber distinguished between sects and churches by stating that membership of a sect is a personal choice whereas church membership is determined by birth. The typology was later developed more extensively by his friend Ernst Troeltsch and others. According to the typology, churches, ecclesia, denominations, and sects form a continuum with decreasing influence on society. Sects are breakaway protest groups and tend to be in tension with society.
Ideal type: a hypothetical "pure" or "clear" form, used in typologies.
Charismatic authority: Weber saw charisma as a volatile form of authority that depends on the acceptance of a person's unique qualities by that person's followers. Charisma can be a revolutionary force, and the authority can either be routinized (change into other forms of authority) or disappear upon the death of the charismatic person.
Somewhat differing from Marx, Weber dealt with status groups, not with class. In status groups the primary motivation is prestige and social cohesion. Status groups have differing levels of access to power and prestige, and indirectly to economic resources. In his 1920 treatment of religion in China he saw Confucianism as helping a certain status group, i.e. the educated elite, to maintain access to prestige and power. He asserted that Confucianism's opposition to both extravagance and thrift made it unlikely that capitalism could have originated in China.
He used the concept of Verstehen (German for "understanding") to describe his method of interpretation of the intention and context of human action.
Rational choice theory
The rational choice theory has been applied to religions, among others by the sociologists Rodney Stark (1934–2022) and William Sims Bainbridge (born 1940). They see religions as systems of "compensators", and view human beings as "rational actors, making choices that she or he thinks best, calculating costs and benefits". Compensators are a body of language and practices that compensate for some physical lack or frustrated goal. They can be divided into specific compensators (compensators for the failure to achieve specific goals), and general compensators (compensators for failure to achieve any goal). They define religion as a system of compensation that relies on the supernatural. The main reasoning behind this theory is that the compensation is what controls the choice, or in other words the choices which the "rational actors" make are "rational in the sense that they are centered on the satisfaction of wants".
It has been observed that social or political movements that fail to achieve their goals will often transform into religions. As it becomes clear that the goals of the movement will not be achieved by natural means (at least within their lifetimes), members of the movement will look to the supernatural to achieve what cannot be achieved naturally. The new religious beliefs are compensators for the failure to achieve the original goals. Examples of this include the counterculture movement in America: the early counterculture movement was intent on changing society and removing its injustice and boredom; but as members of the movement proved unable to achieve these goals they turned to Eastern and new religions as compensators.
Most religions start out their lives as cults or sects, i.e. groups in high tension with the surrounding society, containing different views and beliefs contrary to the societal norm. Over time, they tend to either die out, or become more established, mainstream and in less tension with society. Cults are new groups with a novel theology, while sects are attempts to return mainstream religions to (what the sect views as) their original purity. Mainstream established groups are called denominations. The comments below about cult formation apply equally well to sect formation.
There are four models of cult formation: the Psychopathological Model, the Entrepreneurial Model, the Social Model and the Normal Revelations model.
Psychopathological model: religions are founded during a period of severe stress in the life of the founder. The founder suffers from psychological problems, which they resolve through the founding of the religion. (The development of the religion is for them a form of self-therapy, or self-medication.)
Entrepreneurial model: founders of religions act like entrepreneurs, developing new products (religions) to sell to consumers (to convert people to). According to this model, most founders of new religions already have experience in several religious groups before they begin their own. They take ideas from the pre-existing religions, and try to improve on them to make them more popular.
Social model: religions are founded by means of social implosions. Members of the religious group spend less and less time with people outside the group, and more and more time with each other within it. The level of affection and emotional bonding between members of a group increases, and their emotional bonds to members outside the group diminish. According to the social model, when a social implosion occurs, the group will naturally develop a new theology and rituals to accompany it.
Normal revelations: religions are founded when the founder interprets ordinary natural phenomena as supernatural; for instance, ascribing his or her own creativity in inventing the religion to that of the deity.
Some religions are better described by one model than another, though all apply to differing degrees to all religions.
Once a cult or sect has been founded, the next problem for the founder is to convert new members to it. Prime candidates for religious conversion are those with an openness to religion, but who do not belong or fit well in any existing religious group. Those with no religion or no interest in religion are difficult to convert, especially since the cult and sect beliefs are so extreme by the standards of the surrounding society. But those who are already happy members of a religious group are difficult to convert as well, since they have strong social links to their preexisting religion and are unlikely to want to sever them in order to join a new one. The best candidates for religious conversion are those who are members of or have been associated with religious groups (thereby showing an interest or openness to religion), yet exist on the fringe of these groups, without strong social ties to prevent them from joining a new group.
Potential converts vary in their level of social connection. New religions best spread through pre-existing friendship networks. Converts who are marginal with few friends are easy to convert, but having few friends to convert they cannot add much to the further growth of the organization. Converts with a large social network are harder to convert, since they tend to have more invested in mainstream society; but once converted they yield many new followers through their friendship network.
Cults initially can have quite high growth rates; but as the social networks that initially feed them are exhausted, their growth rate falls quickly. On the other hand, the rate of growth is exponential (ignoring the limited supply of potential converts): the more converts you have, the more missionaries you can have out looking for new converts. But nonetheless it can take a very long time for religions to grow to a large size by natural growth. This often leads to cult leaders giving up after several decades, and withdrawing the cult from the world.
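This growth dynamic can be sketched with a toy model. The numbers below (starting membership, pool size, recruitment rate) are purely illustrative assumptions, not figures from Stark and Bainbridge: growth is roughly exponential while the pool of potential converts reachable through friendship networks is still large, then stalls as that pool is exhausted.

```python
# Illustrative sketch only: all parameters are assumptions, not taken from the theory's authors.

def simulate(members=10.0, pool=100_000.0, rate=0.3, years=80):
    """Yearly membership when recruitment runs through members' friendship ties.

    rate -- average converts per member per year while the pool is untouched
    pool -- total potential converts reachable through social networks
    """
    history = [members]
    remaining = pool
    for _ in range(years):
        # each member recruits fewer people as the pool of potential converts shrinks
        recruits = min(rate * members * (remaining / pool), remaining)
        members += recruits
        remaining -= recruits
        history.append(members)
    return history

growth = simulate()
print([round(x) for x in growth[::10]])  # roughly exponential early on, flat once the pool empties
```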
It is difficult for cults and sects to maintain their initial enthusiasm for more than about a generation. As children are born into the cult or sect, members begin to demand a more stable life. When this happens, cults tend to lose or de-emphasise many of their more radical beliefs, and become more open to the surrounding society; they then become denominations.
The theory of religious economy sees different religious organizations competing for followers in a religious economy, much like the way businesses compete for consumers in a commercial economy. Theorists assert that a true religious economy is the result of religious pluralism, giving the population a wider variety of choices in religion. According to the theory, the more religions there are, the more likely the population is to be religious, thereby contradicting the secularization thesis.
See also
Notes
References
Further reading
External links
Anthropology of religion
Sociology of religion
Religious studies
Extremism
Extremism is "the quality or state of being extreme" or "the advocacy of extreme measures or views". The term is primarily used in a political or religious sense to refer to an ideology that is considered (by the speaker or by some implied shared social consensus) to be far outside the mainstream attitudes of society. It can also be used in an economic context. The term may be used pejoratively by opposing groups, but is also used in academic and journalistic circles in a purely descriptive and non-condemning sense.
Extremists' views are typically contrasted with those of moderates. In Western countries, for example, in contemporary discourse on Islam or on Islamic political movements, the distinction between extremist and moderate Muslims is commonly stressed. Political agendas perceived as extremist often include those of far-left or far-right politics, as well as radicalism, reactionism, chauvinism, fundamentalism, and fanaticism.
Definitions
Peter T. Coleman and Andrea Bartoli offer the following observation on definitions: Extremism is a complex phenomenon, although its complexity is often hard to see. Most simply, it can be defined as activities (beliefs, attitudes, feelings, actions, strategies) of a character far removed from the ordinary. In conflict settings it manifests as a severe form of conflict engagement. However, the labeling of activities, people, and groups as "extremist", and the defining of what is "ordinary" in any setting is always a subjective and political matter. Thus, we suggest that any discussion of extremism be mindful of the following: Typically, the same extremist act will be viewed by some as just and moral (such as pro-social "freedom fighting"), and by others as unjust and immoral (antisocial "terrorism") depending on the observer's values, politics, moral scope, and the nature of their relationship with the actor. In addition, one's sense of the moral or immoral nature of a given act of extremism (such as Nelson Mandela's use of guerilla war tactics against the South African Government) may change as conditions (leadership, world opinion, crises, historical accounts, etc.) change. Thus, the current and historical context of extremist acts shapes our view of them. Power differences also matter when defining extremism. When in conflict, the activities of members of low power groups tend to be viewed as more extreme than similar activities committed by members of groups advocating the status quo.
In addition, extreme acts are more likely to be employed by marginalized people and groups who view more normative forms of conflict engagement as blocked for them or biased. However, dominant groups also commonly employ extreme activities (such as governmental sanctioning of violent paramilitary groups or the attack in Waco by the FBI in the U.S.).
Extremist acts often employ violent means, although extremist groups will differ in their preference for violent extremism vs. nonviolent extremism, in the level of violence they employ, and in the preferred targets of their violence (from infrastructure to military personnel to civilians to children). Again, low power groups are more likely to employ direct, episodic forms of violence (such as suicide bombings), whereas dominant groups tend to be associated with more structural or institutionalized forms (like the covert use of torture or the informal sanctioning of police brutality).
In Germany, extremism is used explicitly to differentiate between democratic and non-democratic intentions. The German Ministry of Home Affairs defines extremism as an intention that rejects the democratic constitutional state and fundamental values, its norms and its laws.
Although extremist individuals and groups are often viewed as cohesive and consistently evil, it is important to recognize that they may be conflicted or ambivalent psychologically as individuals, or contain difference and conflict within their groups. For instance, individual members of Hamas may differ considerably in their willingness to negotiate their differences with the Palestinian Authority and, ultimately, with certain factions in Israel. Ultimately, the core problem that extremism presents in situations of protracted conflict is less the severity of the activities (although violence, trauma, and escalation are obvious concerns) than the closed, fixed, and intolerant nature of extremist attitudes, and their subsequent imperviousness to change.
Difference from radicalism
Astrid Bötticher notes several differences between radicalism and extremism, among them in goals (idealistic vs. restorative, emancipatory vs. anti-democratic), morals (particular vs. universal), approach towards diversity (acceptance vs. disdain), and use of violence (pragmatic and selective vs. legitimate and acceptable).
Theories of extremism
Eric Hoffer and Arthur Schlesinger Jr. were two political writers during the mid-20th century who gave what they purported to be accounts of "political extremism". Hoffer wrote The True Believer and The Passionate State of Mind about the psychology and sociology of those who join "fanatical" mass movements. Schlesinger wrote The Vital Center, championing a supposed "center" of politics within which "mainstream" political discourse takes place, and underscoring the alleged need for societies to draw definite lines regarding what falls outside of this acceptability.
Seymour Martin Lipset argued that besides the extremism of the left and right there is also an extremism of the center, and that it actually formed the base of fascism.
Laird Wilcox identifies 21 alleged traits of a "political extremist", ranging from "a tendency to character assassination" and hateful behavior like "name calling and labelling", to general character traits like "a tendency to view opponents and critics as essentially evil", "a tendency to substitute intimidation for argument" or "groupthink".
"Extremism" is not a standalone characteristic. The attitude or behavior of an "extremist" may be represented as part of a spectrum, which ranges from mild interest through "obsession" to "fanaticism" and "extremism". The alleged similarity between the "extreme left" and "extreme right", or perhaps between opposing religious zealots, may mean only that all these are "unacceptable" from the standpoint of the mainstream or majority.
Economist Ronald Wintrobe argues that many extremist movements, even when they have completely different ideologies, share a common set of characteristics. As an example, he lists a set of common characteristics shared by "Jewish fundamentalists" and "the extremists of Hamas".
Psychological
Among the explanations for extremism is one that views it as a plague. Arno Gruen said, "The lack of identity associated with extremists is the result of self-destructive self-hatred that leads to feelings of revenge toward life itself, and a compulsion to kill one's own humanness." In this context, extremism is seen as not a tactic, nor an ideology, but as a pathological illness which feeds on the destruction of life. Dr. Kathleen Taylor believes religious fundamentalism is a mental illness and is "curable." There are distinct psychological features of extremists that contribute to conflict among societal groups; Jan-Willem van Prooijen identified them as psychological distress, cognitive simplicity, overconfidence and intolerance.
Another view is that extremism is an emotional outlet for severe feelings stemming from "persistent experiences of oppression, insecurity, humiliation, resentment, loss, and rage" which are presumed to "lead individuals and groups to adopt conflict engagement strategies which "fit" or feel consistent with these experiences".
Extremism is seen by other researchers as a "rational strategy in a game over power", as described in the works of Eli Berman.
In a 2018 study at University College London, scientists demonstrated that people with extreme political views (both extreme right and extreme left) had significantly worse metacognition, the ability of a person to recognize they are wrong and modify their views when presented with contrary evidence, and thus tended to form opinions that supported only their own idea of right and wrong. People at either political extreme were shown to have much greater (but misplaced) confidence in their beliefs and to resist change.
A 2019 study found that political extremism on both the left and the right tends to share four psychological features: psychological distress stimulates the adoption of an extreme ideological outlook; extreme ideologies tend to have relatively simplistic, black-and-white perceptions of the social world; this mental simplicity causes overconfidence in judgements; and political extremists are less tolerant of different groups and opinions than moderates.
Criticism
After being accused of extremism, Martin Luther King Jr. criticized the mainstream usage of the term in his Letter from Birmingham Jail,
"But though I was initially disappointed at being categorized as an extremist, as I continued to think about the matter I gradually gained a measure of satisfaction from the label. Was not Jesus an extremist for love…Was not Amos an extremist for justice…Was not Martin Luther an extremist…So the question is not whether we will be extremists, but what kind of extremists we will be. Will we be extremists for hate or for love? Will we be extremists for the preservation of injustice or for the extension of justice?"
In his acceptance speech at the 1964 Republican National Convention, Barry Goldwater said, "I would remind you that extremism in the defense of liberty is no vice. And let me remind you also that moderation in the pursuit of justice is no virtue."
Robert F. Kennedy said "What is objectionable, what is dangerous about extremists is not that they are extreme but that they are intolerant. The evil is not what they say about their cause, but what they say about their opponents."
In Russia, the laws prohibiting extremist content are used to suppress the freedom of speech through very broad and flexible interpretation. Published material classified as "extremist", and thus prosecuted, included protests against the court rulings in the Bolotnaya Square case ("calling for illegal action"), criticism of overspending by a local governor ("insult of the authorities"), publishing a poem in support of Ukraine ("inciting hatred"), an open letter against a war in Chechnya by the writer Polina Zherebcova, the Jehovah's Witnesses movement in Russia, Raphael Lemkin, and articles by the initiator of the Genocide Convention of 1948.
Tushar Gandhi, Mahatma Gandhi's great-grandson, says India's Hindu nationalism is a threat to Gandhi's legacy and that the ideology of hate, division and polarization that led to Gandhi's assassination by a religious zealot in 1948 has captured India.
Other terms
Since the 1990s, in United States politics, the term Sister Souljah moment has been used to describe a politician's public repudiation of an allegedly extremist person or group, statement, or position which might otherwise be associated with his own party.
The term "subversive" was often used interchangeably, in the United States at least, with "extremist" during the Cold War period, although the two words are not synonymous.
See also
Purge
Ethnic cleansing
Islamic extremism
Kahanism
Christian terrorism
Hindutva
Sikh extremism
Religious persecution
Political extremism in Japan
Fundamentalism
Hate group
Cumulative extremism
Domestic Extremism Lexicon
False consensus effect
Horseshoe theory
Terrorism
Vigilantism
Violent extremism
Paradox of tolerance
References
Citations
Cited publications
George, John and Laird Wilcox. Nazis, Communists, Klansmen, and Others on the Fringe: Political Extremism in America. Prometheus Books, 1992.
Himmelstein, Jerome L. All But Sleeping with the Enemy: Studying the Radical Right Up Close. ASA, San Francisco: 1988
Hoffer, Eric. The True Believer: Thoughts on the Nature of Mass Movements. Various editions, first published 1951.
Schlesinger, Arthur Jr. The Vital Center: The Politics of Freedom. Various editions, first published 1949.
Wilcox, Laird. "What Is Political Extremism", retrieved from The Voluntaryist newsletter #27, 1987
Further reading
Nawaz, Maajid. Radical: My Journey out of Islamist Extremism (Lyons Press, 2013)
van Ginkel, Bibi. Engaging Civil Society in Countering Violent Extremism (ICCT – The Hague, 2012)
External links
The M and S Collection at the Library of Congress contains materials on Extremist Movements.
Political ideologies
Political spectrum
Political theories
Anti-intellectualism
Barriers to critical thinking
Political violence
Historical climatology
Historical climatology is the study of historical changes in climate and their effect on civilization from the emergence of hominins to the present day. It differs from paleoclimatology, which encompasses climate change over the entire history of Earth. The historical impacts of climate change can improve human life and cause societies to flourish, or can be instrumental in a civilization's societal collapse. The study seeks to define periods in human history where temperature or precipitation varied from what is observed in the present day.
The primary sources include written records such as sagas, chronicles, maps and local history literature as well as pictorial representations such as paintings, drawings and even rock art. The archaeological record is equally important in establishing evidence of settlement, water and land usage.
Techniques
In literate societies, historians may find written evidence of climatic variations over hundreds or thousands of years, such as phenological records of natural processes, for example viticultural records of grape harvest dates. In preliterate or non-literate societies, researchers must rely on other techniques to find evidence of historical climate differences.
Past population levels and habitable ranges of humans or plants and animals may be used to find evidence of past differences in climate for the region. Palynology, the study of pollens, can be used not only to show the range of plants and to reconstruct a possible ecology, but also to estimate the amount of precipitation in a given time period, based on the abundance of pollen in that layer of sediment or ice. The distribution of diatoms in sediments can also be used to examine changes in salinity and climate over geologic eras.
Role in human evolution
Changes in East African climate have been associated with the evolution of hominini. Researchers have proposed that the regional environment transitioned from humid jungle to more arid grasslands due to tectonic uplift and changes in broader patterns of ocean and atmospheric circulation. This environmental change is believed to have forced hominins to evolve for life in a savannah-type environment. Some data suggest that this environmental change caused the development of modern hominin features; however, other data show that morphological changes in the earliest hominins occurred while the region was still forested. Rapid tectonic uplift likely occurred in the early Pleistocene, changing the local elevation and broadly reorganizing the regional patterns of atmospheric circulation. This can be correlated with the rapid hominin evolution of the Quaternary period. Changes in climate at 2.8, 1.7, and 1.0 million years ago correlate well with observed transitions between recognized hominin species. It is difficult to differentiate correlation from causality in these paleoanthropological and paleoclimatological reconstructions, so these results must be interpreted with caution and related to the appropriate time-scales and uncertainties.
Ice ages
The eruption of the Toba supervolcano, 70,000 to 75,000 years ago reduced the average global temperature by 5 degrees Celsius for several years and may have triggered an ice age. It has been postulated that this created a bottleneck in human evolution. A much smaller but similar effect occurred after the eruption of Krakatoa in 1883, when global temperatures fell for about 5 years in a row.
Before the retreat of glaciers at the start of the Holocene (~9600 BC), ice sheets covered much of the northern latitudes and sea levels were much lower than they are today. The start of our present interglacial period appears to have helped spur the development of human civilization.
Role in human migration and agriculture
Climate change has been linked to human migration from as early as the end of the Pleistocene to the early twenty-first century. The effect of climate on available resources and living conditions such as food, water, and temperature drove the movement of populations and determined the ability for groups to begin a system of agriculture or continue a foraging lifestyle.
Groups such as the inhabitants of northern Peru and central Chile, the Saqqaq in Greenland, nomadic Eurasian tribes in Historical China, and the Natufian culture in the Levant all display migration reactions due to climatic change.
Further descriptions of specific cases
In northern Peru and central Chile climate change is cited as the driving force in a series of migration patterns from about 15,000 B.C. to approximately 4,500 B.C. Between 11,800 B.C. and 10,500 B.C. evidence suggests seasonal migration from high to low elevation by the natives while conditions permitted a humid environment to persist in both areas. Around 9,000 B.C. the lakes that periodically served as a home to the natives dried up and were abandoned until 4,500 B.C. This period of abandonment is a blank segment of the archeological record known in Spanish as the silencio arqueológico. During this break, there exists no evidence of activity by the natives in the lakes area. The correlation between climate and migratory patterns leads historians to believe the Central Chilean natives favored humid, low-elevation areas especially during periods of increased aridity.
The different inhabitants of Greenland, specifically in the west, migrated primarily in response to temperature change. The Saqqaq people arrived in Greenland around 4,500 B.P. and experienced moderate temperature variation for the first 1,100 years of occupation; near 3,400 B.P. a cooling period began that pushed the Saqqaq toward the west. A similar temperature fluctuation occurred around 2,800 B.P. that led to the abandonment of the inhabited Saqqaq region; this temperature shift was a decrease of about 4 °C over 200 years. Following the Saqqaq dominance, other groups such as the Dorset people inhabited west Greenland; the Dorset were sea-ice hunters that had tools adapted to the colder environment. The Dorset appeared to leave the region around 2,200 B.P. without clear connection to the changing environment. Following the Dorset occupation, the Norse began to appear around 1,100 B.P. in west Greenland during a significant warming period. However, a sharp decrease in temperature of about 4 °C over 80 years, beginning around 850 B.P., is thought to have contributed to the demise of the initial Norse occupation in western Greenland.
In Historical China over the past 2,000 years, migration patterns have centered around precipitation change and temperature fluctuation. Pastoralists moved in order to feed the livestock that they cared for and to forage for themselves in more plentiful areas. During dry periods or cooling periods the nomadic lifestyle became more prevalent because pastoralists were seeking more fertile ground. The precipitation was a more defining factor than temperature in terms of its effects on migration. The trend of the migrating Chinese showed that the northern pastoralists were more affected by the fluctuation in precipitation than the southern nomads. In a majority of cases, pastoralists migrated further southward during changes in precipitation. These movements were not classified by one large event or a specific era of movement; rather, the relationship between climate and nomadic migration is relevant from "a long term perspective and on a large spatial scale."
The Natufian population in the Levant was subject to two major climatic changes that influenced the development and separation of their culture. As a consequence of increased temperature, the expansion of the Mediterranean woodlands occurred approximately 13,000 years ago; with that expansion came a shift to sedentary foraging adopted by the surrounding population. Thus, a migration toward the higher-elevation woodlands took place and remained constant for nearly 2,000 years. This era ended when the climate became more arid and the Mediterranean forest shrank 11,000 years ago. Upon this change, some of the Natufian populations nearest sustainable land transitioned into an agricultural way of life; sustainable land was primarily near water sources. Those groups that did not reside near a stable resource returned to the nomadic foraging that was prevalent prior to sedentary life.
Historical and prehistoric societies
The rise and fall of societies have often been linked to environmental factors.
Evidence of a warm climate in Europe, for example, comes from archaeological studies of settlement and farming in the Early Bronze Age at altitudes now beyond cultivation, such as Dartmoor, Exmoor, the Lake District and the Pennines in Great Britain. The climate appears to have deteriorated towards the Late Bronze Age however. Settlements and field boundaries have been found at high altitude in these areas, which are now wild and uninhabitable. Grimspound on Dartmoor is well preserved and shows the standing remains of an extensive settlement in a now inhospitable environment.
Some parts of the present Saharan desert may have been populated when the climate was cooler and wetter, judging by cave art and other signs of settlement in Prehistoric Central North Africa.
Societal growth and urbanization
Approximately one millennium after the slowing of sea-level rise around 7 ka (about 7,000 years ago), many coastal urban centers rose to prominence around the world. It has been hypothesized that this is correlated with the development of stable coastal environments and ecosystems and an increase in marine productivity (also related to an increase in temperatures), which would provide a food source for hierarchical urban societies.
Societal collapse
Climate change has been associated with the historical collapse of civilizations, cities and dynasties. Notable examples of this include the Anasazi, Classic Maya, the Harappa, the Hittites, and Ancient Egypt. Other, smaller communities such as the Viking settlement of Greenland have also suffered collapse with climate change being a suggested contributory factor.
There are two proposed methods of Classic Maya collapse: environmental and non-environmental. The environmental approach uses paleoclimatic evidence to show that movements in the Intertropical Convergence Zone likely caused severe, extended droughts during a few time periods at the end of the archaeological record for the classic Maya. The non-environmental approach suggests that the collapse could be due to increasing class tensions associated with the building of monumental architecture and the corresponding decline of agriculture, increased disease, and increased internal warfare.
The Harappan and Indus civilizations were affected by drought 4,500–3,500 years ago. A decline in rainfall in the Middle East and Northern India 3,800–2,500 years ago is likely to have affected the Hittites and Ancient Egypt.
Medieval Warm Period
The Medieval Warm Period was a time of warm weather between about AD 800 and 1300, during the European medieval period.
Archaeological evidence supports studies of the Norse sagas, which describe the settlement of Greenland in the 10th century AD on land now quite unsuitable for cultivation. For example, excavations at one settlement site have shown the presence of birch trees during the early Viking period. In the case of the Norse, the Medieval Warm Period was associated with the Norse age of exploration and Arctic colonization, and the later colder periods led to the decline of those colonies. The same period records the discovery of an area called Vinland, probably in North America, which may also have been warmer than at present, judging by the alleged presence of grape vines.
Little Ice Age
Later examples include the Little Ice Age, well documented by paintings, documents (such as diaries) and events such as the frost fairs held on the frozen River Thames in the 17th and 18th centuries. The River Thames became narrower and faster-flowing after old London Bridge was demolished in 1831, and it was embanked in stages during the 19th century; both changes made the river less liable to freezing.
The Little Ice Age brought colder winters to parts of Europe and North America. In the mid-17th century, glaciers in the Swiss Alps advanced, gradually engulfing farms and crushing entire villages. The River Thames and the canals and rivers of the Netherlands often froze over during the winter, and people skated and even held frost fairs on the ice. The first Thames frost fair was in 1607; the last in 1814, although changes to the bridges and the addition of an embankment affected the river flow and depth, diminishing the possibility of freezes. The freeze of the Golden Horn and the southern section of the Bosphorus took place in 1622. In 1658, a Swedish army marched across the Great Belt to Denmark to invade Copenhagen. The Baltic Sea froze over, enabling sledge rides from Poland to Sweden, with seasonal inns built on the way. The winter of 1794/1795 was particularly harsh when the French invasion army under Pichegru could march on the frozen rivers of the Netherlands, while the Dutch fleet was fixed in the ice in Den Helder harbour. In the winter of 1780, New York Harbor froze, allowing people to walk from Manhattan to Staten Island. Sea ice surrounding Iceland extended for miles in every direction, closing that island's harbours to shipping.
The severe winters affected human life in ways large and small. The population of Iceland fell by half, but this was perhaps also due to fluorosis caused by the eruption of the volcano Laki in 1783. Iceland also suffered failures of cereal crops, and people moved away from a grain-based diet. The Norse colonies in Greenland starved and vanished (by the 15th century) as crops failed and livestock could not be maintained through increasingly harsh winters, though Jared Diamond noted that they had exceeded the agricultural carrying capacity before then. In North America, American Indians formed leagues in response to food shortages. In southern Europe, snowstorms were much more frequent in Portugal than they are today, with reports of heavy snowfalls in the winters of 1665, 1744 and 1886.
In contrast to its uncertain beginning, there is a consensus that the Little Ice Age ended in the mid-19th century.
Evidence of anthropogenic climate change
Some scientists have proposed that, through deforestation and agriculture, humans played a part in some historical climatic changes. Human-started fires have been implicated in the transformation of much of Australia from grassland to desert. If true, this would show that non-industrialized societies could have a role in influencing regional climate. Deforestation, desertification and the salinization of soils may have contributed to or caused other climatic changes throughout human history.
For a discussion of recent human involvement in climatic changes, see Attribution of recent climate change.
See also
CLIWOC, Climatological database for the world's oceans (1750–1854)
Global warming
History of climate change science
Temperature record
Little Ice Age
Alfred Thomas Grove
Jean Grove
Hubert Lamb
Gordon Manley
References
Further reading
Christian Pfister and Heinz Wanner: Climate and Society in Europe, The Last Thousand Years, Bern 2021, ISBN 978-3-258-08234-9.
External links
HistoricalClimatology.com
US Historical Climatology Network
Historical climatology and the cultural memory of extreme weather events – Exploring Environmental History Podcast featuring Christian Pfister
Climate change and society
Historical climatology
Paleoecology
Paleoecology (also spelled palaeoecology) is the study of interactions between organisms and/or interactions between organisms and their environments across geologic timescales. As a discipline, paleoecology interacts with, depends on and informs a variety of fields including paleontology, ecology, climatology and biology.
Paleoecology emerged from the field of paleontology in the 1950s, though paleontologists have conducted paleoecological studies since the creation of paleontology in the 1700s and 1800s. Combining the investigative approach of searching for fossils with the theoretical approach of Charles Darwin and Alexander von Humboldt, paleoecology took shape as paleontologists began examining both the ancient organisms they discovered and the reconstructed environments in which those organisms lived. Visual depictions of past marine and terrestrial communities have been considered an early form of paleoecology. The term "paleo-ecology" was coined by Frederic Clements in 1916.
Overview of paleoecological approaches
Classic paleoecology uses data from fossils and subfossils to reconstruct the ecosystems of the past. It involves the study of fossil organisms and their associated remains (such as shells, teeth, pollen, and seeds), which can help in the interpretation of their life cycle, living interactions, natural environment, communities, and manner of death and burial. Such interpretations aid the reconstruction of past environments (i.e., paleoenvironments). Paleoecologists have studied the fossil record to try to clarify the relationship animals have to their environment, in part to help understand the current state of biodiversity. They have identified close links between vertebrate taxonomic and ecological diversity, that is, between the diversity of animals and the niches they occupy. Classical paleoecology is a primarily reductionist approach: scientists conduct detailed analysis of relatively small groups of organisms within shorter geologic timeframes.
Evolutionary paleoecology uses data from fossils and other evidence to examine how organisms and their environments change throughout time. Evolutionary paleoecologists take the holistic approach of looking at both organism and environmental change, accounting for physical and chemical changes in the atmosphere, lithosphere and hydrosphere across time. By studying patterns of evolution and extinction in the context of environmental change, evolutionary paleoecologists are able to examine concepts of vulnerability and resilience in species and environments.
Community paleoecology uses statistical analysis to examine the composition and distribution of groups of plants or animals. By quantifying how plants or animals are associated, community paleoecologists are able to investigate the structures of ancient communities of organisms. Advances in technology have helped propel the field, through the use of physical models and computer-based analysis.
Major principles
While the functions and relationships of fossil organisms may not be observed directly (as in ecology), scientists can describe and analyze both individuals and communities over time. To do so, paleoecologists make the following assumptions:
All organisms are adapted and restricted to a particular environment, and are usually adapted to a particular lifestyle.
Essentially all organisms depend on another organism, whether directly or indirectly.
The fossil and physical records are inherently incomplete: the geologic record is selective, and some environments are more likely to be preserved than others. Taphonomy, which affects the over- and underrepresentation of fossils, is an extremely important consideration in interpreting fossil assemblages.
Uniformitarianism is the concept that processes that took place in the geologic past are the same as the ones that are observed taking place today. In paleoecology, uniformitarianism is used as a methodology: paleoecologists make inferences about ancient organisms and environments based on analogies they find in the present.
Paleoecological methods
The aim of paleoecology is to build the most detailed model possible of the life environment of previously living organisms found today as fossils. The process of reconstructing past environments requires the use of archives (e.g., sediment sequences), proxies (e.g., the micro or mega-fossils and other sediment characteristics that provide the evidence of the biota and the physical environment), and chronology (e.g., obtaining absolute (or relative) dating of events in the archive). Such reconstruction takes into consideration complex interactions among environmental factors such as temperatures, food supplies, and degree of solar illumination. Often much of this information is lost or distorted by the fossilization process or diagenesis of the enclosing sediments, making interpretation difficult.
Some other proxies for reconstructing past environments include charcoal and pollen, which record fire and vegetation history, respectively. Both can be found in lake and peat deposits and can provide moderate- to high-resolution information; they are well-studied methods often utilized in paleoecology.
The environmental complexity factor is normally tackled through statistical analysis of the available numerical data (quantitative paleontology or paleostatistics), while the study of post-mortem processes is known as the field of taphonomy.
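As a simple, hypothetical illustration of the kind of calculation involved in quantitative paleontology (not a method prescribed by any particular study, and with all taxon names and counts invented), the following Python sketch computes a Shannon diversity index for fossil assemblages from two sediment samples:

```python
# Illustrative sketch only: one simple paleostatistical calculation, a Shannon
# diversity index computed from hypothetical fossil counts per sample.
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical fossil assemblages (specimens counted per taxon) from two
# sediment samples in a stratigraphic sequence.
samples = {
    "sample_A (lower bed)": {"gastropods": 40, "bivalves": 35, "ostracods": 20, "foraminifera": 5},
    "sample_B (upper bed)": {"gastropods": 85, "bivalves": 10, "ostracods": 5},
}

for name, assemblage in samples.items():
    h = shannon_diversity(list(assemblage.values()))
    print(f"{name}: H' = {h:.2f}")
# A drop in H' up-section could hint at environmental change, though taphonomic
# bias (see the assumptions above) must be ruled out before interpreting it.
```

In practice such indices are only one input among many, and are weighed against taphonomic and chronological uncertainty in the archive.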
Quaternary
Because the Quaternary period is well represented in geographically extensive and high temporal-resolution records, many hypotheses arising from ecological studies of modern environments can be tested at the millennial scale using paleoecological data. In addition, such studies provide historical (pre-industrialization) baselines of species composition and disturbance regimes for ecosystem restoration, or provide examples for understanding the dynamics of ecosystem change through periods of large climate changes. Paleoecological studies are used to inform conservation, management and restoration efforts. In particular, fire-focused paleoecology is an informative field of study to land managers seeking to restore ecosystem fire regimes.
See also
Historical ecology
Palaeogeography
Paleolimnology
Palynology
Palaeogeography, Palaeoclimatology, Palaeoecology (peer-reviewed journal)
References
Bibliography
Taylor, P.D. and Wilson, M.A., 2003. Palaeoecology and evolution of marine hard substrate communities. Earth-Science Reviews 62: 1–103. wooster.edu
Acosta et al., 2018. Climate change and peopling of the Neotropics during the Pleistocene-Holocene transition. Boletín de la Sociedad Geológica Mexicana. http://boletinsgm.igeolcu.unam.mx/bsgm/index.php/component/content/article/368-sitio/articulos/cuarta-epoca/7001/1857-7001-1-acosta
Ecology
Subfields of ecology
Emigration
Emigration is the act of leaving a resident country or place of residence with the intent to settle elsewhere (to permanently leave a country). Conversely, immigration describes the movement of people into one country from another (to permanently move to a country). A migrant emigrates from their old country, and immigrates to their new country. Thus, both emigration and immigration describe migration, but from different countries' perspectives.
Demographers examine push and pull factors for people to be pushed out of one place and attracted to another. There can be a desire to escape negative circumstances such as shortages of land or jobs, or unfair treatment. People can be pulled to the opportunities available elsewhere. Fleeing from oppressive conditions, being a refugee and seeking asylum to get refugee status in a foreign country, may lead to permanent emigration.
Forced displacement refers to groups that are forced to abandon their native country, such as through enforced population transfer or the threat of ethnic cleansing. Refugees and asylum seekers are among the most marginalized of migrants, facing multiple hurdles both in their journey and in their efforts to integrate into new settings. Scholars have therefore called for cross-sector engagement from businesses, non-governmental organizations, educational institutions, and other stakeholders within the receiving communities.
History
Patterns of emigration have been shaped by numerous economic, social, and political changes throughout the world in the last few hundred years. For instance, millions of individuals fled poverty, violence, and political turmoil in Europe to settle in the Americas and Oceania during the 18th, 19th, and 20th centuries. Likewise, millions left South China in the Chinese diaspora during the 19th and early 20th centuries.
"Push" and "pull" factors
Demographers distinguish factors at the origin that push people out from those at the destination that pull them in. Motives to migrate can be either incentives attracting people to a new place, known as pull factors, or circumstances encouraging a person to leave their current home, known as push factors. The diversity of push and pull factors informs management scholarship in its efforts to understand migrant movement.
Push factors
Poor living conditions or overcrowding
Lack of employment or entrepreneurial opportunities
Lack of educational opportunities
Threat of arrest or punishment
Persecution or intolerance based on race, religion, gender or sexual orientation
Political corruption, lack of government transparency or freedom of speech
Inability to find a spouse for marriage
Lack of freedom to choose religion, or to choose no religion
Resource depletion, scarcity or austerity
Military draft, warfare or terrorism
Expulsion by armed force or coercion
Recession or economic collapse
Famine or drought
Cultural fights with other cultural groups
Pull factors
Higher quality of life, economic growth or lower cost of living
Encouragement to join relatives or fellow countrymen; chain migration
Quick wealth (as in a gold rush)
More job opportunities or promise of higher pay
Prosperity or economic surplus
Educational opportunity (including university for adults or K-12 for children)
Prepaid travel (as from relatives)
Building a new nation (historically)
Building specific cultural or religious communities
Political freedom
Cultural opportunities
Greater opportunity to find a spouse
Favorable climate
Ease of crossing boundaries
Reduced tariff
Criticism
Some scholars criticize the "push-pull" approach to understanding international migration. Regarding lists of positive or negative factors about a place, Jose C. Moya writes "one could easily compile similar lists for periods and places where no migration took place."
Emigration waves by country
Jews escaping from German-occupied Europe
Yerida (Jewish emigration from Israel)
Swedish emigration to the United States
Statistics
Unlike immigration, emigration in many countries is poorly documented: few if any records are kept of persons leaving a country on either a temporary or a permanent basis. Therefore, estimates of emigration must be derived from secondary sources such as the immigration records of the receiving country or records from other administrative agencies.
The number of emigrants worldwide has continued to grow, reaching 280 million in 2017.
In Armenia, for example, migration is calculated by counting people arriving in or leaving the country by air, rail or other means of transportation. The emigration rate is high: about 1.5% of the population leaves the country annually, and emigration has been part of Armenian culture since the 20th century. Between 1990 and 2005, for example, approximately 700,000–1,300,000 Armenians left the country. The rising emigration figures are a direct response to the country's socio-political and economic conditions. Among people aged 15 and above, internal migration (within the country) accounts for 28.7% of total migration, while international migration accounts for 71.3%. It is important to understand the reasons for both types of migration and the options available. Because opportunities are concentrated in the capital city, Yerevan, internal migration flows from villages and small cities to the country's biggest city. Both internal and international migration are driven mainly by work or study, and the main international destinations are Russia, France and the United States.
Emigration restrictions
Some countries restrict the ability of their citizens to emigrate to other countries. After 1668, the Qing Emperor banned Han Chinese migration to Manchuria. In 1681, the emperor ordered construction of the Willow Palisade, a barrier beyond which the Chinese were prohibited from encroaching on Manchu and Mongol lands.
The Soviet Socialist Republics of the later Soviet Union began such restrictions in 1918, with laws and borders tightening until even illegal emigration was nearly impossible by 1928. To strengthen this, they set up internal passport controls and individual city Propiska ("place of residence") permits, along with internal freedom of movement restrictions often called the 101st kilometre, rules which greatly restricted mobility within even small areas.
At the end of World War II in 1945, the Soviet Union occupied several Central European countries, together called the Eastern Bloc, with the majority of those living in the newly acquired areas aspiring to independence and wanting the Soviets to leave. Over 15 million people emigrated from the Soviet-occupied eastern European countries to the West in the five years immediately following World War II. By the early 1950s, the Soviet approach to controlling national movement was emulated by most of the rest of the Eastern Bloc. Restrictions implemented in the Eastern Bloc stopped most east–west migration, with only 13.3 million migrations westward between 1950 and 1990. However, hundreds of thousands of East Germans annually emigrated to West Germany through a "loophole" in the system that existed between East and West Berlin, where the four occupying World War II powers governed movement. The emigration resulted in a massive "brain drain" of younger educated professionals from East Germany to West Germany, such that nearly 20% of East Germany's population had migrated to West Germany by 1961. In 1961, East Germany erected a barbed-wire barrier that would eventually be expanded through construction into the Berlin Wall, effectively closing the loophole. In 1989, the Berlin Wall fell, followed by German reunification and, within two years, the dissolution of the Soviet Union.
By the early 1950s, the Soviet approach to controlling international movement was also emulated by China, Mongolia, and North Korea. North Korea still tightly restricts emigration, and maintains one of the strictest emigration bans in the world, although some North Koreans still manage to illegally emigrate to China. Other countries with tight emigration restrictions at one time or another included Angola, Egypt, Ethiopia, Mozambique, Somalia, Afghanistan, Burma, Democratic Kampuchea (Cambodia from 1975 to 1979), Laos, North Vietnam, Iraq, South Yemen and Cuba.
See also
Canvas ceiling
Deportation
Diaspora
Eastern Bloc emigration and defection
Émigré
Exile
Expatriate
Feminization of migration
Immigration
Foot voting
Human capital flight
Human migration
Settlement
International Organization for Migration
Migration Letters
Political asylum
Political migration
Population transfer
Refugee
Separation barrier
Snowbird (people)
Xenophobia
Notes
References
Further reading
Labour market efficiency and emigration in Slovakia and EU neighbouring countries,
External links
Translation from Galician to English of 4 Classic Emigration Ballads
Human migration
Population
Early medieval European dress
Early medieval European dress, from about 400 AD to 1100 AD, changed very gradually. The main feature of the period was the meeting of late Roman costume with that of the invading peoples who moved into Europe over this period. For a period of several centuries, people in many countries dressed differently depending on whether they identified with the old Romanised population, or the new populations such as Franks, Anglo-Saxons, Visigoths. The most easily recognisable difference between the two groups was in male costume, where the invading peoples generally wore short tunics, with belts, and visible trousers, hose or leggings. The Romanised populations, and the Church, remained faithful to the longer tunics of Roman formal costume, coming below the knee, and often to the ankles. By the end of the period, these distinctions had finally disappeared, and Roman dress forms remained mainly as special styles of clothing for the clergy – the vestments that have changed relatively little up to the present day.
Many aspects of clothing in the period remain unknown. This is partly because only the wealthy were buried with clothing; it was rather the custom that most people were buried in burial shrouds, also called winding sheets. Fully dressed burial may have been regarded as a pagan custom, and an impoverished family was probably glad to keep a serviceable set of clothing in use. Clothes were expensive for all except the richest in this period.
History
For many centuries people had worn simply sewn T-shaped tunics that they made themselves. It was only in the 11th century that a professional tailor class began to develop techniques to make fitted fashions. Some progress was made but 12th century fashions were usually too tightly fitted, and sleeves were too loose and too long.
Materials
Apart from the elite, most people in the period had low living standards, and clothes were probably home-made, usually from cloth made at a village level, and very simply cut. The elite imported silk cloth from the Byzantine and later Muslim worlds, and also probably cotton. They also could afford bleached linen and dyed and simply patterned wool woven in Europe itself. But embroidered decoration was probably very widespread, though not usually detectable in art. Most people probably wore only wool or linen, usually undyed, and leather or fur from locally hunted animals.
Archaeological finds have shown that the elite, especially men, could own superb jewellery, most commonly brooches to fasten their cloak, but also buckles, purses, weapon fittings, necklaces and other forms. The Sutton Hoo finds and the Tara Brooch are two of the most famous examples from Ireland and Britain in the middle of the period. In France, over three hundred gold and jewelled bees were found in the tomb of the Merovingian king Childeric I (died 481; all but two bees have since been stolen and lost), which are thought to have been sewn onto his cloak. Metalwork accessories were the clearest indicator of high-ranking persons. In Anglo-Saxon England, and probably most of Europe, only free people could carry a seax or knife, and both sexes normally wore one at the waist, to use for all purposes including as personal cutlery or self-defence.
Decoration
Both men's and women's clothing was trimmed with bands of decoration, variously embroidery, tablet-woven bands, or colourful borders woven into the fabric in the loom. The famous Anglo-Saxon opus anglicanum needlework was sought-after as far away as Rome.
Anglo-Saxons wore decorated belts.
Men's clothing
The primary garment was the tunic — generally a long fabric panel, folded over with a neck-hole cut into the fold, and sleeves attached. It was typical for the wealthy to display their affluence with a longer tunic made of finer and more colorful cloth, even silk or silk-trimmed. The tunic was usually belted, with either a leather or strong fabric belt. Depending on climate, trousers were tailored either loose or tight (or not worn at all if the weather was warm). The most basic leggings were strips of cloth wound round the leg, and held in place by long laces, presumably of leather, which is called cross-gartering. This may have been done with loose-fitting trousers also. Tighter-fitting hose were also worn.
Over this a sleeved tunic was worn, which for the upper classes gradually became longer towards the end of the period. For peasants and warriors it was always at the knee or above. For winter, outside or formal dress, a cloak or mantle completed the outfit. The Franks had a characteristic short cape called a "saie", which barely came to the waist. This was fastened on the left shoulder (so as not to impede sword strokes) by a brooch, typically a fibula and later a round brooch on the Continent, and nearly always a round one for Anglo-Saxons, while in Ireland and Scotland the particular style of the penannular or Celtic brooch was most common. In all areas the brooch could be a highly elaborate piece of jewellery in precious metal at the top of society, with the most elaborate Celtic brooches, like the Tara Brooch and Hunterston Brooch, perhaps the most ornate and finely made of all. The "cappa" or chaperon, a one-piece hood and cape over the shoulders was worn for cold weather, and the Roman straw hat for summer fieldwork presumably spread to the invading peoples, as it was universal by the High Middle Ages. Shoes, not always worn by the poor, were mostly the simple turnshoe – typically a cowhide sole and softer leather upper, which were sewn together, and then turned inside out.
Charlemagne
The biographers of Charlemagne record that he always dressed in the Frankish style, which means that he wore similar if superior versions of the clothes of better-off peasants over much of Europe for the later centuries of the period:
No English monarch of the time had his dress habits recorded in such detail. The biographers also record that he preferred English wool for his riding-cloaks (sagæ), and complained to Offa of Mercia about a trend to make cloaks imported into Frankia impractically short. A slightly later narrative told of his dissatisfaction with the short cloaks imported from Frisia: "What is the use of these pittaciola: I cannot cover myself up with them in bed, when riding I cannot defend myself against wind and rain, and getting down for Nature's call, the deficiency freezes the thighs". He was slightly over six feet tall.
Clergy
At the beginning of this period the clergy generally dressed the same as laymen in post-Roman populations; this changed completely during the period, as lay dress changed considerably but clerical dress hardly at all, and by the end all ranks of clergy wore distinctive forms of dress.
Clergy wore special short hairstyles called the tonsure; in England the choice between the Roman tonsure (the top of the head shaven) and the Celtic tonsure (only the front of the head shaven, from ear to ear) had to be resolved at the Synod of Whitby, in favour of Rome. Wealthy churches or monasteries came during this period to use richly decorated vestments for services, including opus anglicanum embroidery and imported patterned silks. Various forms of Roman-derived vestment, including the chasuble, cope, pallium, stole, maniple and dalmatic became regularised during the period, and by the end there were complicated prescriptions for who was to wear what, and when. To a large extent these forms of vestment survive today in the Catholic and (even more conservative) Anglican churches. The same process took place in the Byzantine world over the same period, which again retains early medieval styles in Eastern Orthodox vestments.
Secular (i.e. non-monastic) clergy usually wore a white alb, or loose tunic, tied at the waist with a cord (formally called a cincture), when not conducting services. Senior clergy seem always to have fastened their cloaks with a brooch in the centre of their chest, rather than at their right shoulder like laymen, who needed their sword-arm unencumbered.
Women's clothing
Women's clothing in Western Europe went through a transition during the early medieval period as the migrating Germanic tribes adopted Late Roman symbols of authority, including dress. In Northern Europe, at the beginning of the period around 400–500 AD in Continental Europe and slightly later in England, women's clothing consisted of at least one long-sleeved tunic fitted at the wrists and a tube-like garment, sometimes called a peplos, worn pinned at the shoulders. This garment was carried with the Germanic migrations to Iberia and Southern Europe. These garments could be decorated with metal applique, embroidery, and woven bands.
After around 500 AD, women's clothing moved towards layered tunics. In the territories of the Franks and their eventual client tribes the Alemanni and Bavarii, as well as in East Kent, women wore a long tunic as an inner layer and a long coat, closed in the front with multiple brooches and a belt, as an outer layer. An example of this can be seen in the interpretations of the grave of Queen Arnegunde. Not all graves identified as female contain the brooches necessary to close the front of the "coat dress", indicating that not all women wore that style, or at least that not all women were buried in that style. The brooches may have been too expensive for most women. The presence or absence of oval brooches in graves was seen as an indicator of social and marital status. Graves of shorter length were thought to be those of children and unmarried women who did not wear oval brooches, while longer graves were most likely those of married women who wore these brooches. However, an analysis by Solberg suggests that the absence of brooches in graves cannot be solely attributed to unmarried or underage women. Instead, less prosperous but still free farmers also did not wear these brooches. This research points to the fact that oval brooches were worn by women who had the same legal position as men or were in a position of authority on a farm.
The women of later Anglo-Saxon England, outside of East Kent, mostly wore an ensemble of multiple layered tunics. These women were particularly well known for their embroidery and may have decorated their clothing with silk and wool embroidery or woven bands. These tunics are often interpreted as having a style of neckline called a "keyhole neckline" that may have facilitated breast-feeding. This neckline would have been closed with a brooch for modesty and warmth. In later Anglo-Saxon England, there is visual evidence for a large poncho-like garment that may have been worn by noble or royal women.
The most famous garment of early medieval Scandinavia is the so-called Apron Dress (also called a trägerrock, hängerock, or smokkr). This may have evolved from the dress of the peoples of the early Germanic Iron Age. The garment is often interpreted as a tube shape (either fitted or loose) that is worn with straps over the shoulder and large brooches (sometimes called "turtle brooches") at the upper chest. Examples of appliqued silk bands used as decoration have been found in a number of graves. Not all graves identified as belonging to women contain the brooches that typify this type of garment, indicating that some women wore a different style of clothing. There is evidence from Dublin that at least some Norse women wore caps or other head-coverings; it is unclear, however, how pervasive this practice was.
On all top layers, the neckline, sleeves, and hems might be decorated with embroidery, tablet weaving, or appliqued silks, very richly so for the upper classes. Hose or socks may have been worn on the legs. Veils or other head coverings appear in art depicting northern European women beginning with the Romans, however this is not universal. More pervasive use of headcoverings, especially for married women, appears to follow the Christianization of the various Germanic tribes. Fur is described in many classical accounts of the Germanic tribes but has not survived well in archaeological remains, making it difficult to interpret how and where it was used in female clothing. In all regions, garments were primarily made out of wool and linen, with some examples of silk and hemp.
Regional variation
Areas where Roman influence remained strong include most of Italy except the North, South-Western France, as far north as Tours, and probably cities like Cologne in Germany. Iberia was largely ruled by the Moors in the later part of the period, and in any case had received rather different influences from the Visigoths compared to other invading peoples; Spanish dress remained distinctive well after the end of the period. The Visigothic Kingdom of Toulouse also ruled the South and West of France for the first two centuries of the period.
Early Anglo-Saxon women seem to have had a distinctive form of tubular dress, fastened on the shoulder with brooches, and belted. This style matches some German dresses from much earlier in the Roman period. After about 700, which roughly coincides with the general conversion to Christianity, they adopted the general Continental style.
The pagan Vikings, especially the women, dressed rather differently from most of Europe, with uncovered female hair, and an outer frock made of a single length of cloth, pinned with brooches at both shoulders. Under this they wore a sleeved undergarment, perhaps with an intervening wool tunic, especially in winter, when a jacket could be added as a final top layer.
See also
Anglo-Saxon dress
Anglo-Saxon brooches
Early Middle Ages
Byzantine dress
Byzantine silk
Gaelic clothing and fashion
English medieval clothing
History of Western fashion
Notes
References
Østergård, Else, Woven into the Earth: Textiles from Norse Greenland, Aarhus University Press, 2004,
Owen-Crocker, Gale R., Dress in Anglo-Saxon England, revised edition, Boydell Press, 2004,
Payne, Blanche; Winakor, Geitel; Farrell-Beck, Jane: The History of Costume, from the Ancient Mesopotamia to the Twentieth Century, 2nd Edn, pp. 1–28, HarperCollins, 1992.
Piponnier, Françoise, and Perrine Mane; Dress in the Middle Ages; Yale UP; 1997;
Youngs, Susan (ed), "The Work of Angels", Masterpieces of Celtic Metalwork, 6th–9th centuries AD, 1989, British Museum Press, London,
Further reading
Sylvester, Louise M., Mark C. Chambers and Gale R. Owen-Crocker (eds.), 2014, Medieval Dress and Textiles in Britain: A Multilingual Sourcebook Woodbridge, Suffolk and Rochester, NY Boydell & Brewer. .
Early Middle Ages
Medieval European costume
Turn of the century
The turn of the century is the transition from one century to another, or the time period before or after that change in centuries.
Usage
The phrase "turn of the century" is generally understood to mean the change (whether upcoming or past) closest to the current generation. During the 20th century, the phrase, unqualified, was used to refer to the transition from the 19th century to the 20th century. In the 21st century, "turn of the 21st century" (or 20th century) may be used to avoid ambiguity.
The Chicago Manual of Style has indicated some ambiguity on the exact meaning of the phrase "turn of the n-th century". For instance, if a statement describes an event as taking place "at the turn of the 18th century", it could refer to a period around the year 1701 or around 1800, that is, the beginning or end of that century. Consequently they recommend only using "turn of the century", in a context that makes clear which transition is meant, otherwise using different, unambiguous, wording.
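As a small illustrative sketch (assuming nothing beyond ordinary calendar arithmetic, and not taken from any style guide), the following Python snippet shows why the phrase is ambiguous: the 18th century both begins near 1701 and ends near 1800, so either transition can be called its "turn".

```python
# Small illustrative sketch (not from the article): ordinary calendar arithmetic
# shows why "turn of the N-th century" is ambiguous -- the N-th century both
# begins and ends at a "turn of the century".
def century_of(year):
    """Return the 1-indexed century containing `year`, e.g. 1701 -> 18."""
    return (year - 1) // 100 + 1

for year in (1700, 1701, 1799, 1800, 1801):
    print(f"{year} falls in century {century_of(year)}")
# 1701 lies at the start of the 18th century and 1800 at its end, so "the turn
# of the 18th century" could refer to either transition without more context.
```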
See also
Fin de siècle
Progressive Era
Edwardian Era
Information Age
War on Terror
References
Historical eras
English words and phrases
Progress
Progress is movement towards a perceived refined, improved, or otherwise desired state. It is central to the philosophy of progressivism, which holds that all human societies should strive towards advancements in technology, science, and the efficiency of social organization. Such advancements are generally achieved through direct societal action, as in social enterprise or activism, but can also arise through natural sociocultural evolution.
The concept of progress was introduced in early-19th-century social theories, especially the theories of social evolution described by Auguste Comte and Herbert Spencer. It was also present in the Enlightenment's philosophies of history. As a goal, social progress has been advocated by varying political ideologies with different theories on how it is to be achieved.
Measuring progress
Specific indicators for measuring progress can range from economic data, technical innovations, change in the political or legal system, and questions bearing on individual life chances, such as life expectancy and risk of disease and disability.
GDP growth has become a key orientation for politics and is often taken as a key figure to evaluate a politician's performance. However, GDP has a number of flaws that make it a bad measure of progress, especially for developed countries. For example, environmental damage is not taken into account nor is the sustainability of economic activity. Wikiprogress has been set up to share information on evaluating societal progress. It aims to facilitate the exchange of ideas, initiatives and knowledge. HumanProgress.org is another online resource that seeks to compile data on different measures of societal progress.
Our World in Data is a scientific online publication, based at the University of Oxford, that studies how to make progress against large global problems such as poverty, disease, hunger, climate change, war, existential risks, and inequality.
The mission of Our World in Data is to present "research and data to make progress against the world’s largest problems".
The Social Progress Index is a tool developed by the nonprofit organization Social Progress Imperative, which measures the extent to which countries meet the social and environmental needs of their citizens. There are fifty-two indicators in three areas or dimensions (Basic Human Needs, Foundations of Wellbeing, and Opportunity), which show the relative performance of nations.
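As a rough, hypothetical illustration of how a composite index of this general kind can be assembled (this is not the Social Progress Index's published methodology, and all indicator names and values below are invented), the following sketch min–max normalizes indicators to a 0–100 scale, averages them within each dimension, and then averages the dimension scores:

```python
# Hedged illustration of a generic composite index: min-max normalize each
# indicator to 0-100, average indicators within a dimension, then average the
# dimension scores. Indicator names and values are invented; this is NOT the
# Social Progress Index's published methodology.
def normalize(value, worst, best):
    """Scale a raw indicator so that `worst` maps to 0 and `best` to 100."""
    return 100 * (value - worst) / (best - worst)

# Hypothetical country data: dimension -> indicator -> (value, worst, best)
country = {
    "Basic Human Needs": {
        "undernourishment_pct": (4.0, 40.0, 0.0),   # lower is better
        "access_to_water_pct": (92.0, 0.0, 100.0),
    },
    "Foundations of Wellbeing": {
        "secondary_enrollment_pct": (88.0, 0.0, 100.0),
        "life_expectancy_years": (76.0, 45.0, 85.0),
    },
    "Opportunity": {
        "political_rights_score": (30.0, 0.0, 40.0),
    },
}

dimension_scores = {
    dim: sum(normalize(*ind) for ind in indicators.values()) / len(indicators)
    for dim, indicators in country.items()
}
overall = sum(dimension_scores.values()) / len(dimension_scores)

for dim, score in dimension_scores.items():
    print(f"{dim}: {score:.1f}")
print(f"Composite score: {overall:.1f}")
```

Real composite indices differ mainly in how indicators are selected, weighted, and aggregated, which is why different indices can rank the same countries quite differently.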
Indices that can be used to measure progress include:
Broad measures of economic progress
Disability-adjusted life year
Green national product
Gender-related Development Index
Genuine Progress Indicator
Gross National Happiness
Gross National Well-being
Happy Planet Index
Human Development Index
Legatum Prosperity Index
Social Progress Index
OECD Better Life Index
Subjective life satisfaction
Where-to-be-born Index
Wikiprogress
World Happiness Report
World Values Survey
Scientific progress
Scientific progress is the idea that the scientific community learns more over time, which causes a body of scientific knowledge to accumulate. The chemists in the 19th century knew less about chemistry than the chemists in the 20th century, and they in turn knew less than the chemists in the 21st century. Looking forward, today's chemists reasonably expect that chemists in future centuries will know more than they do.
From the 18th century through late 20th century, the history of science, especially of the physical and biological sciences, was often presented as a progressive accumulation of knowledge, in which true theories replaced false beliefs. Some more recent historical interpretations, such as those of Thomas Kuhn, tend to portray the history of science in terms of competing paradigms or conceptual systems in a wider matrix of intellectual, cultural, economic and political trends. These interpretations, however, have met with opposition for they also portray the history of science as an incoherent system of incommensurable paradigms, not leading to any scientific progress, but only to the illusion of progress.
Whether other intellectual disciplines make progress in the same way as the sciences is a matter of debate. For example, one might expect that today's historians know more about global history than their ancient counterparts (consider the histories of Herodotus). Yet, knowledge can be lost through the passage of time, or the criteria for evaluating what is worth knowing can change. Similarly, there is considerable disagreement over whether fields such as philosophy make progress - or even whether they aim at accumulating knowledge in the same way as the sciences.
Social progress
Aspects of social progress, as described by Condorcet, have included the disappearance of slavery, the rise of literacy, the lessening of inequalities between the sexes, reforms of harsh prisons and the decline of poverty. The social progress of a society can be measured based on factors such as its ability to address fundamental human needs, help citizens improve their quality of life, and provide opportunities for citizens to succeed.
Social progress is often aided by increases in GDP, although other factors are also relevant. An imbalance between economic growth and social progress hinders further economic progress and often leads to political instability and unrest. Lagging social progress also holds back economic growth in countries that fail to address human needs, build social capital, and create opportunity for their citizens.
Status of women
How progress improved the status of women in traditional society was a major theme of historians starting in the Enlightenment and continuing to today. British theorists William Robertson (1721–1793) and Edmund Burke (1729–1797), along with many of their contemporaries, remained committed to Christian- and republican-based conceptions of virtue, while working within a new Enlightenment paradigm. The political agenda related beauty, taste, and morality to the imperatives and needs of modern societies of a high level of sophistication and differentiation. Two themes in the work of Robertson and Burke—the nature of women in 'savage' and 'civilized' societies and 'beauty in distress'—reveals how long-held convictions about the character of women, especially with regard to their capacity and right to appear in the public domain, were modified and adjusted to the idea of progress and became central to modern European civilization.
Classics experts have examined the status of women in the ancient world, concluding that the Roman Empire, with its superior social organization, internal peace, and rule of law, allowed women to enjoy a somewhat better standing than they had in ancient Greece, where women were distinctly inferior. The inferior status of women in traditional China has raised the issue of whether the idea of progress requires a thoroughgoing rejection of traditionalism—a belief held by many Chinese reformers in the early 20th century.
Historians Leo Marx and Bruce Mazlish, asking "should we in fact abandon the idea of progress as a view of the past," answer that there is no doubt "that the status of women has improved markedly" in cultures that have adopted the Enlightenment idea of progress.
Modernization
Modernization was promoted by classical liberals in the 19th and 20th centuries, who called for the rapid modernization of the economy and society to remove the traditional hindrances to free markets and free movements of people. During the Enlightenment in Europe, social commentators and philosophers began to realize that people themselves could change society and change their way of life. Instead of society being made completely by gods, there was increasing room for the idea that people made their own society—and not only that: as Giambattista Vico argued, because people made their own society, they could also fully comprehend it. This gave rise to new sciences, or proto-sciences, which claimed to provide new scientific knowledge about what society was like, and how one might change it for the better.
In turn, this gave rise to progressive opinion, in contrast with conservative opinion. Social conservatives were skeptical about panaceas for social ills. According to conservatives, attempts to radically remake society normally make things worse. Edmund Burke was the leading exponent of this view, although latter-day liberals like Friedrich Hayek have espoused similar views. They argue that society changes organically and naturally, and that grand plans for the remaking of society, like the French Revolution, National Socialism and Communism, hurt society by removing the traditional constraints on the exercise of power.
The scientific advances of the 16th and 17th centuries provided a basis for Francis Bacon's book the New Atlantis. In the 17th century, Bernard le Bovier de Fontenelle described progress with respect to arts and the sciences, saying that each age has the advantage of not having to rediscover what was accomplished in preceding ages. The epistemology of John Locke provided further support and was popularized by the Encyclopedists Diderot, Holbach, and Condorcet. Locke had a powerful influence on the American Founding Fathers. The first complete statement of progress is that of Turgot, in his "A Philosophical Review of the Successive Advances of the Human Mind" (1750). For Turgot, progress covers not only the arts and sciences but, on their base, the whole of culture—manner, mores, institutions, legal codes, economy, and society. Condorcet predicted the disappearance of slavery, the rise of literacy, the lessening of inequalities between the sexes, reforms of harsh prisons and the decline of poverty.
John Stuart Mill's (1806–1873) ethical and political thought demonstrated faith in the power of ideas and of intellectual education for improving human nature or behavior. For those who do not share this faith the idea of progress becomes questionable.
Alfred Marshall (1842–1924), a British economist of the early 20th century, was a proponent of classical liberalism. In his highly influential Principles of Economics (1890), he was deeply interested in human progress and in what is now called sustainable development. For Marshall, the importance of wealth lay in its ability to promote the physical, mental, and moral health of the general population. After World War II, the modernization and development programs undertaken in the Third World were typically based on the idea of progress.
In Russia the notion of progress was first imported from the West by Peter the Great (1672–1725). An absolute ruler, he used the concept to modernize Russia and to legitimize his monarchy (unlike its usage in Western Europe, where it was primarily associated with political opposition). By the early 19th century, the notion of progress was being taken up by Russian intellectuals and was no longer accepted as legitimate by the tsars. Four schools of thought on progress emerged in 19th-century Russia: conservative (reactionary), religious, liberal, and socialist—the latter winning out in the form of Bolshevist materialism.
The intellectual leaders of the American Revolution, such as Benjamin Franklin, Thomas Paine, Thomas Jefferson and John Adams, were immersed in Enlightenment thought and believed the idea of progress meant that they could reorganize the political system to the benefit of the human condition; both for Americans and also, as Jefferson put it, for an "Empire of Liberty" that would benefit all mankind. In particular, Adams wrote “I must study politics and war, that our sons may have liberty to study mathematics and philosophy. Our sons ought to study mathematics and philosophy, geography, natural history and naval architecture, navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry and porcelain.”
Juan Bautista Alberdi (1810–1884) was one of the most influential political theorists in Argentina. Economic liberalism was the key to his idea of progress. He promoted faith in progress, while chiding fellow Latin Americans for blind copying of United States and Europe models. He hoped for progress through promotion of immigration, education, and a moderate type of federalism and republicanism that might serve as a transition in Argentina to true democracy.
In Mexico, José María Luis Mora (1794–1850) was a leader of classical liberalism in the first generation after independence, leading the battle against the conservative trinity of the army, the church, and the hacendados. He envisioned progress as both a process of human development by the search for philosophical truth and as the introduction of an era of material prosperity by technological advancement. His plan for Mexican reform demanded a republican government bolstered by widespread popular education free of clerical control, confiscation and sale of ecclesiastical lands as a means of redistributing income and clearing government debts, and effective control of a reduced military force by the government. Mora also demanded the establishment of legal equality between native Mexicans and foreign residents. His program, untried in his lifetime, became the key element in the Mexican Constitution of 1857.
In Italy, the idea that progress in science and technology would lead to solutions for human ills was connected to the nationalism that united the country in 1860. The Piedmontese Prime Minister Camillo Cavour envisaged the railways as a major factor in the modernization and unification of the Italian peninsula. The new Kingdom of Italy, formed in 1861, worked to speed up the processes of modernization and industrialization that had begun in the north, but were slow to arrive in the Papal States and central Italy, and were nowhere in sight in the "Mezzogiorno" (that is, Southern Italy and Sicily). The government sought to combat the backwardness of the poorer regions in the south and work towards augmenting the size and quality of the newly created Italian army so that it could compete on an equal footing with the powerful nations of Europe. In the same period, the government was legislating in favour of public education to fight the great problem of illiteracy, upgrade the teaching classes, improve existing schools, and procure the funds needed for social hygiene and care of the body as factors in the physical and moral regeneration of the race.
In China, in the 20th century the Kuomintang or Nationalist party, which ruled from the 1920s to the 1940s, advocated progress. The Communists under Mao Zedong adopted different models and their ruinous projects caused mass famines. After Mao's death, however, the new regime led by Deng Xiaoping (1904–1997) and his successors aggressively promoted modernization of the economy using capitalist models and imported western technology. This was termed the "Opening of China" in the West, and more broadly encompasses Chinese economic reform.
Among environmentalists, there is a continuum between two opposing poles. The one pole is optimistic, progressive, and business-oriented, and endorses the classic idea of progress. For example, bright green environmentalism endorses the idea that new designs, social innovations and green technologies can solve critical environmental challenges. The other is pessimistic about technological solutions, warning of impending global crisis (through climate change or peak oil, for example), and tends to reject the very idea of modernity and the myth of progress that is so central to modernization thinking. Similarly, Kirkpatrick Sale wrote about progress as a myth benefiting the few, with a pending environmental doomsday for everyone. An example is the philosophy of Deep Ecology.
Philosophy
Sociologist Robert Nisbet said that "No single idea has been more important than ... the Idea of Progress in Western civilization for three thousand years", and defines five "crucial premises" of the idea of progress:
value of the past
nobility of Western civilization
worth of economic/technological growth
faith in reason and scientific/scholarly knowledge obtained through reason
intrinsic importance and worth of life on earth
Sociologist P. A. Sorokin said, "The ancient Chinese, Babylonian, Hindu, Greek, Roman, and most of the medieval thinkers supporting theories of rhythmical, cyclical or trendless movements of social processes were much nearer to reality than the present proponents of the linear view." Unlike Confucianism and, to a certain extent, Taoism, which both look back to an ideal past, the Judeo-Christian-Islamic tradition believes in the fulfillment of history, which was translated into the idea of progress in the modern age. Therefore, Chinese proponents of modernization have looked to western models. According to Thompson, the late Qing dynasty reformer Kang Youwei believed he had found a model for reform and "modernisation" in the Ancient Chinese Classics.
Philosopher Karl Popper said that progress was not fully adequate as a scientific explanation of social phenomena. More recently, Kirkpatrick Sale, a self-proclaimed neo-luddite author, wrote exclusively about progress as a myth, in an essay entitled "Five Facets of a Myth".
Iggers (1965) says that proponents of progress underestimated the extent of man's destructiveness and irrationality, while critics misunderstood the role of rationality and morality in human behavior.
In 1946, psychoanalyst Charles Baudouin claimed modernity has retained the "corollary" of the progress myth, the idea that the present is superior to the past, while at the same time insisting that it is free of the myth:
A cyclical theory of history was adopted by Oswald Spengler (1880–1936), a German historian who wrote The Decline of the West in 1920. World War I, World War II, and the rise of totalitarianism demonstrated that progress was not automatic and that technological improvement did not necessarily guarantee democracy and moral advancement. British historian Arnold J. Toynbee (1889–1975) felt that Christianity would help modern civilization overcome its challenges.
The Jeffersonians said that history is not exhausted but that man may begin again in a new world. Besides rejecting the lessons of the past, they Americanized the idea of progress by democratizing and vulgarizing it to include the welfare of the common man as a form of republicanism. As Romantics deeply concerned with the past, collecting source materials and founding historical societies, the Founding Fathers were animated by clear principles. They saw man in control of his destiny, saw virtue as a distinguishing characteristic of a republic, and were concerned with happiness, progress, and prosperity. Thomas Paine, combining the spirit of rationalism and romanticism, pictured a time when America's innocence would sound like a romance, and concluded that the fall of America could mark the end of "the noblest work of human wisdom".
Historian J. B. Bury wrote in 1920:
In the postmodernist thought steadily gaining ground from the 1980s, the grandiose claims of the modernizers are steadily eroded, and the very concept of social progress is again questioned and scrutinized. In the new vision, radical modernizers like Joseph Stalin and Mao Zedong appear as totalitarian despots, whose vision of social progress is held to be totally deformed. Postmodernists question the validity of 19th-century and 20th-century notions of progress—both on the capitalist and the Marxist side of the spectrum. They argue that both capitalism and Marxism over-emphasize technological achievements and material prosperity while ignoring the value of inner happiness and peace of mind. Postmodernism posits that both dystopia and utopia are one and the same, overarching grand narratives with impossible conclusions.
Some 20th-century authors use the phrase "Myth of Progress" to denote the idea that the human condition will inevitably improve. In 1932, English physician Montague David Eder wrote: "The myth of progress states that civilization has moved, is moving, and will move in a desirable direction. Progress is inevitable... Philosophers, men of science and politicians have accepted the idea of the inevitability of progress." Eder argued that the advancement of civilization leads to greater unhappiness and a loss of control over the environment. The strongest critics of the idea of progress complain that it remains a dominant idea in the 21st century and shows no sign of diminished influence; the British philosopher John Gray (b. 1948) has been among its fiercest critics.
More recently, the idea of progress has been generalized to psychology, where it is related to the concept of a goal: progress is understood as "what counts as a means of advancing towards the end result of a given defined goal."
Antiquity
Historian J. B. Bury said that thought in ancient Greece was dominated by the theory of world-cycles, or the doctrine of eternal return, and was steeped in a belief parallel to the Judaic "fall of man", though here the fall was from a preceding "Golden Age" of innocence and simplicity. Time was generally regarded as the enemy of humanity, depreciating the value of the world. He credits the Epicureans with having had the potential to lay the foundation of a theory of progress through their materialistic acceptance of the atomism of Democritus as the explanation for a world without an intervening deity.
Robert Nisbet and Gertrude Himmelfarb have attributed a notion of progress to other Greeks. Xenophanes said "The gods did not reveal to men all things in the beginning, but men through their own search find in the course of time that which is better."
Islamic era
With the rise of the Umayyad and Abbasid caliphates and, later, the Ottoman Empire, progress in the Islamic civilizations was characterized by a system of translating books of various cultures into local languages (often Arabic and Persian), particularly Greek philosophical works in the Abbasid era; scientific and philosophical theories and claims were tested and refined, and then built upon with Islamic ideas, theologies, ontologies, and experimental results. The Round City of Baghdad was regarded as a model of progress for the region, where peoples of every religion and race sent their top students to study at its famous international academy, the House of Wisdom. Islamic Spain was also famed as a center of learning in Europe, where Jews and Christians flocked to Muslim halaqas, eager to bring the latest knowledge back to their countries; Muslim scholars' finesse in adapting classical knowledge (such as Greek philosophy) to Abrahamic contexts later helped spark the European Renaissance. Muslim rulers viewed knowledge, both scientific and philosophical, as a key to power, and promoted learning, scientific inquiry, and the patronage of scholars.
Renaissance
During the medieval period, science was to a large extent based on Scholastic interpretations of Aristotle's work, Scholasticism being the dominant method of thinking and learning of the Middle Ages. The Renaissance changed the European mindset, inducing a revolution of curiosity about nature in general and scientific advance, which in turn opened the gates for technical and economic progress. Furthermore, individual potential was seen as a never-ending quest to become God-like, paving the way for a view of man based on unlimited perfection and progress.
Age of Enlightenment (1650–1800)
In the Enlightenment, French historian and philosopher Voltaire (1694–1778) was a major proponent of progress. At first Voltaire's thought was informed by the idea of progress coupled with rationalism. His subsequent notion of the historical idea of progress saw science and reason as the driving forces behind societal advancement.
Immanuel Kant (1724–1804) argued that progress is neither automatic nor continuous and does not measure knowledge or wealth, but is a painful and largely inadvertent passage from barbarism through civilization toward enlightened culture and the abolition of war. Kant called for education, with the education of humankind seen as a slow process whereby world history propels mankind toward peace through war, international commerce, and enlightened self-interest.
Scottish theorist Adam Ferguson (1723–1816) defined human progress as the working out of a divine plan, though he rejected predestination. The difficulties and dangers of life provided the necessary stimuli for human development, while the uniquely human ability to evaluate led to ambition and the conscious striving for excellence. But he never adequately analyzed the competitive and aggressive consequences stemming from his emphasis on ambition even though he envisioned man's lot as a perpetual striving with no earthly culmination. Man found his happiness only in effort.
Some scholars consider the idea of progress affirmed in the Enlightenment to be a secularization of ideas from early Christianity and a reworking of ideas from ancient Greece.
Romanticism and 19th century
In the 19th century, Romantic critics charged that progress did not automatically better the human condition, and in some ways could make it worse. Thomas Malthus (1766–1834) reacted against the concept of progress as set forth by William Godwin and Condorcet because he believed that inequality of conditions is "the best (state) calculated to develop the energies and faculties of man". He said, "Had population and food increased in the same ratio, it is probable that man might never have emerged from the savage state." He argued that man's capacity for improvement has been demonstrated by the growth of his intellect, a form of progress which offsets the distresses engendered by the law of population.
German philosopher Friedrich Nietzsche (1844–1900) criticized the idea of progress as the 'weakling's doctrines of optimism,' and advocated undermining concepts such as faith in progress, to allow the strong individual to stand above the plebeian masses. An important part of his thinking consists of the attempt to use the classical model of 'eternal recurrence of the same' to dislodge the idea of progress.
Iggers (1965) argues there was general agreement in the late 19th century that the steady accumulation of knowledge and the progressive replacement of conjectural, that is, theological or metaphysical, notions by scientific ones was what created progress. Most scholars concluded this growth of scientific knowledge and methods led to the growth of industry and the transformation of warlike societies into industrial and pacific ones. They agreed as well that there had been a systematic decline of coercion in government, and an increasing role of liberty and of rule by consent. There was more emphasis on impersonal social and historical forces; progress was increasingly seen as the result of an inner logic of society.
Marxist theory (late 19th century)
Marx developed a theory of historical materialism, and described the mid-19th-century condition in The Communist Manifesto.
Furthermore, Marx described the process of social progress, which in his view is based on the interaction between the productive forces and the relations of production.
Marx understood capitalism as a process of continual change, in which the growth of markets dissolves all fixities in human life, and he argued that capitalism is progressive and non-reactionary. Marxism further states that capitalism, in its quest for higher profits and new markets, will inevitably sow the seeds of its own destruction. Marxists believe that, in the future, capitalism will be replaced by socialism and eventually communism.
Many advocates of capitalism such as Schumpeter agreed with Marx's analysis of capitalism as a process of continual change through creative destruction, but, unlike Marx, believed and hoped that capitalism could essentially go on forever.
Thus, by the beginning of the 20th century, two opposing schools of thought—Marxism and liberalism—believed in the possibility and the desirability of continual change and improvement. Marxists strongly opposed capitalism and the liberals strongly supported it, but the one concept they could both agree on was progress, which affirms the power of human beings to make, improve and reshape their society, with the aid of scientific knowledge, technology and practical experimentation. Modernity denotes cultures that embrace that concept of progress. (This is not the same as modernism, which was the artistic and philosophical response to modernity, some of which embraced technology while rejecting individualism, but more of which rejected modernity entirely.)
See also
Accelerating change
Constitutional economics
Frontierism
Fordism
Global social change research project
Happiness economics
Higher good
High modernism
Leisure satisfaction
Manifest Destiny
Moral progress
New Frontier
Progressive utilization theory
Psychometrics
Social development
Social change
Social justice
Social order
Social regress
Sociocultural evolution
Scientism
Technocentrism
Techno-progressivism
References
Further reading
Alexander, Jeffrey C., & Piotr Sztompka (1990). Rethinking Progress: Movements, Forces, and Ideas at the End of the 20th Century. Boston: Unwin Hymans.
Becker, Carl L. (1932). Progress and Power. Stanford University Press.
Brunetière, Ferdinand (1922). "La Formation de l'Idée de Progrès." In: Études Critiques. Paris: Librairie Hachette, pp. 183–250.
Burgess, Yvonne (1994). The Myth of Progress. Wild Goose Publications.
Bury, J. B. (1920). The Idea of Progress: An Inquiry into Its Origin and Growth. London: Macmillan and Co.
Dawson, Christopher (1929). Progress and Religion. London: Sheed & Ward.
Dodds, E.R. (1985). The Ancient Concept of Progress and Other Essays on Greek Literature and Belief. New York: Oxford University Press.
Doren, Charles Van (1967). The Idea of Progress. New York: Praeger.
Fay, Sidney B. (1947). "The Idea of Progress," American Historical Review, Vol. 52, No. 2, pp. 231–46; reflections after two world wars.
Hahn, Lewis Edwin and Paul Arthur Schilpp, eds. (1999). The Philosophy of Georg Henrik von Wright. Open Court.
Iggers, Georg G. (1965). "The Idea of Progress: A Critical Reassessment," American Historical Review, Vol. 71, No. 1, pp. 1–17; emphasis on 20th-century philosophies of history.
Inge, William Ralph (1922). "The Idea of Progress." In: Outspoken Essays, Second series. London: Longmans, Green & Co., pp. 158–83.
Kauffman, Bill (1998). With Good Intentions? Reflections on the Myth of Progress in America. Praeger; based on interviews in a small town.
Lasch, Christopher (1991). The True and Only Heaven: Progress and Its Critics. W. W. Norton.
Mackenzie, J. S. (1899). "The Idea of Progress," International Journal of Ethics, Vol. IX, No. 2, pp. 195–213; representative of late 19th-century approaches.
Mathiopoulos, Margarita (1989). History and Progress: In Search of the European and American Mind.
Melzer, Arthur M. et al., eds. (1995). History and the Idea of Progress; scholars discuss Machiavelli, Kant, Nietzsche, Spengler and others.
Nisbet, Robert (1979). "The Idea of Progress," Literature of Liberty, Vol. II, No. 1, pp. 7–37.
Nisbet, Robert (1980). History of the Idea of Progress. New York: Basic Books.
Norberg, Johan (2016). Progress: Ten Reasons to Look Forward to the Future. London: Oneworld Publications
Painter, George S. (1922). "The Idea of Progress," American Journal of Sociology, Vol. 28, No. 3, pp. 257–82.
Pinker, Steven (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Penguin Books
Pollard, Sidney (1971). The Idea of Progress: History and Society. New York: Pelican.
Rescher, Nicholas (1978). Scientific Progress. Oxford: Blackwell.
Ryan, Christopher (2019). Civilized to Death: The Price of Progress. Simon & Schuster
Sklair, Leslie (1970). The Sociology of Progress. London: Routledge and Kegan Paul.
Slaboch, Matthew W. (2018). A Road to Nowhere: The Idea of Progress and Its Critics. Philadelphia: The University of Pennsylvania Press.
Spadafora, David (1990). The Idea of Progress in Eighteenth Century Britain. Yale University Press.
Spalding, Henry Norman (1939). Civilization in East and West: An Introduction to the Study of Human Progress. London: Oxford University Press, H. Milford.
Teggart, F. J. (1949). The Idea of Progress: A Collection of Readings. Berkeley: University of California Press.
Tuveson, Ernest Lee (1949). Millennium and Utopia: A Study in the Background of the Idea of Progress. Berkeley: University of California Press.
Zarandi, Merhdad M., ed. (2004). Science and the Myth of Progress. World Wisdom Books.
External links
United Nations Economic and Social Development
Islam, Modernity and the Concept of Progress
Mediterraneanism
Mediterraneanism is an ideology that claims that there are distinctive characteristics that Mediterranean cultures have in common.
Giuseppe Sergi asserted that the Mediterranean race was "the greatest race...derived neither from the black nor white people...an autonomous stock in the human family." Italian Fascism initially adhered strongly to a similar version of Mediterraneanism, which claimed that a bond existed between all Mediterranean cultures and peoples, often placing Mediterranean people and cultures above other cultures. This form of Mediterraneanism stood in stark contrast to, and was partially a reaction against, the then-popular Nordicist racial theory common in North America and in Northwestern, Central, Germanic-speaking, and Northern Europe, which claimed that Mediterranean people were inferior to the supposed Nordic race.
History
The Italian anthropologist Giuseppe Sergi claimed that the Mediterranean race was "the greatest race in the world". He defined it as "the finest brunet race which has appeared in North Africa…derived neither from the black nor white peoples, but constitut[ing] an autonomous stock in the human family". Sergi claimed that the Mediterranean race probably historically spoke a Hamitic language related to the language of the prehistoric Egyptians, Iberians, and Libyans. Sergi noted that the Roman Empire led to the spread of Mediterranean civilization across Europe, and thus contemporary European civilization was bound by ancestry to the Mediterranean race.
Sergi rejected Nordicism's claims that the Nordic peoples were strongly Aryan, saying that the Aryans were not Nordic in appearance. Instead he claimed that Nordics were "Aryanized Euroafricans", and that the Nordic race is related to the Mediterranean race. Sergi responded to typical Nordicist claims of the superiority of Nordics over Mediterraneans by saying that the perceived lack of wealth or progress in Romance-speaking countries, compared with the countries of Northern Europe, arose because the Aryans of the North, living in frigid climates, had developed close-knit groups that allowed them to survive in that environment; as such, they became more disciplined, productive, and civic-minded than southern Europeans. However, Sergi rejected claims that the Aryans, a Eurasian people, were responsible for founding Greco-Latin civilization. Sergi described the original Aryans in Europe in a negative manner: "The Aryans were savages when they invaded Europe: they destroyed in part the superior civilization of the Neolithic populations, and could not have created the Greco-Latin civilization". Sergi claimed that the only contribution by the ancient Aryans to European civilization was the Indo-European languages.
Sergi claimed the Nordics had made no substantial contribution to pre-modern civilization, noting that "in the epoch of Tacitus, the Germans ... remained barbarians as in prehistoric times". He claimed that the Romans were unable to Romanize the Germans because the Germans were averse to the Romans' civilizing influence. He rejected Germanic scholars' claims that Germans were the saviors of a decadent post-Roman Italy. Instead Sergi claimed that the Germans were responsible for bringing on the Dark Ages of the medieval period, and that the Germans of that period were known for "delinquency, vagabondage, and ferocity".
C. G. Seligman supported Mediterraneanist claims, stating "it must, I think, be recognized that the Mediterranean race has actually more achievement to its credit than any other, since it is responsible for by far the greater part of Mediterranean civilization, certainly before 1000 B.C. (and probably much later), and so shaped not only the Aegean cultures, but those of Western as well as the greater part of Eastern Mediterranean lands, while the culture of their near relatives, the Hamitic pre-dynastic Egyptians, formed the basis of that of Egypt."
In the 1920s, the French historian Fernand Braudel invoked the conception of Mediterraneanism, including claims of Mediterranean universalism, to justify French colonialism in Algeria. Braudel had begun his doctoral studies in the 1920s, at the precise time when the issue of Mediterranean unity was being fiercely debated, and he supported the pro-unity argument. The argument for Mediterranean unity justified French colonialism in Algeria and accorded the Berbers a place of privilege amongst the peoples of Africa, as retainers of the lost Roman legacy there. It was claimed that if the Berbers could be culturally separated from the surrounding Arabo-Islamic culture, they would become natural allies of the French through their Mediterranean heritage, which would challenge anti-colonial sentiment.
Italian Fascist conception
At first, Italian Fascism promoted a variant of Mediterraneanism that, like Sergi's strain, held that Mediterranean people and cultures shared a common historical and cultural bond. Initially, this variant mostly avoided explicit racial connotations; its followers often rejected biological racism and stressed the cultural rather than the racial aspects of the Mediterranean peoples. Implicitly, however, this form of Mediterraneanism posited Mediterranean people and cultures as superior to Northwestern European groups, including the Germanic and Nordic peoples. This "defensive" form of Mediterraneanism arose mostly as a response to the then-popular theory of Nordicism, a racial theory popular at the time among Northwestern European and Germanic racial theorists, as well as racial theorists of Northwestern European descent in countries such as the United States, that viewed non-Nordic people, including some Italians and other Mediterranean people, as racially subordinate to the Nordic, Aryan, or Germanic peoples.
In a 1921 speech in Bologna, Benito Mussolini stated that "Fascism was born... out of a profound, perennial need of this our Aryan and Mediterranean race". In this speech Mussolini was referring to Italians as being the Mediterranean branch of the Indo-European Aryan race, in the sense of people of an Indo-European heritage rather than in the more famous Nordicist sense that was promoted by the Nazis. Italian Fascism emphasized that race was bound by spiritual and cultural foundations, and identified a racial hierarchy based on spiritual and cultural factors. Mussolini explicitly rejected notions that biologically "pure" races existed in modern times.
In 1929, Mussolini asserted that Jewish culture was Mediterranean and that Jews were native to Italy, after living there for a long time. He also praised their contributions to Italy despite their minority status.
Italian Fascism strongly rejected the Nordicist and Nazi conception of the Aryan race, which idealized "pure" Aryans as having certain physical traits defined as Nordic, such as fair skin and blond hair, traits uncommon among Mediterranean and Italian people and the often olive-skinned members of the so-called "Mediterranean race". The antipathy of Mussolini and other Italian Fascists toward Nordicism stemmed from the theories of German and Anglo-Saxon Nordicists who viewed Mediterranean peoples as racially degenerate. Both Nordicism and biological racism were often considered incompatible with early Italian fascist philosophy: Nordicism inherently subordinated Italians and other Mediterranean people beneath the Germans and Northwestern Europeans in its proposed racial hierarchy, and early Italian fascists, including Mussolini, often viewed race as a cultural and political invention rather than a biological reality, or saw physical race as something that could be overcome through culture. In a speech given in Bari in 1934, Mussolini reiterated his attitude toward Nordicism: "Thirty centuries of history allow us to look with supreme pity on certain doctrines which are preached beyond the Alps by the descendants of those who were illiterate when Rome had Caesar, Virgil and Augustus".
Nazi German influence and “Nordicist” Mediterraneanism
From the late 1930s through World War II, the Italian Fascists became divided in their stance on Mediterraneanism. Originally, Nazi-like Nordicist racial theories were found among only a small number of fringe Italian Fascists, mostly Germanophiles, anti-Semites, anti-intellectuals, and Northern Italians who regarded themselves as having Nordic or Germanic Lombard racial heritage; among most other Italian Fascists, Nordicism and "Nazi Aryanism" remained at odds with Italian Fascist theories on the greatness of the Mediterranean people. However, by 1938, as the alliance between Fascist Italy and Nazi Germany grew stronger and Nazi German policies and theories increasingly influenced Italian Fascist thought, many Italian Fascists began to embrace a new form of Mediterraneanism, a variant that mixed Nazi Nordicism with the original Mediterraneanism. Unlike other forms of Mediterraneanism, this form based its racial view on Nazism, asserted that Italians were part of the "white race" or "white Aryan race", and utilized white supremacism to justify colonialism.
In 1938, mere months before creating the Pact of Steel alliance with Nazi Germany, the Fascist Italian government created the Italian Racial Laws and officially but gradually recognized and embraced the racial myth of Italians having Nordic heritage and being of Nordic-Mediterranean descent. According to the Diary of Giuseppe Bottai, in a meeting with Fascist Party members, Mussolini declared that previous policy of focus on Mediterraneanism was to be replaced by a focus on Aryanism. Both Italian historian Renzo De Felice in his book La storia degli ebrei italiani sotto il fascismo (1961) and William Shirer in The Rise and Fall of the Third Reich (1960) suggest that Mussolini enacted the Italian Racial Laws and turned towards Nazi racial theories partially to appease his Nazi German allies, rather than to satisfy a genuine anti-Semitic sentiment among the Italian people.
With the rise in influence of pro-Nordicist Nazi Germany in Europe, and as the Fascist Italian regime sought unity with Nazi Germany, the Fascist regime gave previously-fringe Italian Nordicists prominent positions in the National Fascist Party (PNF), which aggravated the original Mediterraneanists in the party. Prominent (and previously fringe) Nordicists such as Julius Evola rejected Mediterraneanism and, in particular, Evola denounced Sergi's association of Southern Europeans with Northern Africans as "dangerous". Evola rejected biological determinism for race but was a supporter of spiritual Nordicism. In direct contradiction of the earlier or original forms of Mediterraneanism that embraced the idea of a shared origin or culture among all people of the Mediterranean, the Manifesto of Racial Scientists (1938) declared that Mediterranean Europeans were distinct from Mediterranean Africans and Mediterranean Asians and rejected claims that European Mediterraneans were related to the Mediterranean Semitic or Hamitic peoples.
In 1941, the PNF's Mediterraneanists, through the influence of Giacomo Acerbo, put forward a comprehensive definition of the Italian race. However these efforts were challenged by Mussolini's endorsement of Nordicist figures with the appointment of staunch spiritual Nordicist Alberto Luchini as head of Italy's Racial Office in May 1941, as well as with Mussolini becoming interested with Evola's spiritual Nordicism in late 1941. Acerbo and the Mediterraneanists in his High Council on Demography and Race sought to bring the regime back to supporting Mediterraneanism by thoroughly denouncing the pro-Nordicist Manifesto of the Racial Scientists. The Council recognized Aryans as being a linguistic-based group, and condemned the Manifesto for denying the influence of pre-Aryan civilization on modern Italy, saying that the Manifesto "constitutes an unjustifiable and undemonstrable negation of the anthropological, ethnological, and archaeological discoveries that have occurred and are occurring in our country". Furthermore, the Council denounced the Manifesto for "implicitly" crediting Germanic invaders of Italy in the guise of the Lombards for having "a formative influence on the Italian race in a disproportional degree to the number of invaders and to their biological predominance". The High Council claimed that the obvious superiority of the ancient Greeks and Romans in comparison with the ancient Germanic tribes made it inconceivable that Italian culture owed a debt to ancient Germans.
See also
Mediterranean cuisine
Olive skin
References