Constellation
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object.
The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily.
Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent around 400 BC in Babylonian or Chaldean astronomy. Constellations entered Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name.
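To make the pattern of such designations concrete, here is a minimal sketch in Python; the lookup table and helper function are hypothetical illustrations, though the genitive forms and the example designation 61 Cygni are standard.

    # Minimal sketch: a Flamsteed-style designation is a number followed by the
    # genitive form of the constellation's Latin name (helper is hypothetical).
    GENITIVES = {
        "Cygnus": "Cygni",
        "Ursa Major": "Ursae Majoris",
        "Orion": "Orionis",
    }

    def flamsteed_designation(number: int, constellation: str) -> str:
        """Return e.g. '61 Cygni' for (61, 'Cygnus')."""
        return f"{number} {GENITIVES[constellation]}"

    print(flamsteed_designation(61, "Cygnus"))   # 61 Cygni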
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the Teapot within the constellation Sagittarius and the Big Dipper in the constellation Ursa Major.
Terminology
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU-designated constellations.
A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac, between 23½° north and 23½° south of the celestial equator.
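The circumpolar condition reduces to a simple inequality: for an observer at latitude φ, a star of declination δ never sets when δ ≥ 90° − φ (northern observer) or δ ≤ −(90° + φ) for φ expressed as a negative southern latitude. A minimal sketch, ignoring atmospheric refraction:

    def is_circumpolar(declination_deg: float, latitude_deg: float) -> bool:
        """True if a star at this declination never sets for an observer
        at this latitude (atmospheric refraction ignored)."""
        if latitude_deg >= 0:    # northern-hemisphere observer
            return declination_deg >= 90.0 - latitude_deg
        else:                    # southern-hemisphere observer (latitude negative)
            return declination_deg <= -(90.0 + latitude_deg)

    # Polaris (dec ~ +89.3 deg) seen from London (lat ~ +51.5 deg):
    print(is_circumpolar(89.3, 51.5))   # True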
Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances from the Earth. Since each star has its own independent motion, all constellations change slowly over time; after tens to hundreds of thousands of years, familiar outlines become unrecognizable. Astronomers can predict past or future constellation outlines by measuring individual stars' common proper motions (cpm) with accurate astrometry and their radial velocities with astronomical spectroscopy.
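A rough sense of how such predictions work: to first order, a star's future position follows from its proper-motion components. A minimal sketch using a linear approximation (it ignores radial velocity and rigorous space-motion propagation, and the example values are purely illustrative):

    import math

    def future_position(ra_deg, dec_deg, pm_ra_cosdec_mas_yr, pm_dec_mas_yr, years):
        """Linearly extrapolate RA/Dec (degrees) given proper motions in
        milliarcseconds per year; pm_ra is assumed to include the cos(dec) factor."""
        mas_to_deg = 1.0 / 3.6e6   # 1 degree = 3,600,000 mas
        dec_new = dec_deg + pm_dec_mas_yr * years * mas_to_deg
        ra_new = ra_deg + (pm_ra_cosdec_mas_yr * years * mas_to_deg) / math.cos(math.radians(dec_deg))
        return ra_new % 360.0, dec_new

    # Illustrative values: a star with 1,000 mas/yr of proper motion drifts
    # about 2.8 degrees in 10,000 years, enough to distort a constellation figure.
    print(future_position(88.8, 7.4, 0.0, 1000.0, 10_000))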
Identification
The 88 constellations recognized by the International Astronomical Union as well as those that cultures have recognized throughout history are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient, Near Eastern and Mediterranean mythologies. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological and artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given.
History of the early constellations
Lascaux Caves, southern France
It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
Mesopotamia
Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Many Mesopotamian constellations later reappeared among the classical Greek constellations.
Ancient Near East
The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age.
The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names.
Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish ("the bier") actually corresponds to Ursa Major. The term Mazzaroth, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations.
Classical antiquity
There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
In the Ptolemaic Kingdom, native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy.
Ancient China
Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently.
Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). Because maps from this period were prepared on more scientific lines, they are considered more reliable.
A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on a planisphere of the Chinese sky on a stone plate; it was made accurately from observations and shows the supernova of 1054 in Taurus.
Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky, based on knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Ancient Greece
Many well-known constellations also have histories that connect to ancient Greece.
Early modern astronomy
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca.
Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially accepted designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina.
The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
Origin of the southern constellations
The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756.
Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
88 modern constellations
A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a small refracting telescope.
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
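In practice, determining which of the 88 constellations contains a given position means testing the point against Delporte's boundaries, precessing between B1875.0 and the coordinate's epoch as needed. A minimal sketch, assuming the third-party astropy library (whose get_constellation helper performs such a boundary lookup) is installed:

    # Requires the astropy package (an assumption; not part of the standard library).
    from astropy.coordinates import SkyCoord, get_constellation

    # Coordinates of Betelgeuse in the J2000 frame; the lookup handles the
    # transformation to the B1875.0 boundary epoch internally.
    star = SkyCoord(ra=88.79293899, dec=7.40706399, unit="deg")
    print(get_constellation(star))   # expected: 'Orion'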
Symbols
The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
Dark cloud constellations
The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars.
List of dark cloud constellations
Great Rift (astronomy)
Emu in the sky
Cygnus Rift
Serpens–Aquila Rift
Dark Horse (astronomy)
Rho Ophiuchi cloud complex
See also
Celestial cartography
Constellation family
Former constellations
IAU designated constellations
Lists of stars by constellation
Constellations listed by Johannes Hevelius
Constellations listed by Lacaille
Constellations listed by Petrus Plancius
Constellations listed by Ptolemy
References
Further reading
Mythology, lore, history, and archaeoastronomy
Allen, Richard Hinckley. (1899) Star-Names And Their Meanings, G. E. Stechert, New York, hardcover; reprint 1963 as Star Names: Their Lore and Meaning, Dover Publications, Inc., Mineola, NY, softcover.
Olcott, William Tyler. (1911); Star Lore of All Ages, G. P. Putnam's Sons, New York, hardcover; reprint 2004 as Star Lore: Myths, Legends, and Facts, Dover Publications, Inc., Mineola, NY, softcover.
Kelley, David H. and Milone, Eugene F. (2004) Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy, Springer, hardcover.
Ridpath, Ian. (2018) Star Tales 2nd ed., Lutterworth Press, softcover.
Staal, Julius D. W. (1988) The New Patterns in the Sky: Myths and Legends of the Stars, McDonald & Woodward Publishing Co., hardcover, softcover.
Atlases and celestial maps
General and nonspecialized – entire celestial heavens
Becvar, Antonin. Atlas Coeli. Published as Atlas of the Heavens, Sky Publishing Corporation, Cambridge, MA, with coordinate grid transparency overlay.
Norton, Arthur Philip. (1910) Norton's Star Atlas, 20th Edition 2003 as Norton's Star Atlas and Reference Handbook, edited by Ian Ridpath, Pi Press, hardcover.
National Geographic Society. (1957, 1970, 2001, 2007) The Heavens (1970), Cartographic Division of the National Geographic Society (NGS), Washington, DC, two-sided large map chart depicting the constellations of the heavens; as a special supplement to the August 1970 issue of National Geographic. Forerunner map as A Map of The Heavens, as a special supplement to the December 1957 issue. Current version 2001 (Tirion), with 2007 reprint.
Sinnott, Roger W. and Perryman, Michael A.C. (1997) Millennium Star Atlas, Epoch 2000.0, Sky Publishing Corporation, Cambridge, MA, and European Space Agency (ESA), ESTEC, Noordwijk, The Netherlands. Subtitle: "An All-Sky Atlas Comprising One Million Stars to Visual Magnitude Eleven from the Hipparcos and Tycho Catalogues and Ten Thousand Nonstellar Objects". 3 volumes, hardcover. Vol. 1, 0–8 Hours (Right Ascension), hardcover; Vol. 2, 8–16 Hours, hardcover; Vol. 3, 16–24 Hours, hardcover. Softcover version available. Supplemental coordinate grid transparent overlays available for separate purchase.
Tirion, Wil; et al. (1987) Uranometria 2000.0, Willmann-Bell, Inc., Richmond, VA, 3 volumes, hardcover. Vol. 1 (1987): "The Northern Hemisphere to −6°", by Wil Tirion, Barry Rappaport, and George Lovi, hardcover, printed boards. Vol. 2 (1988): "The Southern Hemisphere to +6°", by Wil Tirion, Barry Rappaport and George Lovi, hardcover, printed boards. Vol. 3 (1993) as a separate added work: The Deep Sky Field Guide to Uranometria 2000.0, by Murray Cragin, James Lucyk, and Barry Rappaport, hardcover, printed boards. 2nd Edition 2001 as collective set of 3 volumes – Vol. 1: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 2: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 3: Uranometria 2000.0 Deep Sky Field Guide by Murray Cragin and Emil Bonanno, , hardcover, printed boards.
Tirion, Wil and Sinnott, Roger W. (1998) Sky Atlas 2000.0, various editions. 2nd Deluxe Edition, Cambridge University Press, Cambridge, England.
Northern celestial hemisphere and north circumpolar region
Becvar, Antonin. (1962) Atlas Borealis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1972 and 1978 reprint, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Equatorial, ecliptic, and zodiacal celestial sky
Becvar, Antonin. (1958) Atlas Eclipticalis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1974, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Southern celestial hemisphere and south circumpolar region
Becvar, Antonin. Atlas Australis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler.
Catalogs
Becvar, Antonin. (1959) Atlas Coeli II Katalog 1950.0, Praha, 1960 Prague. Published 1964 as Atlas of the Heavens – II Catalogue 1950.0, Sky Publishing Corporation, Cambridge, MA
Hirshfeld, Alan and Sinnott, Roger W. (1982) Sky Catalogue 2000.0, Cambridge University Press and Sky Publishing Corporation, 1st Edition, 2 volumes. Vol. 1: "Stars to Magnitude 8.0", hardcover and softcover. Vol. 2 (1985): "Double Stars, Variable Stars, and Nonstellar Objects", hardcover and softcover. 2nd Edition (1991), with additional third author François Ochsenbein, 2 volumes. Vol. 1: hardcover and softcover; Vol. 2 (1999): softcover, a reprint of the 1985 edition.
Yale University Observatory. (1908, et al.) Catalogue of Bright Stars, New Haven, CT. Referred to commonly as the "Bright Star Catalogue". Various editions with various authors historically; the longest-term revising author was (Ellen) Dorrit Hoffleit. 1st Edition 1908. 2nd Edition 1940 by Frank Schlesinger and Louise F. Jenkins. 3rd Edition (1964), 4th Edition, 5th Edition (1991), and 6th Edition (pending, posthumous) by Hoffleit.
External links
IAU: The Constellations, including high quality maps.
Atlascoelestis, di Felice Stoppa.
Celestia free 3D realtime space-simulation (OpenGL)
Stellarium realtime sky rendering program (OpenGL)
Strasbourg Astronomical Data Center Files on official IAU constellation boundaries
Studies of Occidental Constellations and Star Names to the Classical Period: An Annotated Bibliography
Table of Constellations
Online Text: Hyginus, Astronomica translated by Mary Grant Greco-Roman constellation myths
Neave Planetarium Adobe Flash interactive web browser planetarium and stardome with realistic movement of stars and the planets.
Audio – Cain/Gay (2009) Astronomy Cast Constellations
The Greek Star-Map short essay by Gavin White
Bucur D. The network signature of constellation line figures. PLOS ONE 17(7): e0272270 (2022). A comparative analysis on the structure of constellation line figures across 56 sky cultures.
Printer (computing)
In computing, a printer is a peripheral machine which makes a persistent representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.
History
The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.
The first patented printing mechanism for applying a marking medium to a recording medium, more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was patented in 1962 by C. R. Winston of Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by the Phillips Process Company of Rochester, NY, under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.
The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.
The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.
The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in the Apple LaserWriter the following year set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace.
The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today.
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being Fused deposition modeling.
Types
Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. However, they are generally slow devices, ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high; this is offset by the convenience of on-demand printing. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm.
An ID card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks. It is either a direct-to-card printer (the more feasible option) or a retransfer printer.
A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
ID Card printers
A card printer is an electronic desktop printer with single card feeders which print and personalize plastic cards. In this respect they differ from, for example, label printers which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards. This is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate a card printer from the more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thickness and dimensions.
The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:
Thermal transfer Mainly used to personalize pre-printed plastic cards in monochrome. The color is "transferred" from the (monochrome) color ribbon onto the card.
Dye sublimation This process uses four panels of color according to the CMYK color ribbon. The card to be printed passes under the print head several times each time with the corresponding ribbon panel. Each color in turn is diffused (sublimated) directly onto the card. Thus it is possible to produce a high depth of color (up to 16 million shades) on the card. Afterwards a transparent overlay (O) also known as a topcoat (T) is placed over the card to protect it from mechanical wear and tear and to render the printed image UV resistant.
Reverse image technology The standard for high-security card applications that use contact and contactless smart chip cards. The technology prints images onto the underside of a special film that fuses to the surface of a card through heat and pressure. Since this process transfers dyes and resins directly onto a smooth, flexible film, the print-head never comes in contact with the card surface itself. As such, card surface interruptions such as smart chips, ridges caused by internal RFID antennae and debris do not affect print quality. Even printing over the edge is possible.
Thermal rewrite print process In contrast to the majority of other card printers, in the thermal rewrite process the card is not personalized through the use of a color ribbon, but by activating a thermal sensitive foil within the card itself. These cards can be repeatedly personalized, erased and rewritten. The most frequent use of these are in chip-based student identity cards, whose validity changes every semester.
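Combining the ID-1 card dimensions given earlier (85.60 × 53.98 mm) with the standard 300 dpi resolution gives the pixel size of a full-card image; a small worked example:

    MM_PER_INCH = 25.4
    DPI = 300                               # standard card-printer resolution
    CARD_W_MM, CARD_H_MM = 85.60, 53.98     # ISO/IEC 7810 ID-1 card

    dots_per_mm = DPI / MM_PER_INCH         # ~11.8 dots per mm, as stated above
    width_px = round(CARD_W_MM * dots_per_mm)
    height_px = round(CARD_H_MM * dots_per_mm)
    print(dots_per_mm, width_px, height_px) # ~11.81, 1011, 638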
Common printing problems: Many printing problems are caused by physical defects in the card material itself, such as deformation or warping of the card that is fed into the machine in the first place. Printing irregularities can also result from chip or antenna embedding that alters the thickness of the plastic and interferes with the printer's effectiveness. Other issues are often caused by operator errors, such as users attempting to feed non-compatible cards into the card printer, while other printing defects may result from environmental abnormalities such as dirt or contaminants on the card or in the printer. Reverse transfer printers are less vulnerable to common printing problems than direct-to-card printers, since with these printers the card does not come into direct contact with the printhead.
Variations in card printers:
Broadly speaking there are three main types of card printers, differing mainly by the method used to print onto the card. They are:
Near to Edge. This term designates the cheapest type of printing by card printers. These printers print up to 5 mm from the edge of the card stock.
Direct to Card, also known as "Edge to Edge Printing". The print-head comes into direct contact with the card. This printing type is currently the most popular, mostly due to cost; the majority of identification card printers today are of this type.
Reverse Transfer, also known as "High Definition Printing" or "Over the Edge Printing". The print-head prints to a transfer film backwards (hence the reverse) and then the printed film is rolled onto the card with intense heat (hence the transfer). The term "over the edge" is due to the fact that when the printer prints onto the film it has a "bleed", and when rolled onto the card the bleed extends to completely over the edge of the card, leaving no border.
Different ID Card Printers use different encoding techniques to facilitate disparate business environments and to support security initiatives. Known encoding techniques are:
Contact Smart Card – Contact smart cards carry an embedded chip and require direct contact with a conductive plate in the reader to register admission or transfer of information. The transmission of commands, data, and card status takes place over these physical contact points.
Contactless Smart Card – Contactless smart cards contain an integrated circuit that can store and process data while communicating with the terminal via radio frequency. Unlike contact smart cards, contactless cards feature an intelligent, re-writable microchip that can be read and written through radio waves.
HiD Proximity – HID's proximity technology allows fast, accurate reading while offering card or key tag read ranges from 4 to 24 inches (10 to 61 cm), depending on the type of proximity reader being used. Since these cards and key tags do not require physical contact with the reader, they are virtually maintenance- and wear-free.
ISO Magnetic Stripe - A magnetic stripe card is a type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card. The magnetic stripe, sometimes called swipe card or magstripe, is read by physical contact and swiping past a magnetic reading head.
Software
There are basically two categories of card printer software: desktop-based, and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for its needs. Typically, large organizations with high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs its IDs made instantly; an example is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features it needs, then the web-based option is perhaps a more affordable solution. The web-based solution suits small businesses that do not anticipate rapid growth, or organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network.
Other options
Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact-free RFID chip cards (smart cards). Thus card printers enable plastic cards to be encoded both visually and logically. Plastic cards can also be laminated after printing; this is done to achieve a considerable increase in durability and a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which cuts down printing time and reduces the margin of error. In such printers, one side of the ID card is printed, then the card is flipped in the flip station and the other side is printed.
Applications
Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. for personalized customer and members' cards, for sports ticketing and in local public transport systems for the production of season tickets, for the production of school and college identity cards as well as for the production of national ID cards.
Technology
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
Modern print technology
The following printing technologies are routinely found in modern printers:
Toner-based printers
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.
Liquid inkjet printers
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers
Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored and similar in consistency to candle wax; they are melted and fed into a piezo-crystal-operated print head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink, also called phase-change or hot-melt ink, was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001.
Dye-sublimation printers
A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers
Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink").
Obsolete and special-purpose printing technologies
The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use.
Impact printers
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used.
Typewriter-derived printers
Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
Teletypewriter-derived printers
The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
Dot-matrix printers
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however, the text is generally of poorer quality than that of impact printers that use letterforms (type).
Dot-matrix printers can be broadly divided into two major classes:
Ballistic wire printers
Stored energy printers
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8–16 times as long in high-resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century.
Line printers
Line printers print an entire line of text at a time. Four principal designs exist.
Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers are also found in adding machines and other numeric printers (POS), the dimensions are compact as only a dozen characters need to be supported.
Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular and comes in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer.
Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443.
A fourth design, used mainly on very early printers such as the IBM 402, features independent type bars, one for each printable position. Each bar contains the character set to be printed. The bars move vertically to position the character to be printed in front of the print hammer.
In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print.
Comb printers, also called line matrix printers, represent the fifth major design. These printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the example, in just eight cycles. The paper then advances, and the next pixel row is printed. Because far less motion is involved than in a conventional dot matrix printer, these printers are very fast compared to dot matrix printers and are competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013.
Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.
Liquid ink electrostatic printers
Liquid ink electrostatic printers use a chemically coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating, the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.)
Worldwide, most survey offices used this type of printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in large-format widths and also offered 6-color printing. They were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.
Plotters
Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.
Other printers
A number of other sorts of printers are important for historical reasons, or for special purpose uses.
Digital minilab (photographic paper)
Electrolytic printers
Spark printer
Barcode printer multiple technologies, including: thermal printing, inkjet printing, and laser printing barcodes
Billboard / sign paint spray printers
Laser etching (product packaging) industrial printers
Microsphere (special paper)
Attributes
Connectivity
Printers can be connected to computers in many ways: directly by a dedicated data cable such as the USB, through a short-range radio like Bluetooth, a local area network using cables (such as the Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device.
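For example, on a system with a CUPS/BSD-style print spooler, a networked printer configured as a named queue can receive a job from code or from the command line; a minimal sketch (the queue name is a placeholder):

    import subprocess

    def print_file(path: str, queue: str = "office_mono") -> None:
        """Send a file to a named print queue via the BSD-style lpr command;
        'lpr -P' selects the destination printer (queue name is a placeholder)."""
        subprocess.run(["lpr", "-P", queue, path], check=True)

    # print_file("report.pdf")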
Printer control languages
Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity of mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster format.
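To make the idea of a printer control language concrete, the sketch below assembles a small job from a few widely documented ESC/P escape sequences (ESC @ to initialize the printer, ESC E and ESC F to switch bold on and off). The exact command set and its interpretation vary by printer model, so this is an illustrative sketch under that assumption rather than a universal recipe; the resulting bytes could be delivered with a raw transport such as the socket helper sketched above.

```python
ESC = b"\x1b"

# A few common ESC/P commands (Epson dot-matrix heritage); support varies by model.
INIT      = ESC + b"@"   # reset the printer to its power-on defaults
BOLD_ON   = ESC + b"E"   # select bold (emphasized) printing
BOLD_OFF  = ESC + b"F"   # cancel bold printing
FORM_FEED = b"\x0c"      # eject the current page

def build_escp_job(title: str, body: str) -> bytes:
    """Assemble a tiny ESC/P job: initialize, print a bold title line,
    print plain body text, then eject the page."""
    return (INIT
            + BOLD_ON + title.encode("ascii") + b"\r\n" + BOLD_OFF
            + body.encode("ascii") + b"\r\n"
            + FORM_FEED)

job = build_escp_job("Quarterly report", "Plain body text follows the bold title.")
```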
Printing speed
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America.
Printing mode
The data received by a printer may be:
A string of characters
A bitmapped image
A vector image
A computer program written in a page description language, such as PCL or PostScript
Some printers can process all four types of data; others cannot.
Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots.
Pen plotters typically process vector images. Inkjet-based plotters can adequately reproduce all four.
Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today.
Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
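As an illustration of the bitmap approach described above, the sketch below rasterizes a line of text into a 1-bit image with the Pillow library; a driver would then wrap such an image in whatever raster format the target printer expects. The page size, text position and default font used here are arbitrary assumptions made only for the example.

```python
from PIL import Image, ImageDraw, ImageFont

def rasterize_text(text: str, width_px: int = 1200, height_px: int = 200) -> Image.Image:
    """Render text into a 1-bit bitmap, the lowest-common-denominator form
    that almost any raster printer can accept."""
    page = Image.new("1", (width_px, height_px), color=1)   # 1 = white background
    draw = ImageDraw.Draw(page)
    font = ImageFont.load_default()                          # placeholder font for the example
    draw.text((20, 80), text, fill=0, font=font)             # 0 = black pixels
    return page

bitmap = rasterize_text("Rendered on the host, printed as dots.")
bitmap.save("page.png")  # a real driver would convert this into the printer's raster format
```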
Monochrome, color and photo printers
A monochrome printer can produce only images in shades of a single color. Most monochrome printers can produce only two tones, black (ink) and white (no ink). With halftoning techniques, however, such a printer can produce acceptable grey-scale images too.
A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.
Page yield
The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before the cartridge needs to be refilled or replaced.
The actual number of pages yielded by a specific cartridge depends on a number of factors.
For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.
Economics
To compare operating expenses fairly between printers that use relatively small ink cartridges and printers that use larger, more expensive toner cartridges (which hold more toner and so print more pages before they need to be replaced), many people prefer to estimate operating expenses in terms of cost per page (CPP).
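The cost-per-page comparison described above reduces to simple arithmetic, as in the sketch below. The prices and page yields are hypothetical figures chosen only for illustration; real values vary widely by model and region.

```python
def cost_per_page(cartridge_price: float, page_yield: int,
                  printer_price: float = 0.0, pages_over_life: int = 0) -> float:
    """Estimate cost per page. If a printer price and an expected lifetime page
    count are supplied, the hardware cost is amortized over those pages."""
    consumable_cpp = cartridge_price / page_yield
    hardware_cpp = (printer_price / pages_over_life) if pages_over_life else 0.0
    return consumable_cpp + hardware_cpp

# Hypothetical comparison: a cheap inkjet with small cartridges vs. a pricier laser.
inkjet = cost_per_page(cartridge_price=25.0, page_yield=200,
                       printer_price=60.0, pages_over_life=10_000)
laser = cost_per_page(cartridge_price=80.0, page_yield=2_500,
                      printer_price=250.0, pages_over_life=50_000)
print(f"inkjet: {inkjet:.3f} per page, laser: {laser:.3f} per page")
```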
Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges of this business model, choose to make more money on printers and less on ink, and promote this approach in their advertising campaigns. This results in two clearly different propositions: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer's decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.
Printer steganography
Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
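The encoding used in these yellow-dot patterns is manufacturer-specific and largely undocumented, so the sketch below shows only the generic first step a researcher might take: locating faint yellowish marks in an RGB scan of a printed page with the Pillow library. It decodes nothing, and the color thresholds and filename are assumptions for illustration only.

```python
from PIL import Image

def find_yellowish_pixels(scan_path: str, min_rg: int = 200, max_b: int = 180):
    """Return coordinates of pixels that look yellowish (high red and green,
    noticeably lower blue) in an RGB scan; a crude first pass at locating
    possible tracking dots against white paper."""
    img = Image.open(scan_path).convert("RGB")
    width, height = img.size
    pixels = img.load()
    hits = []
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[x, y]
            if r >= min_rg and g >= min_rg and b <= max_b:
                hits.append((x, y))
    return hits

# Hypothetical usage; "scanned_page.png" is a placeholder filename.
# dots = find_yellowish_pixels("scanned_page.png")
```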
Manufacturers and market share
As of 2020-2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic.
See also
Campus card
Cardboard modeling
Dye-sublimation printer
History of printing
Label printer
List of printer companies
Print (command)
Printer driver
Print screen
Print server
Printer friendly (also known as a printable version)
Printer point
Printer (publishing)
Printmaking
Smart card
Typewriter ribbon
3D printing
References
External links
Computer printers
Office equipment
Typography
Articles containing video clips |
5278 | https://en.wikipedia.org/wiki/Copyright | Copyright | A copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.
Some jurisdictions require that copyrighted works be "fixed" in a tangible form. A copyright is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent.
Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright, while others recognize copyright in any completed work without formal registration. When the copyright of a work expires, it enters the public domain.
History
Background
The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text. Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high, and significantly supplemented the incomes of many academics.
Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success. After copyright law became established (in 1710 in England and Scotland, and in the 1840s in German-speaking areas) the low-price mass market vanished, and fewer, more expensive editions were published; distribution of scientific and technical information was greatly reduced.
Conception
The concept of copyright first developed in England. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed.
The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). The Copyright Act of 1814 extended more rights for authors but did not protect British works from being reprinted in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries who signed the agreement, although the US did not join the Berne Convention until 1989.
In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors’ published works, authority was granted to the states to protect authors’ unpublished works. The most recent major overhaul of copyright in the US, the 1976 Copyright Act, extended federal copyright to works as soon as they are created and "fixed", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to "life of the author plus 50 years". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially.
Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se.
Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture.
National copyrights
Often seen as the first real copyright law, the 1709 British Statute of Anne gave the publishers rights for a fixed period, after which the copyright expired.
The act also alluded to individual rights of the artist. It began, "Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:". A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws.
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs.
The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14‑year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others.
Copyright law was enacted rather late in German states, and the historian Eckhard Höffner argues that the absence of copyright laws in the early 19th century encouraged publishing, was profitable for authors, led to a proliferation of books, enhanced knowledge, and was ultimately an important factor in the ascendency of Germany as a power during that century. However, empirical evidence derived from the exogenous differential introduction of copyright in Napoleonic Italy shows that "basic copyrights increased both the number and the quality of operas, measured by their popularity and durability".
International copyright treaties
The 1886 Berne Convention first established recognition of copyrights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, copyrights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" a copyright in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all copyrights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the copyright expires. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the Convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention provides that developing countries may issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the Convention. This was a special provision that had been added at the time of the 1971 revision of the Convention, because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989.
The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations.
The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application.
In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. This organization was later succeeded by the World Intellectual Property Organization (WIPO), which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual property provisions relating to copyright.
Copyright laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union or World Trade Organization require their member states to comply with them.
Obtaining protection
Ownership
The original holder of the copyright may be the employer of the author rather than the author themself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer, making it a "work for hire". Typically, the first owner of a copyright is the person who created the work, i.e. the author. But when more than one person creates the work, a case of joint authorship can be made provided some criteria are met.
Eligible works
Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions.
Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's. Note additionally that Mickey Mouse is not copyrighted because characters cannot be copyrighted; rather, Steamboat Willie is copyrighted and Mickey Mouse, as a character in that copyrighted work, is afforded protection.
Originality
Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead.
Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other.
Registration
In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.)
A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work.
Fixing
The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance".
Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto.
Copyright notice
Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved, which indicates that the copyright holder reserves all rights for their own use, was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it, although whether it is watermarked, signed, or otherwise marked as copyrighted is another matter.
In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful.
Enforcement
Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing)
In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court.
"... by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required."
Copyright infringement
For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement.
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, but instead has the potential to increase them.
According to the IP Commission Report the annual cost of intellectual property theft to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud.
Rights granted
According to World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights.
Economic rights
With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit:
reproduction of the work in various forms, such as printed publications or sound recordings;
distribution of copies of the work;
public performance of the work;
broadcasting or other communication of the work to the public;
translation of the work into other languages; and
adaptation of the work, such as turning a novel into a screenplay.
Moral rights
Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights:
the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and
the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author's honour or reputation (sometimes called the right of integrity).
These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as a part of the debates held at the US Copyright Office on the question of including moral rights as a part of the framework of copyright law in the United States, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. Further, the Office concluded that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole.
Under copyright law in the United States, several exclusive rights are granted to the holder of a copyright, as listed below:
protection of the work;
to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed, etc.
to produce copies or reproductions of the work and to sell those copies; (including, typically, electronic copies)
to import or export the work;
to create derivative works; (works that adapt the original work)
to perform or display the work publicly;
to sell or cede these rights to others;
to transmit or display by radio, video or internet.
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This aspect of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit the holder not to use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right.
UK copyright law gives creators both economic rights and moral rights. ‘Copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, generally known as the right of attribution, and the right not to have one's work subjected to ‘derogatory treatment’, that is, the right of integrity.
Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyrights Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957.
Duration
Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire.
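As a worked example of the life-plus-term rule and the calendar-year convention described above, the sketch below computes the year in which a work would enter the public domain under a simple "life of the author plus N years" regime. It deliberately ignores the national exceptions, wartime extensions and transitional rules discussed below, so it is illustrative only.

```python
def public_domain_year(author_death_year: int, term_years: int = 70) -> int:
    """Under a 'life plus term_years' rule where protection runs to the end of
    the calendar year, the work enters the public domain on 1 January of the
    year after (death year + term)."""
    return author_death_year + term_years + 1

# Example: works of an author who died in 1950, under a life-plus-70 regime,
# enter the public domain on 1 January 2021.
assert public_domain_year(1950, 70) == 2021
```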
The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those.
In the United States, all books and other works, except for sound recordings, published before 1928 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country.
But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries.
In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point.
Limitations and exceptions
In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents.
Idea–expression dichotomy and the merger doctrine
The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
The first-sale doctrine and exhaustion of rights
Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores.
Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. It is important to note that the first-sale doctrine permits the transfer of the particular legitimate copy involved. It does not permit making or distributing additional copies.
In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission. The defendant, without permission from the publisher, imported the textbooks and resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation.
In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible.
Fair use and fair dealing
Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are:
the purpose and character of one's use;
the nature of the copyrighted work;
what amount and proportion of the whole work was taken;
the effect of the use upon the potential market for or value of the copyrighted work.
In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer.
In the United States the AHRA (Audio Home Recording Act Codified in Section 10, 1992) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders.
Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution.
EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are:
photographic reproductions on paper or any similar medium of works (excluding sheet music) provided that the rightholders receive fair compensation;
reproduction made by libraries, educational establishments, museums or archives, which are non-commercial;
archival reproductions of broadcasts;
uses for the benefit of people with a disability;
for demonstration or repair of equipment;
for non-commercial research or private study;
when used in parody.
Accessible copies
It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder.
Religious Service Exemption
In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright.
Useful articles
In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections.
Transfer, assignment and licensing
A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to, from production and marketing capabilities far beyond those of the author. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time.
A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction.
Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify.
Free licenses
Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including by order of longevity the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses.
Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable.
Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. Approximately 130 million individuals had received such licenses.
Criticism
Some sources are critical of particular aspects of the copyright system. This is known as a debate over copynorms. Particularly against the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, and of hyperlinking and framing.
Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig. Lessig coined the term permission culture to describe a worst-case system. The documentaries Good Copy Bad Copy and RiP!: A Remix Manifesto discuss copyright. Some suggest an alternative compensation system. In Europe, consumers have pushed back against the rising costs of music, films and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web.
Public domain
Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying public domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright.
See also
Adelphi Charter
Artificial scarcity
Authors' rights and related rights, roughly equivalent concepts in civil law countries
Conflict of laws
Copyfraud
Copyleft
Copyright abolition
Copyright Alliance
Copyright alternatives
Copyright for Creativity
Copyright in architecture in the United States
Copyright on the content of patents and in the context of patent prosecution
Criticism of copyright
Criticism of intellectual property
Directive on Copyright in the Digital Single Market (European Union)
Copyright infringement
Copyright Remedy Clarification Act (CRCA)
Digital rights management
Digital watermarking
Entertainment law
Freedom of panorama
Information literacies
Intellectual property protection of typefaces
List of Copyright Acts
List of copyright case law
Literary property
Model release
Paracopyright
Philosophy of copyright
Photography and the law
Pirate Party
Printing patent, a precursor to copyright
Private copying levy
Production music
Rent-seeking
Reproduction fees
Samizdat
Software copyright
Threshold pledge system
World Book and Copyright Day
References
Further reading
Ellis, Sara R. Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic, 78 Tenn. L. Rev. 163 (2010).
Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002.
Lehman, Bruce: Intellectual Property and the National Information Infrastructure (Report of the Working Group on Intellectual Property Rights, 1995)
Lindsey, Marc: Copyright Law on Campus. Washington State University Press, 2003. .
Mazzone, Jason. Copyfraud. SSRN
McDonagh, Luke. Is Creative use of Musical Works without a licence acceptable under Copyright? International Review of Intellectual Property and Competition Law (IIC) 4 (2012) 401–426, available at SSRN
Rife, Martine Courant. Convention, Copyright, and Digital Writing (Southern Illinois University Press; 2013) 222 pages; examines legal, pedagogical, and other aspects of online authorship.
Shipley, David E. "Thin But Not Anorexic: Copyright Protection for Compilations and Other Fact Works" UGA Legal Studies Research Paper No. 08-001; Journal of Intellectual Property Law, Vol. 15, No. 1, 2007.
Silverthorne, Sean. Music Downloads: Pirates or Customers? Harvard Business School Working Knowledge, 2004.
Sorce Keller, Marcello. "Originality, Authenticity and Copyright", Sonus, VII(2007), no. 2, pp. 77–85.
Rose, M. (1993), Authors and Owners: The Invention of Copyright, London: Harvard University Press
Loewenstein, J. (2002), The Author's Due: Printing and the Prehistory of Copyright, London: University of Chicago Press.
External links
A simplified guide.
WIPOLex from WIPO; global database of treaties and statutes relating to intellectual property
Copyright Berne Convention: Country List List of the 164 members of the Berne Convention for the protection of literary and artistic works
Copyright and State Sovereign Immunity, U.S. Copyright Office
The Multi-Billion-Dollar Piracy Industry with Tom Galvin of Digital Citizens Alliance, The Illusion of More Podcast
Education
Copyright Cortex
A Bibliography on the Origins of Copyright and Droit d'Auteur
MIT OpenCourseWare 6.912 Introduction to Copyright Law Free self-study course with video lectures as offered during the January 2006, Independent Activities Period (IAP)
US
Copyright Law of the United States Documents, US Government
Compendium of Copyright Practices (3rd ed.) United States Copyright Office
Copyright from UCB Libraries GovPubs
Early Copyright Records From the Rare Book and Special Collections Division at the Library of Congress
UK
Copyright: Detailed information at the UK Intellectual Property Office
Fact sheet P-01: UK copyright law (Issued April 2000, amended 25 November 2020) at the UK Copyright Service
5282 | https://en.wikipedia.org/wiki/Catalan%20language | Catalan language | Catalan, known in the Valencian Community and Carche as Valencian, is a Western Romance language. It is the official language of Andorra, and an official language of two autonomous communities in eastern Spain: Catalonia and the Balearic Islands. It is also an official language in Valencia, where it is called Valencian. It has semi-official status in the Italian comune of Alghero, and it is spoken in the Pyrénées-Orientales department of France and in two further areas in eastern Spain: the eastern strip of Aragon and the Carche area in the Region of Murcia. The Catalan-speaking territories are often called the "Catalan Countries".
The language evolved from Vulgar Latin in the Middle Ages around the eastern Pyrenees. Nineteenth-century Spain saw a Catalan literary revival, culminating in the early 1900s.
Etymology and pronunciation
The word Catalan is derived from the territorial name of Catalonia, itself of disputed etymology. The main theory suggests that (Latin Gathia Launia) derives from the name Gothia or Gauthia ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland > Gothlandia > Gothalania > Catalonia theoretically derived.
In English, the term referring to a person first appears in the mid-14th century as Catelaner, followed in the 15th century by Catellain (from French). It is attested as a language name since at least 1652. The word Catalan has more than one accepted pronunciation in English.
The endonym is pronounced in the Eastern Catalan dialects, and in the Western dialects. In the Valencian Community and Carche, the term is frequently used instead. Thus, the name "Valencian", although often employed for referring to the varieties specific to the Valencian Community and Carche, is also used by Valencians as a name for the language as a whole, synonymous with "Catalan". Both uses of the term have their respective entries in the dictionaries by the Acadèmia Valenciana de la Llengua and the Institut d'Estudis Catalans. See also status of Valencian below.
History
Middle Ages
By the 9th century, Catalan had evolved from Vulgar Latin on both sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the separation of the County of Barcelona from the Carolingian Empire in 988.
In the 11th century, documents written in macaronic Latin begin to show Catalan elements, with texts written almost completely in Romance appearing by 1080. Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries.
During the 11th and 12th centuries the Catalan rulers expanded southward to the Ebro river, and in the 13th century they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th century.
In the Low Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness. Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries), and the Valencian school of poetry culminating in Ausiàs March (1397–1459). By the 15th century, the city of Valencia had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world. During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what Costa Carreras terms "one of the 'great languages' of medieval Europe".
Martorell's outstanding novel of chivalry Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan.
Start of the modern era
Spain
With the union of the crowns of Castile and Aragon in 1479, the Spanish kings ruled over different kingdoms, each with its own cultural, linguistic and political particularities, and they had to swear by the laws of each territory before the respective parliaments. But after the War of the Spanish Succession, Spain became an absolute monarchy under Philip V, which led to the assimilation of the Crown of Aragon by the Crown of Castile through the Nueva Planta decrees, as a first step in the creation of the Spanish nation-state; as in other contemporary European states, this meant the imposition of the political and cultural characteristics of the dominant groups. Since the political unification of 1714, Spanish assimilation policies towards national minorities have been a constant.
The process of assimilation began with secret instructions to the corregidores of the Catalan territory: they "will take the utmost care to introduce the Castilian language, for which purpose he will give the most temperate and disguised measures so that the effect is achieved, without the care being noticed." From there, actions in the service of assimilation, discreet or aggressive, continued, extending to the smallest details, such as the Royal Certificate of 1799 forbidding anyone to "represent, sing and dance pieces that were not in Spanish." In any case, the use of Spanish gradually became more prestigious and marked the start of the decline of Catalan. Starting in the 16th century, Catalan literature came under the influence of Spanish, and the nobility and parts of the urban and literary classes became bilingual.
France
With the Treaty of the Pyrenees (1659), Spain ceded the northern part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French, which in 1700 became the sole official language of the region.
Shortly after the French Revolution (1789), the French First Republic prohibited official use of, and enacted discriminating policies against, the regional languages of France, such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque.
France: 19th to 20th century
Following the French establishment of the colony of Algeria from 1830 onward, it received several waves of Catalan-speaking settlers. People from the Spanish Alicante province settled around Oran, whereas Algiers received immigration from Northern Catalonia and Menorca.
Their speech was known as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant.
The government of France formally recognizes only French as an official language. Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales officially recognized Catalan as one of the languages of the department and seeks to further promote it in public life and education.
Spain: 18th to 20th century
In 1807, the Statistics Office of the French Ministry of the Interior asked the prefects for an official survey on the limits of the French language. The survey found that in Roussillon, almost only Catalan was spoken, and since Napoleon wanted to incorporate Catalonia into France, as happened in 1812, the consul in Barcelona was also asked. He declared that Catalan "is taught in schools, it is printed and spoken, not only among the lower class, but also among people of first quality, also in social gatherings, as in visits and congresses", indicating that it was spoken everywhere "with the exception of the royal courts". He also indicated that Catalan was spoken "in the Kingdom of Valencia, in the islands of Mallorca, Menorca, Ibiza, Sardinia, Corsica and much of Sicily, in the Vall d'Aran and Cerdaña".
The defeat of the pro-Habsburg coalition in the War of Spanish Succession (1714) initiated a series of laws which, among other centralizing measures, imposed the use of Spanish in legal documentation all over Spain. Because of this, use of the Catalan language declined into the 18th century.
However, the 19th century saw a Catalan literary revival, which has continued up to the present day. This period starts with Aribau's Ode to the Homeland (1833), followed in the second half of the 19th century and the early 20th by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In the 19th century, the region of Carche, in the province of Murcia, was repopulated with Valencian speakers. Catalan spelling was standardized in 1913 and the language became official during the Second Spanish Republic (1931–1939). The Second Spanish Republic saw a brief period of tolerance, with most restrictions against Catalan lifted. The Generalitat (the autonomous government of Catalonia, established during the Republic in 1931) made normal use of Catalan in its administration and made efforts to promote it at the social level, including in schools and at the University of Barcelona.
The Catalan language and culture were still vibrant during the Spanish Civil War (1936–1939), but were crushed to an unprecedented degree throughout the subsequent decades by the Francoist dictatorship (1939–1975), which abolished the official status of Catalan and imposed the use of Spanish in schools and in public administration in all of Spain, while banning the use of Catalan in them. Between 1939 and 1943, newspapers and book printing in Catalan almost disappeared. Francisco Franco's desire for a homogeneous Spanish population resonated with some Catalans in favor of his regime, primarily members of the upper class, who began to reject the use of Catalan. Despite all of these hardships, Catalan continued to be used privately within households, and it was able to survive Franco's dictatorship. At the end of World War II, however, some of the harsh measures began to be lifted and, while Spanish remained the only promoted language, a limited amount of Catalan literature began to be tolerated. Several prominent Catalan authors resisted the suppression through literature. Privately organized contests were created to reward works in Catalan, among them the Joan Martorell prize (1947), the Víctor Català prize (1953), the Carles Riba award (1950), and the Honor Award of Catalan Letters (1969). The first Catalan-language TV show was broadcast in 1964. At the same time, oppression of the Catalan language and identity was carried out in schools, through governmental bodies, and in religious centers.
In addition to the loss of prestige for Catalan and its prohibition in schools, migration during the 1950s into Catalonia from other parts of Spain also contributed to the diminished use of the language. These migrants were often unaware of the existence of Catalan, and thus felt no need to learn or use it. Catalonia was the economic powerhouse of Spain, so these migrations continued to occur from all corners of the country. Employment opportunities were reduced for those who were not bilingual. Daily newspapers remained exclusively in Spanish until after Franco's death, when the first one in Catalan since the end of the Civil War, Avui, began to be published in 1976.
Present day
Since the Spanish transition to democracy (1975–1982), Catalan has been institutionalized as an official language, a language of education, and a language of mass media, all of which have contributed to its increased prestige. In Catalonia, there is a large bilingual non-state linguistic community that is unparalleled in Europe. The teaching of Catalan is mandatory in all schools, but it is possible to use Spanish for studying in the public education system of Catalonia in two situations: if the teacher assigned to a class chooses to use Spanish, or during the learning process of one or more recently arrived immigrant students. There is also some intergenerational shift towards Catalan.
More recently, several Spanish political forces have tried to increase the use of Spanish in the Catalan educational system. As a result, in May 2022 the Spanish Supreme Court urged the Catalan regional government to enforce a measure by which 25% of all lessons must be taught in Spanish.
According to the Statistical Institute of Catalonia, in 2013 Catalan was the second most commonly used language in Catalonia, after Spanish, as a native or self-identified language: 7% of the population self-identified with both Catalan and Spanish equally, 36.4% with Catalan and 47.5% with Spanish only. In 2003 the same studies found no language preference for self-identification within the population above 15 years old: 5% self-identified with both languages, 44.3% with Catalan and 47.5% with Spanish. To promote the use of Catalan, the Generalitat de Catalunya (Catalonia's official autonomous government) spends part of its annual budget on promoting the use of Catalan in Catalonia and in other territories, through entities such as the Consorci per a la Normalització Lingüística (Consortium for Linguistic Normalization).
In Andorra, Catalan has always been the sole official language. Since the promulgation of the 1993 constitution, several policies favoring Catalan have been enforced, like Catalan medium education.
On the other hand, there are several language shift processes currently taking place. In the Northern Catalonia area of France, Catalan has followed the same trend as the other minority languages of France, with most of its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of the primary education students, and by 15% of the secondary. The cultural association promotes a network of community-run schools engaged in Catalan language immersion programs.
In Alicante province, Catalan is being replaced by Spanish and in Alghero by Italian. There is also well ingrained diglossia in the Valencian Community, Ibiza, and to a lesser extent, in the rest of the Balearic islands.
During the 20th century many Catalans emigrated or went into exile to Venezuela, Mexico, Cuba, Argentina, and other South American countries. They formed a large number of Catalan colonies that today continue to maintain the Catalan language. They also founded many Catalan casals (associations).
Classification and relationship with other Romance languages
One classification of Catalan is given by Pèire Bèc:
Romance languages
Italo-Western languages
Western Romance languages
Gallo-Iberian languages
Gallo-Romance languages
Occitano-Romance languages
Catalan language
However, the ascription of Catalan to the Occitano-Romance branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones, such as Ramón Menéndez Pidal.
Catalan bears varying degrees of similarity to the linguistic varieties subsumed under the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus, as should be expected from closely related languages, Catalan today shares many traits with other Romance languages.
Relationship with other Romance languages
Some include Catalan in Occitan, as the linguistic distance between this language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. Catalan was considered a dialect of Occitan until the end of the 19th century and still today remains its closest relative.
Catalan shares many traits with the other neighboring Romance languages (Occitan, French, Italian, Sardinian as well as Spanish and Portuguese among others). However, despite being spoken mostly on the Iberian Peninsula, Catalan has marked differences with the Iberian Romance group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary; it shows instead its closest affinity with languages native to France and northern Italy, particularly Occitan and to a lesser extent Gallo-Romance (Franco-Provençal, French, Gallo-Italian).
According to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with Portuguese and Spanish; 76% with Ladin and Romansh; 75% with Sardinian; and 73% with Romanian.
During much of its history, and especially during the Francoist dictatorship (1939–1975), the Catalan language was ridiculed as a mere dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish and Catalan have important differences in their sound systems, lexicon, and grammatical features; in these respects, Catalan is closer to Occitan (and French).
There is evidence that, at least from the 2nd century , the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation arose generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish , Asturian and Portuguese vs. Catalan , Occitan "to boil") and innovatory regionalisms (Sp , Ast vs. Cat , Oc "bullock"), while Catalan has a shared history with the Western Romance innovative core, especially Occitan.
Like all Romance languages, Catalan has a handful of native words which are unique to it, or rare elsewhere. These include:
verbs: 'to fasten; transfix' > 'to compose, write up', > 'to combine, conjugate', > 'to wake; awaken', 'to thicken; crowd together' > 'to save, keep', > 'to miss, yearn, pine for', 'to investigate, track' > Old Catalan enagar 'to incite, induce', > OCat ujar 'to exhaust, fatigue', > 'to appease, mollify', > 'to reject, refuse';
nouns: > 'pomace', > 'reedmace', > 'catarrh', > 'snowdrift', > 'ardor, passion', > 'brake', > 'avalanche', > 'edge, border', 'sawfish' > pestriu > 'thresher shark, smooth hound; ray', 'live coal' > 'spark', > tardaó > 'autumn'.
The Gothic superstrate produced different outcomes in Spanish and Catalan. For example, Catalan "mud" and "to roast", of Germanic origin, contrast with Spanish and , of Latin origin; whereas Catalan "spinning wheel" and "temple", of Latin origin, contrast with Spanish and , of Germanic origin.
The same happens with Arabic loanwords. Thus, Catalan "large earthenware jar" and "tile", of Arabic origin, contrast with Spanish and , of Latin origin; whereas Catalan "oil" and "olive", of Latin origin, contrast with Spanish and . However, the Arabic element in Spanish is generally much more prevalent.
Situated between two large linguistic blocks (Iberian Romance and Gallo-Romance), Catalan has many unique lexical choices, such as "to miss somebody", "to calm somebody down", and "reject".
Geographic distribution
Catalan-speaking territories
Traditionally Catalan-speaking territories are sometimes called the (Catalan Countries), a denomination based on cultural affinity and common heritage, that has also had a subsequent political interpretation but no official status. Various interpretations of the term may include some or all of these regions.
Number of speakers
The number of people known to be fluent in Catalan varies depending on the sources used. A 2004 study did not count the total number of speakers, but estimated a total of 9–9.5 million by matching the percentage of speakers to the population of each area where Catalan is spoken. The web site of the Generalitat de Catalunya estimated that as of 2004 there were 9,118,882 speakers of Catalan. These figures only reflect potential speakers; today it is the native language of only 35.6% of the Catalan population. According to Ethnologue, Catalan had 4.1 million native speakers and 5.1 million second-language speakers in 2021.
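The arithmetic behind such estimates is a population-weighted sum over the Catalan-speaking areas. The sketch below is only an illustration of that method, not the study's actual computation; the function name and all figures are hypothetical placeholders.

def estimate_speakers(regions):
    # Each entry pairs a region's population with the share of that
    # population reported to speak the language (0.0 to 1.0).
    return sum(population * share for population, share in regions)

# Hypothetical example values, for illustration only.
regions = [
    (7_500_000, 0.80),
    (5_000_000, 0.50),
    (1_100_000, 0.60),
]
print(estimate_speakers(regions))  # 9,160,000 with these made-up figures

A figure obtained this way counts potential speakers rather than habitual users, which is why, as noted above, it exceeds the number of native speakers.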
According to a 2011 study the total number of Catalan speakers is over 9.8 million, with 5.9 million residing in Catalonia. More than half of them speak Catalan as a second language, with native speakers being about 4.4 million of those (more than 2.8 in Catalonia). Very few Catalan monoglots exist; basically, virtually all of the Catalan speakers in Spain are bilingual speakers of Catalan and Spanish, with a sizable population of Spanish-only speakers of immigrant origin (typically born outside Catalonia or whose parents were both born outside Catalonia) existing in the major Catalan urban areas as well.
In Roussillon, only a minority of French Catalans speak Catalan nowadays, with French being the majority language for the inhabitants after a continued process of language shift. According to a 2019 survey by the Catalan government, 31.5% of the inhabitants of Catalonia have Catalan as first language at home whereas 52.7% have Spanish, 2.8% both Catalan and Spanish and 10.8% other languages.
Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. According to this 2013 census, Catalan is also very commonly spoken in the city of 1,501,262 inhabitants: it is understood by 95% of the population, while 72.3% of those over the age of 2 can speak it (1,137,816), 79% can read it (1,246,555), and 53% can write it (835,080). The proportion in Barcelona who can speak it, 72.3%, is lower than that of the overall Catalan population, of whom 81.2% over the age of 15 speak the language. Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system. An important social characteristic of the Catalan language is that all the areas where it is spoken are bilingual in practice: with French in Roussillon, with Italian in Alghero, with Spanish and French in Andorra, and with Spanish in the rest of the territories.
1. The number of people who understand Catalan includes those who can speak it.
2. Figures relate to all self-declared capable speakers, not just native speakers.
Level of knowledge
(% of the population 15 years old and older).
Social use
(% of the population 15 years old and older).
Native language
Phonology
Catalan phonology varies by dialect. Notable features include:
Marked contrast of the vowel pairs and , as in other Western Romance languages, other than Spanish.
Lack of diphthongization of Latin short , , as in Galician and Portuguese, but unlike French, Spanish, or Italian.
Abundance of diphthongs containing , as in Galician and Portuguese.
In contrast to other Romance languages, Catalan has many monosyllabic words, and these may end in a wide variety of consonants, including some consonant clusters. Additionally, Catalan has final obstruent devoicing, which gives rise to an abundance of such couplets as ("male friend") vs. ("female friend").
Central Catalan pronunciation is considered to be standard for the language. The descriptions below are mostly representative of this variety. For the differences in pronunciation between the different dialects, see the section on pronunciation of dialects in this article.
Vowels
Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: , a common feature in Western Romance, with the exception of Spanish. Balearic also has instances of stressed . Dialects differ in the different degrees of vowel reduction, and the incidence of the pair .
In Central Catalan, unstressed vowels reduce to three: ; ; remains distinct. The other dialects have different vowel reduction processes (see the section pronunciation of dialects in this article).
Consonants
The consonant system of Catalan is rather conservative.
has a velarized allophone in syllable coda position in most dialects. However, is velarized irrespective of position in Eastern dialects like Majorcan and standard Eastern Catalan.
occurs in Balearic, Algherese, standard Valencian and some areas in southern Catalonia. It has merged with elsewhere.
Voiced obstruents undergo final-obstruent devoicing: .
Voiced stops become lenited to approximants in syllable onsets, after continuants: > , > , > . Exceptions include after lateral consonants, and after . In coda position, these sounds are realized as stops, except in some Valencian dialects where they are lenited.
There is some confusion in the literature about the precise phonetic characteristics of , , , . Some sources describe them as "postalveolar". Others as "back alveolo-palatal", implying that the characters would be more accurate. However, in all literature only the characters for palato-alveolar affricates and fricatives are used, even when the same sources use for other languages like Polish and Chinese.
The distribution of the two rhotics and closely parallels that of Spanish. Between vowels, the two contrast, but they are otherwise in complementary distribution: in the onset of the first syllable in a word, appears unless preceded by a consonant. Dialects vary in regards to rhotics in the coda with Western Catalan generally featuring and Central Catalan dialects featuring a weakly trilled unless it precedes a vowel-initial word in the same prosodic unit, in which case appears.
In careful speech, , , may be geminated. Geminated may also occur. Some analyze intervocalic as the result of gemination of a single rhotic phoneme. This is similar to the common analysis of Spanish and Portuguese rhotics.
Phonological evolution
Sociolinguistics
Catalan sociolinguistics studies the situation of Catalan in the world and the different varieties that this language presents. It is a subdiscipline of Catalan philology and related studies, and its objective is to analyze the relationship between the Catalan language, its speakers, and the surrounding social reality (including contact with other languages).
Preferential subjects of study
Dialects of Catalan
Variations of Catalan by class, gender, profession, age and level of education
Process of linguistic normalization
Relations between Catalan and Spanish or French
Perception on the language of Catalan speakers and non-speakers
Presence of Catalan in several fields: tagging, public function, media, professional sectors
Dialects
Overview
The dialects of the Catalan language feature relative uniformity, especially when compared to other Romance languages, in terms of vocabulary, semantics, syntax, morphology, and phonology. Mutual intelligibility between dialects is very high, with estimates ranging from 90% to 95%. The only exception is the isolated, idiosyncratic Algherese dialect.
Catalan is split into two major dialectal blocks: Eastern and Western. The main difference lies in the treatment of unstressed and , which have merged in Eastern dialects but remain distinct in Western dialects. There are a few other differences in pronunciation, verbal morphology, and vocabulary.
Western Catalan comprises the two dialects of Northwestern Catalan and Valencian; the Eastern block comprises four dialects: Central Catalan, Balearic, Rossellonese, and Algherese. Each dialect can be further subdivided in several subdialects. The terms "Catalan" and "Valencian" (respectively used in Catalonia and the Valencian Community) refer to two varieties of the same language. There are two institutions regulating the two standard varieties, the Institute of Catalan Studies in Catalonia and the Valencian Academy of the Language in the Valencian Community.
Central Catalan is considered the standard pronunciation of the language and has the largest number of speakers. It is spoken in the densely populated regions of the Barcelona province, the eastern half of the province of Tarragona, and most of the province of Girona.
Catalan has an inflectional grammar. Nouns have two genders (masculine, feminine), and two numbers (singular, plural). Pronouns additionally can have a neuter gender, and some are also inflected for case and politeness, and can be combined in very complex ways. Verbs are split in several paradigms and are inflected for person, number, tense, aspect, mood, and gender. In terms of pronunciation, Catalan has many words ending in a wide variety of consonants and some consonant clusters, in contrast with many other Romance languages.
Pronunciation
Vowels
Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: , a common feature in Western Romance, except Spanish. Balearic has also instances of stressed . Dialects differ in the different degrees of vowel reduction, and the incidence of the pair .
In Eastern Catalan (except Majorcan), unstressed vowels reduce to three: ; ; remains distinct. There are a few instances of unreduced , in some words. Algherese has lowered to .
In Majorcan, unstressed vowels reduce to four: follow the Eastern Catalan reduction pattern; however reduce to , with remaining distinct, as in Western Catalan.
In Western Catalan, unstressed vowels reduce to five: ; ; remain distinct. This reduction pattern, inherited from Proto-Romance, is also found in Italian and Portuguese. Some Western dialects present further reduction or vowel harmony in some cases.
Central, Western, and Balearic differ in the lexical incidence of stressed and . Usually, words with in Central Catalan correspond to in Balearic and in Western Catalan. Words with in Balearic almost always have in Central and Western Catalan as well. As a result, Central Catalan has a much higher incidence of .
Consonants
Morphology
Western Catalan: In verbs, the ending for 1st-person present indicative is in verbs of the 1st conjugation and -∅ in verbs of the 2nd and 3rd conjugations in most of the Valencian Community, or in all verb conjugations in the Northern Valencian Community and Western Catalonia.E.g. , , (Valencian); , , (Northwestern Catalan).
Eastern Catalan: In verbs, the ending for 1st-person present indicative is , , or -∅ in all conjugations. E.g. (Central), (Balearic), and (Northern), all meaning ('I speak').
Western Catalan: In verbs, the inchoative endings are /, , , /.
Eastern Catalan: In verbs, the inchoative endings are , , , .
Western Catalan: In nouns and adjectives, maintenance of of medieval plurals in proparoxytone words.E.g. 'men', 'youth'.
Eastern Catalan: In nouns and adjectives, loss of of medieval plurals in proparoxytone words.E.g. 'men', 'youth' (Ibicencan, however, follows the model of Western Catalan in this case).
Vocabulary
Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within either of the two groups can usually be explained as an archaism. Also, Central Catalan usually acts as the innovative element.
Standards
Standard Catalan, accepted by virtually all speakers, is mostly based on Eastern Catalan, which is the most widely used dialect. Nevertheless, the standards of the Valencian Community and the Balearics admit alternative forms, mostly traditional ones, which are not current in eastern Catalonia.
The most notable difference between both standards is some tonic accentuation, for instance: (IEC) – (AVL). Nevertheless, AVL's standard keeps the grave accent , while pronouncing it as rather than , in some words like: ('what'), or . Other divergences include the use of (AVL) in some words instead of like in / ('almond'), / ('back'), the use of elided demonstratives ( 'this', 'that') in the same level as reinforced ones () or the use of many verbal forms common in Valencian, and some of these common in the rest of Western Catalan too, like subjunctive mood or inchoative conjugation in at the same level as or the priority use of morpheme in 1st person singular in present indicative ( verbs): instead of ('I buy').
In the Balearic Islands, IEC's standard is used but adapted for the Balearic dialect by the University of the Balearic Islands's philological section. In this way, for instance, IEC says it is correct writing as much as ('we sing'), but the university says that the priority form in the Balearic Islands must be in all fields. Another feature of the Balearic standard is the non-ending in the 1st person singular present indicative: ('I buy'), ('I fear'), ('I sleep').
In Alghero, the IEC has adapted its standard to the Algherese dialect of Sardinia. In this standard one can find, among other features: the definite article instead of , special possessive pronouns and determinants ('mine'), ('his/her'), ('yours'), and so on, the use of in the imperfect tense in all conjugations: , , ; the use of many archaic words, usual words in Algherese: instead of ('less'), instead of ('someone'), instead of ('which'), and so on; and the adaptation of weak pronouns. In 1999, Catalan (Algherese dialect) was among the twelve minority languages officially recognized as Italy's "historical linguistic minorities" by the Italian State under Law No. 482/1999.
In 2011, the Aragonese government passed a decree approving the statutes of a new language regulator of Catalan in La Franja (the so-called Catalan-speaking areas of Aragon) as originally provided for by Law 10/2009. The new entity, designated as , shall allow a facultative education in Catalan and a standardization of the Catalan language in La Franja.
Status of Valencian
Valencian is classified as a Western dialect, along with the northwestern varieties spoken in Western Catalonia (provinces of Lleida and the western half of Tarragona). Central Catalan has 90% to 95% inherent intelligibility for speakers of Valencian.
Linguists, including Valencian scholars, deal with Catalan and Valencian as the same language. The official regulating body of the language of the Valencian Community, the Valencian Academy of Language (Acadèmia Valenciana de la Llengua, AVL) declares the linguistic unity between Valencian and Catalan varieties.
The AVL, created by the Valencian parliament, is in charge of dictating the official rules governing the use of Valencian, and its standard is based on the Norms of Castelló (Normes de Castelló). Currently, everyone who writes in Valencian uses this standard, except the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian an independent standard.
Despite the position of the official organizations, an opinion poll carried out between 2001 and 2004 showed that the majority of the Valencian people consider Valencian different from Catalan. This position is promoted by people who do not use Valencian regularly. Furthermore, the data indicates that younger generations educated in Valencian are much less likely to hold these views. A minority of Valencian scholars active in fields other than linguistics defends the position of the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian a standard independent from Catalan.
This clash of opinions has sparked much controversy. For example, during the drafting of the European Constitution in 2004, the Spanish government supplied the EU with translations of the text into Basque, Galician, Catalan, and Valencian, but the latter two were identical.
Vocabulary
Word choices
Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within either of the two groups can usually be explained as an archaism. Also, Central Catalan usually acts as the innovative element.
Literary Catalan allows the use of words from different dialects, except those of very restricted use. However, from the 19th century onwards, there has been a tendency towards favoring words of Northern dialects to the detriment of others.
Latin and Greek loanwords
Like other languages, Catalan has a large list of loanwords from Greek and Latin. This process started very early, and one can find such examples in Ramon Llull's work. In the 14th and 15th centuries Catalan had a far greater number of Greco-Latin loanwords than other Romance languages, as is attested for example in Roís de Corella's writings. The incorporation of learned, or "bookish" words from its own ancestor language, Latin, into Catalan is arguably another form of lexical borrowing through the influence of written language and the liturgical language of the Church. Throughout the Middle Ages and into the early modern period, most literate Catalan speakers were also literate in Latin; and thus they easily adopted Latin words into their writing—and eventually speech—in Catalan.
Word formation
The process of morphological derivation in Catalan follows the same principles as the other Romance languages, where agglutination is common. Many times, several affixes are appended to a preexisting lexeme, and some sound alternations can occur, for example ("electrical") vs. . Prefixes are usually appended to verbs, as in ("foresee").
There is greater regularity in the process of word-compounding, where one can find compounded words formed much like those in English.
Writing system
Catalan uses the Latin script, with some added symbols and digraphs. The Catalan orthography is systematic and largely phonologically based. Standardization of Catalan was among the topics discussed during the First International Congress of the Catalan Language, held in Barcelona in October 1906. Subsequently, the Philological Section of the Institut d'Estudis Catalans (IEC, founded in 1911) published the Normes ortogràfiques in 1913 under the direction of Antoni Maria Alcover and Pompeu Fabra. In 1932, Valencian writers and intellectuals gathered in Castelló de la Plana to make a formal adoption of the so-called Normes de Castelló, a set of guidelines following Pompeu Fabra's Catalan language norms.
Grammar
The grammar of Catalan is similar to other Romance languages. Features include:
Use of definite and indefinite articles.
Nouns, adjectives, pronouns, and articles are inflected for gender (masculine and feminine), and number (singular and plural). There is no case inflexion, except in pronouns.
Verbs are highly inflected for person, number, tense, aspect, and mood (including a subjunctive).
There are no modal auxiliaries.
Word order is freer than in English.
Gender and number inflection
In gender inflection, the most notable feature (compared to Portuguese, Spanish, or Italian) is the loss of the typical masculine suffix . Thus, the alternation of / has been replaced by ø/. There are only a few exceptions, like / ("scarce"). Many not completely predictable morphological alternations may occur, such as:
Affrication: / ("insane") vs. / ("ugly")
Loss of : / ("flat") vs. / ("second")
Final obstruent devoicing: / ("felt") vs. / ("said")
Catalan has few suppletive couplets, like Italian and Spanish, and unlike French. Thus, Catalan has / ("boy"/"girl") and / ("cock"/"hen"), whereas French has / and /.
There is a tendency to abandon traditionally gender-invariable adjectives in favor of marked ones, something prevalent in Occitan and French. Thus, one can find / ("boiling") in contrast with traditional /.
As in the other Western Romance languages, the main plural expression is the suffix , which may create morphological alternations similar to the ones found in gender inflection, albeit more rarely. The most important one is the addition of before certain consonant groups, a phonetic phenomenon that does not affect feminine forms: / ("the pulse"/"the pulses") vs. / ("the dust"/"the dusts").
Determiners
The inflection of determiners is complex, especially because of the high number of elisions, but is similar to that of the neighboring languages. Catalan has more contractions of preposition + article than Spanish, like ("of + the [plural]"), but not as many as Italian (which has , , , etc.).
Central Catalan has abandoned almost completely unstressed possessives (, etc.) in favor of constructions of article + stressed forms (, etc.), a feature shared with Italian.
Personal pronouns
The morphology of Catalan personal pronouns is complex, especially in unstressed forms, which are numerous (13 distinct forms, compared to 11 in Spanish or 9 in Italian). Features include the gender-neutral and the great degree of freedom when combining different unstressed pronouns (65 combinations).
Catalan pronouns exhibit T–V distinction, like all other Romance languages (and most European languages, but not Modern English). This feature implies the use of a different set of second person pronouns for formality.
This flexibility allows Catalan to use extraposition extensively, much more than French or Spanish. Thus, Catalan can have ("they recommended me to him"), whereas in French one must say , and Spanish . This allows the placement of almost any nominal term as a sentence topic, without having to use the passive voice as often (as in French or English), or to identify the direct object with a preposition (as in Spanish).
Verbs
As in all the Romance languages, Catalan verbal inflection is more complex than the nominal. Suffixation is omnipresent, whereas morphological alternations play a secondary role. Vowel alternations are active, as well as infixation and suppletion. However, these are not as productive as in Spanish, and are mostly restricted to irregular verbs.
The Catalan verbal system is basically common to all Western Romance, except that most dialects have replaced the synthetic indicative perfect with a periphrastic form of ("to go") + infinitive.
Catalan verbs are traditionally divided into three conjugations, with vowel themes , , , the last two being split into two subtypes. However, this division is mostly theoretical. Only the first conjugation is nowadays productive (with about 3500 common verbs), whereas the third (the subtype of , with about 700 common verbs) is semiproductive. The verbs of the second conjugation are fewer than 100, and it is not possible to create new ones, except by compounding.
Syntax
The grammar of Catalan follows the general pattern of Western Romance languages. The primary word order is subject–verb–object. However, word order is very flexible. Commonly, verb-subject constructions are used to achieve a semantic effect. The sentence "The train has arrived" could be translated as or . Both sentences mean "the train has arrived", but the former puts a focus on the train, while the latter puts a focus on the arrival. This subtle distinction is described as "what you might say while waiting in the station" versus "what you might say on the train."
Catalan names
In Spain, every person officially has two surnames, one of which is the father's first surname and the other is the mother's first surname. The law contemplates the possibility of joining both surnames with the Catalan conjunction i ("and").
Sample text
Selected text from Manuel de Pedrolo's 1970 novel ("A love affair outside the city").
See also
Organizations
Institut d'Estudis Catalans (Catalan Studies Institute)
Acadèmia Valenciana de la Llengua (Valencian Academy of the Language)
Òmnium Cultural
Plataforma per la Llengua
Scholars
Marina Abràmova
Germà Colón
Dominique de Courcelles
Martí de Riquer
Arthur Terry
Lawrence Venuti
Other
Languages of Catalonia
Linguistic features of Spanish as spoken by Catalan speakers
Languages of France
Languages of Italy
Languages of Spain
Normes de Castelló
Pompeu Fabra
Notes
References
Works cited
External links
Institutions
Consorci per a la Normalització Lingüística
Institut d'Estudis Catalans
Acadèmia Valenciana de la Llengua
About the Catalan language
llengua.gencat.cat, by the Government of Catalonia
Gramàtica de la Llengua Catalana (Catalan grammar), from the Institute for Catalan Studies
Gramàtica Normativa Valenciana (2006, Valencian grammar), from the Acadèmia Valenciana de la Llengua
verbs.cat (Catalan verb conjugations with online trainers)
Catalan and its dialects
LEXDIALGRAM – online portal of 19th-century dialectal lexicographical and grammatical works of Catalan hosted by the University of Barcelona
Monolingual dictionaries
DIEC2, from the Institut d'Estudis Catalans
Gran Diccionari de la Llengua Catalana, from Enciclopèdia Catalana
Diccionari Català-Valencià-Balear d'Alcover i Moll, from the Institut d'Estudis Catalans
Diccionari Normatiu Valencià (AVL), from the Acadèmia Valenciana de la Llengua
diccionarivalencia.com (online Valencian dictionary)
Diccionari Invers de la Llengua Catalana (dictionary of Catalan words spelled backwards)
Bilingual and multilingual dictionaries
Diccionari de la Llengua Catalana Multilingüe (Catalan ↔ English, French, German and Spanish), from Enciclopèdia Catalana
DACCO – open source, collaborative dictionary (Catalan–English)
Automated translation systems
Traductor automated, online translations of text and web pages (Catalan ↔ English, French and Spanish), from gencat.cat by the Government of Catalonia
Phrasebooks
Catalan phrasebook on Wikivoyage
Learning resources
Catalan Swadesh list of basic vocabulary words, from Wiktionary's Swadesh-list appendix
Catalan-language online encyclopedia
Enciclopèdia Catalana
5288 | https://en.wikipedia.org/wiki/Classical%20period%20%28music%29 | Classical period (music) | The Classical period was an era of classical music between roughly 1750 and 1820.
The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music, but a more varied use of musical form, which is, in simpler terms, the overall structure and organization of a given piece of music. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. It also makes use of the style galant, which emphasized light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before, and the orchestra increased in size, range, and power.
The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer (hence the original name "fortepiano," literally "loud soft") and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period.
The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other names in this period include: Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Etienne Nicolas Mehul, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Gottlieb Graun, Carl Heinrich Graun, Franz Benda, Georg Anton Benda, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, John Field, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism (), since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna.
Classicism
In the middle of the 18th century, Europe began to move toward a new style in architecture, literature, and the arts, generally known as Neoclassicism. This style sought to emulate the ideals of Classical antiquity, especially those of Classical Greece. Classical music used formality and emphasis on order and hierarchy, and a "clearer", "cleaner" style that used clearer divisions between parts (notably a clear, single melody accompanied by chords), brighter contrasts and "tone colors" (achieved by the use of dynamic changes and modulations to more keys). In contrast with the richly layered music of the Baroque era, Classical music moved towards simplicity rather than complexity. In addition, the typical size of orchestras began to increase, giving orchestras a more powerful sound.
The remarkable development of ideas in "natural philosophy" had already established itself in the public consciousness. In particular, Newton's physics was taken as a paradigm: structures should be well-founded in axioms and be both well-articulated and orderly. This taste for structural clarity began to affect music, which moved away from the layered polyphony of the Baroque period toward a style known as homophony, in which the melody is played over a subordinate harmony. This move meant that chords became a much more prevalent feature of music, even if they interrupted the melodic smoothness of a single part. As a result, the tonal structure of a piece of music became more audible.
The new style was also encouraged by changes in the economic order and social structure. As the 18th century progressed, the nobility became the primary patrons of instrumental music, while public taste increasingly preferred lighter, funny comic operas. This led to changes in the way music was performed, the most crucial of which was the move to standard instrumental groups and the reduction in the importance of the continuo—the rhythmic and harmonic groundwork of a piece of music, typically played by a keyboard (harpsichord or organ) and usually accompanied by a varied group of bass instruments, including cello, double bass, bass viol, and theorbo. One way to trace the decline of the continuo and its figured chords is to examine the disappearance of the term obbligato, meaning a mandatory instrumental part in a work of chamber music. In Baroque compositions, additional instruments could be added to the continuo group according to the group or leader's preference; in Classical compositions, all parts were specifically noted, though not always notated, so the term "obbligato" became redundant. By 1800, basso continuo was practically extinct, except for the occasional use of a pipe organ continuo part in a religious Mass in the early 1800s.
Economic changes also had the effect of altering the balance of availability and quality of musicians. While in the late Baroque, a major composer would have the entire musical resources of a town to draw on, the musical forces available at an aristocratic hunting lodge or small court were smaller and more fixed in their level of ability. This was a spur to having simpler parts for ensemble musicians to play, and in the case of a resident virtuoso group, a spur to writing spectacular, idiomatic parts for certain instruments, as in the case of the Mannheim orchestra, or virtuoso solo parts for particularly skilled violinists or flautists. In addition, the appetite of audiences for a continual supply of new music carried over from the Baroque. This meant that works had to be performable with, at best, one or two rehearsals. Even after 1790, Mozart wrote about "the rehearsal", with the implication that his concerts would have only one rehearsal.
Since there was a greater emphasis on a single melodic line, there was greater emphasis on notating that line for dynamics and phrasing. This contrasts with the Baroque era, when melodies were typically written with no dynamics, phrasing marks or ornaments, as it was assumed that the performer would improvise these elements on the spot. In the Classical era, it became more common for composers to indicate where they wanted performers to play ornaments such as trills or turns. The simplification of texture made such instrumental detail more important, and also made the use of characteristic rhythms, such as attention-getting opening fanfares, the funeral march rhythm, or the minuet genre, more important in establishing and unifying the tone of a single movement.
The Classical period also saw the gradual development of sonata form, a set of structural principles for music that reconciled the Classical preference for melodic material with harmonic development, which could be applied across musical genres. The sonata itself continued to be the principal form for solo and chamber music, while later in the Classical period the string quartet became a prominent genre. The symphony form for orchestra was created in this period (this is popularly attributed to Joseph Haydn). The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto, featuring only one soloist. Composers began to place more importance on the particular soloist's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. Nonetheless, some concerti grossi remained, the most famous of which is Mozart's Sinfonia Concertante for Violin and Viola in E-flat major.
Main characteristics
In the classical period, the theme consists of phrases with contrasting melodic figures and rhythms. These phrases are relatively brief, typically four bars in length, and can occasionally seem sparse or terse. The texture is mainly homophonic, with a clear melody above a subordinate chordal accompaniment, for instance an Alberti bass. This contrasts with the practice in Baroque music, where a piece or movement would typically have only one musical subject, which would then be worked out in a number of voices according to the principles of counterpoint, while maintaining a consistent rhythm or metre throughout. As a result, Classical music tends to have a lighter, clearer texture than the Baroque. The classical style draws on the style galant, a musical style which emphasised light elegance in place of the Baroque's dignified seriousness and impressive grandeur.
Structurally, Classical music generally has a clear musical form, with a well-defined contrast between tonic and dominant, introduced by clear cadences. Dynamics are used to highlight the structural characteristics of the piece. In particular, sonata form and its variants were developed during the early classical period and were frequently used. The Classical approach to structure again contrasts with the Baroque, where a composition would normally move between tonic and dominant and back again, but through a continual progression of chord changes and without a sense of "arrival" at the new key. While counterpoint was less emphasised in the classical period, it was by no means forgotten, especially later in the period, and composers still used counterpoint in "serious" works such as symphonies and string quartets, as well as religious pieces, such as Masses.
The classical musical style was supported by technical developments in instruments. The widespread adoption of equal temperament made classical musical structure possible, by ensuring that cadences in all keys sounded similar. The fortepiano and then the pianoforte replaced the harpsichord, enabling more dynamic contrast and more sustained melodies. Over the Classical period, keyboard instruments became richer, more sonorous and more powerful.
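Equal temperament divides the octave into twelve identical frequency ratios, so every semitone step multiplies the frequency by the same factor, 2 to the power 1/12; that uniformity is what makes cadences in all keys sound alike. As a brief illustration of the arithmetic only (using the modern A4 = 440 Hz reference, a present-day convention rather than a Classical-period pitch standard, and a hypothetical function name):

def equal_tempered_frequency(semitones_from_a4, a4=440.0):
    # In 12-tone equal temperament, each semitone multiplies the
    # frequency by 2 ** (1/12), so twelve steps (an octave) double it.
    return a4 * 2 ** (semitones_from_a4 / 12)

print(round(equal_tempered_frequency(0), 2))    # A4: 440.0
print(round(equal_tempered_frequency(3), 2))    # C5: about 523.25
print(round(equal_tempered_frequency(12), 2))   # A5: exactly 880.0

Because every key is built from the same uniform steps, a modulation from one key to another preserves all interval sizes, unlike older unequal temperaments in which some keys sounded noticeably out of tune.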
The orchestra increased in size and range, and became more standardised. The basso continuo role of the harpsichord or pipe organ in the orchestra fell out of use between 1750 and 1775, leaving the string section. Woodwinds became a self-contained section, consisting of clarinets, oboes, flutes and bassoons.
While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large-scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures.
History
Baroque/Classical transition c. 1750–1760
In his book The Classical Style, author and pianist Charles Rosen claims that from 1755 to 1775, composers groped for a new style that was more effectively dramatic. In the High Baroque period, dramatic expression was limited to the representation of individual affects (the "doctrine of affections", or what Rosen terms "dramatic sentiment"). For example, in Handel's oratorio Jephtha, the composer renders four emotions separately, one for each character, in the quartet "O, spare your daughter". Eventually this depiction of individual emotions came to be seen as simplistic and unrealistic; composers sought to portray multiple emotions, simultaneously or progressively, within a single character or movement ("dramatic action"). Thus in the finale of act 2 of Mozart's Die Entführung aus dem Serail, the lovers move "from joy through suspicion and outrage to final reconciliation."
Musically speaking, this "dramatic action" required more musical variety. Whereas Baroque music was characterized by seamless flow within individual movements and largely uniform textures, composers after the High Baroque sought to interrupt this flow with abrupt changes in texture, dynamic, harmony, or tempo. Among the stylistic developments which followed the High Baroque, the most dramatic came to be called Empfindsamkeit (roughly "sensitive style"), and its best-known practitioner was Carl Philipp Emanuel Bach. Composers of this style employed the above-discussed interruptions in the most abrupt manner, and the music can sound illogical at times. The Italian composer Domenico Scarlatti took these developments further. His more than five hundred single-movement keyboard sonatas also contain abrupt changes of texture, but these changes are organized into periods, balanced phrases that became a hallmark of the classical style. However, Scarlatti's changes in texture still sound sudden and unprepared. The outstanding achievement of the great classical composers (Haydn, Mozart and Beethoven) was their ability to make these dramatic surprises sound logically motivated, so that "the expressive and the elegant could join hands."
Between the death of J. S. Bach and the maturity of Haydn and Mozart (roughly 1750–1770), composers experimented with these new ideas, which can be seen in the music of Bach's sons. Johann Christian developed a style which we now call Rococo, comprising simpler textures and harmonies, and which was "charming, undramatic, and a little empty." As mentioned previously, Carl Philipp Emanuel sought to increase drama, and his music was "violent, expressive, brilliant, continuously surprising, and often incoherent." And finally Wilhelm Friedemann, J.S. Bach's eldest son, extended Baroque traditions in an idiomatic, unconventional way.
At first the new style took over Baroque forms—the ternary da capo aria, the sinfonia and the concerto—but composed with simpler parts, more notated ornamentation, rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures. One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords.
Baroque music generally uses many harmonic fantasies and polyphonic sections that focus less on the structure of the musical piece, with less emphasis on clear musical phrases. In the classical period, the harmonies became simpler. However, the structure of the piece, its phrases and small melodic or rhythmic motives, became much more important than in the Baroque period.
Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, including Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed greatest public success.
The phase between the Baroque and the rise of the Classical (around 1730), was home to various competing musical styles. The diversity of artistic paths are represented in the sons of Johann Sebastian Bach: Wilhelm Friedemann Bach, who continued the Baroque tradition in a personal way; Johann Christian Bach, who simplified textures of the Baroque and most clearly influenced Mozart; and Carl Philipp Emanuel Bach, who composed passionate and sometimes violently eccentric music of the Empfindsamkeit movement. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C. P. E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form.
1750–1775
By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music.
The "normal" orchestra ensemble—a body of strings supplemented by winds—and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces was still set with some Baroque characteristics: individual movements still focused on one "affect" (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough.
The first great master of the style was the composer Joseph Haydn. In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As vice-Kapellmeister and later Kapellmeister, he expanded his output, composing over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many.
While some scholars suggest that Haydn was overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned—earning him the titles "father of the symphony" and "father of the string quartet".
One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism—the Sturm und Drang, or "storm and stress" phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature: however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades.
The Farewell Symphony, No. 45 in F-sharp minor, exemplifies Haydn's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the "mature" Classical style, in which the period of reaction against late Baroque complexity yielded to a period of integration of Baroque and Classical elements.
1775–1790
Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early in his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music.
Haydn's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn's ideas and applied them to two of the major genres of the day: opera, and the virtuoso concerto. Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts.
Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him.
Mozart's arrival in Vienna in 1781 brought an acceleration in the development of the Classical style. There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous 20 years. His own taste for flashy brilliance, rhythmically complex melodies and figures, long cantilena melodies, and virtuoso flourishes was merged with an appreciation for formal coherence and internal connectedness. It is at this point that war and economic inflation halted a trend to larger orchestras and forced the disbanding or reduction of many theater orchestras. This pressed the Classical style inwards: toward seeking greater ensemble and technical challenges—for example, scattering the melody across woodwinds, or using a melody harmonized in thirds. This process placed a premium on small ensemble music, called chamber music. It also led to a trend for more public performance, giving a further boost to the string quartet and other small ensemble groupings.
It was during this decade that public taste began, increasingly, to recognize that Haydn and Mozart had reached a high standard of composition. By the time Mozart arrived at age 25, in 1781, the dominant styles of Vienna were recognizably connected to the emergence in the 1750s of the early Classical style. By the end of the 1780s, changes in performance practice, the relative standing of instrumental and vocal music, technical demands on musicians, and stylistic unity had become established in the composers who imitated Mozart and Haydn. During this decade Mozart composed his most famous operas, his six late symphonies that helped to redefine the genre, and a string of piano concerti that still stand at the pinnacle of these forms.
One composer who was influential in spreading the more serious style that Mozart and Haydn had formed is Muzio Clementi, a gifted virtuoso pianist who tied with Mozart in a musical "duel" before the emperor in which they each improvised on the piano and performed their compositions. Clementi's sonatas for the piano circulated widely, and he became the most successful composer in London during the 1780s. Also in London at this time was Jan Ladislav Dussek, who, like Clementi, encouraged piano makers to extend the range and other features of their instruments, and then fully exploited the newly opened up possibilities. The importance of London in the Classical period is often overlooked, but it served as the home to the Broadwood factory for piano manufacturing and as the base for composers who, while less notable than the "Vienna School", had a decisive influence on what came later. They were composers of many fine works, notable in their own right. London's taste for virtuosity may well have encouraged the complex passage work and extended statements on tonic and dominant.
Around 1790–1820
When Haydn and Mozart began composing, symphonies were played as single movements—before, between, or as interludes within other works—and many of them lasted only ten or twelve minutes; instrumental groups had varying standards of playing, and the continuo was a central part of music-making.
In the intervening years, the social world of music had seen dramatic changes. International publication and touring had grown explosively, and concert societies formed. Notation became more specific, more descriptive—and schematics for works had been simplified (yet became more varied in their exact working out). In 1790, just before Mozart's death, with his reputation spreading rapidly, Haydn was poised for a series of successes, notably his late oratorios and London symphonies. Composers in Paris, Rome, and all over Germany turned to Haydn and Mozart for their ideas on form.
In the 1790s, a new generation of composers, born around 1770, emerged. While they had grown up with the earlier styles, they heard in the recent works of Haydn and Mozart a vehicle for greater expression. In 1788 Luigi Cherubini settled in Paris and in 1791 composed Lodoiska, an opera that raised him to fame. Its style is clearly reflective of the mature Haydn and Mozart, and its instrumentation gave it a weight that had not yet been felt in the grand opera. His contemporary Étienne Méhul extended instrumental effects with his 1790 opera Euphrosine et Coradin, from which followed a series of successes. The final push towards change came from Gaspare Spontini, who was deeply admired by future romantic composers such as Weber, Berlioz and Wagner. The innovative harmonic language of his operas, their refined instrumentation and their "enchained" closed numbers (a structural pattern which was later adopted by Weber in Euryanthe and from him handed down, through Marschner, to Wagner), formed the basis from which French and German romantic opera had its beginnings.
The most fateful of the new generation was Ludwig van Beethoven, who launched his numbered works in 1794 with a set of three piano trios, which remain in the repertoire. Somewhat younger than the others, though equally accomplished because of his youthful study under Mozart and his native virtuosity, was Johann Nepomuk Hummel. Hummel studied under Haydn as well; he was a friend to Beethoven and Franz Schubert. He concentrated more on the piano than any other instrument, and his time in London in 1791 and 1792 generated the composition and publication in 1793 of three piano sonatas, opus 2, which idiomatically used Mozart's techniques of avoiding the expected cadence, and Clementi's sometimes modally uncertain virtuoso figuration. Taken together, these composers can be seen as the vanguard of a broad change in style and the center of music. They studied one another's works, copied one another's gestures in music, and on occasion behaved like quarrelsome rivals.
The crucial differences with the previous wave can be seen in the downward shift in melodies, increasing durations of movements, the acceptance of Mozart and Haydn as paradigmatic, the greater use of keyboard resources, the shift from "vocal" writing to "pianistic" writing, the growing pull of the minor and of modal ambiguity, and the increasing importance of varying accompanying figures to bring "texture" forward as an element in music. In short, the late Classical was seeking music that was internally more complex. The growth of concert societies and amateur orchestras, marking the importance of music as part of middle-class life, contributed to a booming market for pianos, piano music, and virtuosi to serve as exemplars. Hummel, Beethoven, and Clementi were all renowned for their improvising.
The direct influence of the Baroque continued to fade: the figured bass grew less prominent as a means of holding performance together, and the performance practices of the mid-18th century continued to die out. However, at the same time, complete editions of Baroque masters began to become available, and the influence of Baroque style continued to grow, particularly in the ever more expansive use of brass. Another feature of the period is the growing number of performances where the composer was not present. This led to increased detail and specificity in notation; for example, there were fewer "optional" parts that stood separately from the main score.
The force of these shifts became apparent with Beethoven's 3rd Symphony, given the name Eroica, which is Italian for "heroic", by the composer. As with Stravinsky's The Rite of Spring, it may not have been the first in all of its innovations, but its aggressive use of every part of the Classical style set it apart from its contemporary works in length, ambition, and harmonic resources, making it the first symphony of the Romantic era.
First Viennese School
The First Viennese School is a name mostly used to refer to three composers of the Classical period in late-18th-century Vienna: Haydn, Mozart, and Beethoven. Franz Schubert is occasionally added to the list.
In German-speaking countries, the term Wiener Klassik (lit. Viennese classical era/art) is used. That term is often more broadly applied to the Classical era in music as a whole, as a means to distinguish it from other periods that are colloquially referred to as classical, namely Baroque and Romantic music.
The term "Viennese School" was first used by Austrian musicologist Raphael Georg Kiesewetter in 1834, although he only counted Haydn and Mozart as members of the school. Other writers followed suit, and eventually Beethoven was added to the list. The designation "first" is added today to avoid confusion with the Second Viennese School.
Whilst, Schubert apart, these composers certainly knew each other (with Haydn and Mozart even being occasional chamber-music partners), there is no sense in which they were engaged in a collaborative effort in the sense that one would associate with 20th-century schools such as the Second Viennese School, or Les Six. Nor is there any significant sense in which one composer was "schooled" by another (in the way that Berg and Webern were taught by Schoenberg), though it is true that Beethoven for a time received lessons from Haydn.
Attempts to extend the First Viennese School to include such later figures as Anton Bruckner, Johannes Brahms, and Gustav Mahler are merely journalistic, and never encountered in academic musicology.
Classical influence on later composers
Musical eras and their prevalent styles, forms and instruments seldom disappear at once; instead, features are replaced over time, until the old approach is simply felt as "old-fashioned". The Classical style did not "die" suddenly; rather, it gradually got phased out under the weight of changes. To give just one example, while it is generally stated that the Classical era stopped using the harpsichord in orchestras, this did not happen all of a sudden at the start of the Classical era in 1750. Rather, orchestras slowly stopped using the harpsichord to play basso continuo until the practice was discontinued by the end of the 1700s.
One crucial change was the shift towards harmonies centering on "flatward" keys: shifts in the subdominant direction. In the Classical style, major key was far more common than minor, chromaticism being moderated through the use of "sharpward" modulation (e.g., a piece in C major modulating to G major, D major, or A major, all of which are keys with more sharps). As well, sections in the minor mode were often used for contrast. Beginning with Mozart and Clementi, there began a creeping colonization of the subdominant region (the ii or IV chord, which in the key of C major would be the keys of d minor or F major). With Schubert, subdominant modulations flourished after being introduced in contexts in which earlier composers would have confined themselves to dominant shifts (modulations to the dominant chord, e.g., in the key of C major, modulating to G major). This introduced darker colors to music, strengthened the minor mode, and made structure harder to maintain. Beethoven contributed to this by his increasing use of the fourth as a consonance, and modal ambiguity—for example, the opening of the Symphony No. 9 in D minor.
Ludwig van Beethoven, Franz Schubert, Carl Maria von Weber, Johann Nepomuk Hummel, and John Field are among the most prominent in this generation of "Proto-Romantics", along with the young Felix Mendelssohn. Their sense of form was strongly influenced by the Classical style. While they were not yet "learned" composers (imitating rules which were codified by others), they directly responded to works by Haydn, Mozart, Clementi, and others, as they encountered them. The instrumental forces at their disposal in orchestras were also quite "Classical" in number and variety, permitting similarity with Classical works.
However, the forces destined to end the hold of the Classical style gathered strength in the works of many of the above composers, particularly Beethoven. The most commonly cited one is harmonic innovation. Also important is the increasing focus on having a continuous and rhythmically uniform accompanying figuration: Beethoven's Moonlight Sonata was the model for hundreds of later pieces—where the shifting movement of a rhythmic figure provides much of the drama and interest of the work, while a melody drifts above it. Greater knowledge of works, greater instrumental expertise, increasing variety of instruments, the growth of concert societies, and the unstoppable domination of the increasingly more powerful piano (which was given a bolder, louder tone by technological developments such as the use of steel strings, heavy cast-iron frames and sympathetically vibrating strings) all created a huge audience for sophisticated music. All of these trends contributed to the shift to the "Romantic" style.
Drawing the line between these two styles is very difficult: some sections of Mozart's later works, taken alone, are indistinguishable in harmony and orchestration from music written 80 years later—and some composers continued to write in normative Classical styles into the early 20th century. Even before Beethoven's death, composers such as Louis Spohr were self-described Romantics, incorporating, for example, more extravagant chromaticism in their works (e.g., using chromatic harmonies in a piece's chord progression). Conversely, works such as Schubert's Symphony No. 5, written during the chronological end of the Classical era and dawn of the Romantic era, exhibit a deliberately anachronistic artistic paradigm, harking back to the compositional style of several decades before.
However, Vienna's fall as the most important musical center for orchestral composition during the late 1820s, precipitated by the deaths of Beethoven and Schubert, marked the Classical style's final eclipse—and the end of its continuous organic development of one composer learning in close proximity to others. Franz Liszt and Frédéric Chopin visited Vienna when they were young, but they then moved on to other cities. Composers such as Carl Czerny, while deeply influenced by Beethoven, also searched for new ideas and new forms to contain the larger world of musical expression and performance in which they lived.
Renewed interest in the formal balance and restraint of 18th century classical music led in the early 20th century to the development of so-called Neoclassical style, which numbered Stravinsky and Prokofiev among its proponents, at least at certain times in their careers.
Classical period instruments
Guitar
The Baroque guitar, with four or five sets of double strings or "courses" and elaborately decorated soundhole, was a very different instrument from the early classical guitar which more closely resembles the modern instrument with the standard six strings. Judging by the number of instructional manuals published for the instrument – over three hundred texts were published by over two hundred authors between 1760 and 1860 – the classical period marked a golden age for guitar.
Strings
In the Baroque era, there was more variety in the bowed stringed instruments used in ensembles, with instruments such as the viola d'amore and a range of fretted viols being used, ranging from small viols to large bass viols. In the Classical period, the string section of the orchestra was standardized as just four instruments:
Violin (in orchestras and chamber music, typically there are first violins and second violins, with the former playing the melody and/or a higher line and the latter playing either a countermelody, a harmony part, a part below the first violin line in pitch, or an accompaniment line)
Viola (the alto voice of the orchestral string section and string quartet; it often performs "inner voices", which are accompaniment lines which fill in the harmony of the piece)
Cello (the cello plays two roles in Classical era music; at times it is used to play the bassline of the piece, typically doubled by the double basses [Note: When cellos and double basses read the same bassline, the basses play an octave below the cellos, because the bass is a transposing instrument]; and at other times it performs melodies and solos in the lower register)
Double bass (the bass typically performs the lowest pitches in the string section in order to provide the bassline for the piece)
In the Baroque era, the double bass players were not usually given a separate part; instead, they typically played the same basso continuo bassline as the cellos and other low-pitched instruments (e.g., theorbo, serpent wind instrument, viols), albeit an octave below the cellos, because the double bass is a transposing instrument that sounds one octave lower than it is written. In the Classical era, some composers continued to write only one bass part for their symphony, labeled "bassi"; this bass part was played by cellists and double bassists. During the Classical era, some composers began to give the double basses their own part.
Woodwinds
It was commonplace for orchestras to have at least two wind players, usually oboes, flutes, clarinets, or sometimes English horns (see Symphony No. 22 (Haydn)). Patrons also usually employed an ensemble consisting entirely of winds, called the harmonie, which would be engaged for certain events. The harmonie would sometimes join the larger string orchestra to serve as the wind section.
Piccolo (used in military bands)
Flute
Oboe
English horn
Clarinet
Basset horn
Basset Clarinet
Clarinette d'amour
Bassoon
Contrabassoon
Bagpipe (see Leopold Mozart's divertimento, "Die Bauernhochzeit" or "Peasant Wedding")
Percussion
Timpani
"Turkish music":
Bass drum
Cymbals
Triangle
Tambourine
Keyboards
Clavichord
Fortepiano (the forerunner to the modern piano)
Harpsichord, the standard Baroque era basso continuo keyboard instrument, was used until the 1750s, after which time it was gradually phased out, and replaced with the fortepiano and then the piano. By the early 1800s, the harpsichord was no longer used.
Organ
Brasses
Natural horn
Natural trumpet
Sackbut (Trombone precursor)
Serpent (instrument)
Post horn (see Serenade No. 9 (Mozart))
See also
List of Classical-era composers
Notes
Further reading
Downs, Philip G. (1992). Classical Music: The Era of Haydn, Mozart, and Beethoven, 4th vol of Norton Introduction to Music History. W. W. Norton. (hardcover).
Grout, Donald Jay; Palisca, Claude V. (1996). A History of Western Music, Fifth Edition. W. W. Norton. (hardcover).
Hanning, Barbara Russano; Grout, Donald Jay (1998 rev. 2006). Concise History of Western Music. W. W. Norton. (hardcover).
Kennedy, Michael (2006). The Oxford Dictionary of Music. 985 pages.
Lihoreau, Tim; Fry, Stephen (2004). Stephen Fry's Incomplete and Utter History of Classical Music. Boxtree.
Rosen, Charles (1972 expanded 1997). The Classical Style. New York: W. W. Norton. (expanded edition with CD, 1997)
Taruskin, Richard (2005; rev. paperback version 2009). Oxford History of Western Music. Oxford University Press (US).
External links
Classical Net – Classical music reference site |
5295 | https://en.wikipedia.org/wiki/Character%20encoding | Character encoding | Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map".
Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.
History
The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).
Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known.
The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard.
Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code.
IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. BCD was the precursor of IBM's Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters.
In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count.
The compromise solution that was eventually found was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.
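The effect of this compromise can be seen in any modern variable-length encoding. The short Python sketch below is purely illustrative (the sample characters are arbitrary): characters from the old 7-bit range occupy a single 8-bit code unit in UTF-8, while higher code points expand into longer byte sequences.

    # Minimal sketch: ASCII-range characters take one byte in UTF-8,
    # while higher code points expand into two, three or four bytes.
    for ch in ("A", "é", "€", "𐐀"):
        data = ch.encode("utf-8")
        print(f"U+{ord(ch):04X} -> {len(data)} byte(s): {data.hex(' ')}")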
Terminology
Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important.
A character is a minimal unit of text that has semantic value.
A character set is a collection of elements used to represent text. For example, the Latin alphabet and Greek alphabet are both character sets.
A coded character set is a character set mapped to a set of unique numbers. For historical reasons, this is also often referred to as a code page.
A character repertoire is the set of characters that can be represented by a particular coded character set. The repertoire may be closed, meaning that no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series); or it may be open, allowing additions (as is the case with Unicode and to a limited extent Windows code pages).
A code point is a value or position of a character in a coded character set.
A code space is the range of numerical values spanned by a coded character set.
A code unit is the minimum bit combination that can represent a character in a character encoding (in computer science terms, it is the word size of the character encoding). For example, common code units include 7-bit, 8-bit, 16-bit, and 32-bit. In some encodings, some characters are encoded using multiple code units; such an encoding is referred to as a variable-width encoding.
Code pages
"Code page" is a historical name for a coded character set.
Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437).
Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general.
The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP".
Code units
The code unit size is equivalent to the bit measurement for the particular encoding:
A code unit in US-ASCII consists of 7 bits;
A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits;
A code unit in UTF-16 consists of 16 bits;
A code unit in UTF-32 consists of 32 bits.
Code points
A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:
UTF-8: code points map to a sequence of one, two, three or four code units.
UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs".
UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.
Characters
Exactly what constitutes a character varies between character encodings.
For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems.
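The difference between the two approaches can be made concrete with a small, illustrative Python sketch (the letter "é" is an arbitrary example): the precomposed form is a single code point, the decomposed form is a base letter followed by a combining diacritic, and Unicode normalization converts one into the other.

    import unicodedata

    # Minimal sketch: one precomposed character versus base letter plus combining mark.
    precomposed = "\u00e9"     # é as a single code point
    decomposed = "e\u0301"     # e followed by COMBINING ACUTE ACCENT
    print(len(precomposed), len(decomposed))                        # 1 2
    print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True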
Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character.
Unicode encoding model
Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process:
An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time.
A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points.
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF.
A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU).
Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion.
Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang.
The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers.
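The role of the character encoding scheme layer can be observed by comparing byte output directly. In the illustrative Python sketch below (the sample text is arbitrary), the compound scheme UTF-16 prepends a byte order mark, while the simple schemes UTF-16LE and UTF-16BE fix the byte order and therefore emit none.

    # Minimal sketch: the compound scheme "UTF-16" writes a byte order mark (BOM);
    # UTF-16LE and UTF-16BE fix the byte order and need no BOM.
    text = "A\u20ac"    # "A" plus the euro sign
    for codec in ("utf-16", "utf-16-le", "utf-16-be"):
        print(codec, "->", text.encode(codec).hex(" "))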
Unicode code points
In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
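Because every plane spans 0x10000 code points, the plane of a character follows from simple integer division, as the following illustrative Python sketch shows (the sample characters are arbitrary).

    # Minimal sketch: plane number = code point // 0x10000; plane 0 is the BMP.
    for ch in ("A", "€", "𐐀"):
        cp = ord(ch)
        plane = cp // 0x10000
        label = "Basic Multilingual Plane" if plane == 0 else "supplementary plane"
        print(f"U+{cp:04X} lies in plane {plane} ({label})")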
Example
Consider a string of the letters "ab̲c𐐀"—that is, a string containing a Unicode combining character () as well as a supplementary character (). This string has several Unicode representations which are logically equivalent, yet each is suited to a diverse set of circumstances or range of requirements:
Four composed characters:
, , ,
Five graphemes:
, , , ,
Five Unicode code points:
, , , ,
Five UTF-32 code units (32-bit integer values):
, , , ,
Six UTF-16 code units (16-bit integers):
, , , , ,
Nine UTF-8 code units (8-bit values, or bytes):
, , , , , , , ,
Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related.
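These counts can be checked with a short Python sketch. It assumes the combining mark is U+0332 COMBINING LOW LINE; any other combining character from the two-byte UTF-8 range would give the same totals.

    # Minimal sketch: count code points and code units for the example string.
    s = "a" + "b\u0332" + "c" + "\U00010400"
    print("code points :", len(s))                           # 5
    print("UTF-32 units:", len(s.encode("utf-32-be")) // 4)  # 5
    print("UTF-16 units:", len(s.encode("utf-16-be")) // 2)  # 6
    print("UTF-8 units :", len(s.encode("utf-8")))           # 9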
Transcoding
As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below.
Cross-platform:
Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu.
iconv – a program and standardized API to convert encodings
luit – a program that converts encoding of input and output to programs running interactively
International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C.
Windows:
Encoding.Convert – .NET API
MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI
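At its core, transcoding is a decode step followed by an encode step. The Python sketch below is a simplified stand-in for what a tool such as iconv automates; the sample text and the choice of encodings are arbitrary.

    # Minimal sketch: decode the source bytes to code points, then re-encode them.
    def transcode(data: bytes, src: str, dst: str) -> bytes:
        return data.decode(src).encode(dst)

    latin1_bytes = "Þjóðvegur".encode("latin-1")
    print(transcode(latin1_bytes, "latin-1", "utf-8").hex(" "))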
See also
Percent-encoding
Alt code
Character encodings in HTML
:Category:Character encoding – articles related to character encoding in general
:Category:Character sets – articles detailing specific character encodings
Hexadecimal representations
Mojibake – character set mismap
Mojikyō – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure
Presentation layer
TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters.
Universal Character Set characters
Charset sniffing – used in some applications when character encoding metadata is not available
Common character encodings
ISO 646
ASCII
EBCDIC
ISO 8859:
ISO 8859-1 Western Europe
ISO 8859-2 Western and Central Europe
ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto)
ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp)
ISO 8859-5 Cyrillic alphabet
ISO 8859-6 Arabic
ISO 8859-7 Greek
ISO 8859-8 Hebrew
ISO 8859-9 Western Europe with amended Turkish character set
ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set
ISO 8859-11 Thai
ISO 8859-13 Baltic languages plus Polish
ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh)
ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1
ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic)
CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872
MS-Windows character sets:
Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian)
Windows-1251 for Cyrillic alphabets
Windows-1252 for Western languages
Windows-1253 for Greek
Windows-1254 for Turkish
Windows-1255 for Hebrew
Windows-1256 for Arabic
Windows-1257 for Baltic languages
Windows-1258 for Vietnamese
Mac OS Roman
KOI8-R, KOI8-U, KOI7
MIK
ISCII
TSCII
VISCII
JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms.
Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS)
EUC-JP
ISO-2022-JP
JIS X 0213 is an extended version of JIS X 0208.
Shift_JIS-2004
EUC-JIS-2004
ISO-2022-JP-2004
Chinese Guobiao
GB 2312
GBK (Microsoft Code page 936)
GB 18030
Taiwan Big5 (a more famous variant is Microsoft Code page 950)
Hong Kong HKSCS
Korean
KS X 1001 is a Korean double-byte character encoding standard
EUC-KR
ISO-2022-KR
Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane')
UTF-8
UTF-16
UTF-32
ANSEL or ISO/IEC 6937
References
Further reading
External links
Character sets registered by Internet Assigned Numbers Authority (IANA)
Characters and encodings, by Jukka Korpela
Unicode Technical Report #17: Character Encoding Model
Decimal, Hexadecimal Character Codes in HTML Unicode – Encoding converter
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky (Oct 10, 2003)
Encoding |
5298 | https://en.wikipedia.org/wiki/Control%20character | Control character | In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters (or printable characters), except perhaps for "space" characters. In the ASCII standard there are 33 control characters, such as code 7 (BEL), which rings a terminal bell.
History
Procedural signs in Morse code are a form of control character.
A form of control character was introduced in the 1870 Baudot code: NUL and DEL.
The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters.
The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character.
Some control characters have also been called "format effectors".
In ASCII
There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 27 (escape), followed by a series of characters called a "control sequence" or "escape sequence". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 27, followed by the printable characters "[2;10H", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64. But the number of non-standard variations in use is large, especially among printers, where technology has advanced far faster than any standards body can possibly keep up with.
All entries in the ASCII table below code 32 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. The code 127 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 128 through 159 as control characters. This was primarily done so that if the high bit was stripped, it would not change a printing character to a C0 control code. This second set is called the C1 set.
These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these "Formatting characters" (such as the zero-width non-joiner) and the 65 control characters.
The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals.
The control characters in ASCII still in common use include:
0x00 (null, NUL, \0, ^@), originally intended to be an ignored character, but now used by many programming languages including C to mark the end of a string.
0x07 (bell, BEL, \a, ^G), which may cause the device to emit a warning such as a bell or beep sound or the screen flashing.
0x08 (backspace, BS, \b, ^H), may overprint the previous character.
0x09 (horizontal tab, HT, \t, ^I), moves the printing position right to the next tab stop.
0x0A (line feed, LF, \n, ^J), moves the print head down one line, or to the left edge and down. Used as the end of line marker in most UNIX systems and variants.
0x0B (vertical tab, VT, \v, ^K), vertical tabulation.
0x0C (form feed, FF, \f, ^L), to cause a printer to eject paper to the top of the next page, or a video terminal to clear the screen.
0x0D (carriage return, CR, \r, ^M), moves the printing position to the start of the line, allowing overprinting. Used as the end of line marker in Classic Mac OS, OS-9, FLEX (and variants). A CR+LF pair is used by CP/M-80 and its derivatives including DOS and Windows, and by Application Layer protocols such as FTP, SMTP, and HTTP.
0x1A (Control-Z, SUB, ^Z). Acts as an end-of-file for the Windows text-mode file i/o.
0x1B (escape, ESC, \e (GCC only), ^[). Introduces an escape sequence.
Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input on Unix or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output.
In Unicode
In Unicode, "Control-characters" are U+0000—U+001F (C0 controls), U+007F (delete), and U+0080—U+009F (C1 controls). Their General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead.
Display
There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding:
Code point: decimal 7, hexadecimal 0x07
An abbreviation, often three capital letters: BEL
A special character condensing the abbreviation: Unicode U+2407 (␇), "symbol for bell"
An ISO 2047 graphical representation: Unicode U+237E (⍾), "graphic for bell"
Caret notation in ASCII, where code point 00xxxxx is represented as a caret followed by the capital letter at code point 10xxxxx: ^G
An escape sequence, as in C/C++ character string codes: \a, \007, \x07, etc.
How control characters map to keyboards
ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 0x40 from the ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 0x1F, forcing bits 5 to 7 to zero. For example, pressing "control" and the letter "g" (which is 0110 0111 in binary) produces the code 7 (BELL, 7 in base ten, or 0000 0111 in binary). The NULL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, some terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. Neither approach works to produce the DEL character because of its special location in the table and its value (code 127); Ctrl-? is sometimes used for this character.
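Both implementations reduce to one line of arithmetic each; the following C sketch illustrates only that arithmetic, not any particular keyboard driver:

    #include <stdio.h>

    /* Implementation 1: subtract 0x40 from the (uppercase) letter's code. */
    static int ctrl_subtract(int uppercase_letter) {
        return uppercase_letter - 0x40;
    }

    /* Implementation 2: clear bits 5-7 by ANDing with 0x1F. */
    static int ctrl_mask(int key_code) {
        return key_code & 0x1F;
    }

    int main(void) {
        printf("Ctrl-G via subtraction: %d\n", ctrl_subtract('G'));  /* 7 (BEL) */
        printf("Ctrl-g via masking:     %d\n", ctrl_mask('g'));      /* 7 (BEL) */
        printf("Ctrl-@ (NUL):           %d\n", ctrl_mask('@'));      /* 0       */
        return 0;
    }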
When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) varies between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems.
Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down.
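A minimal C sketch of this rendering rule (value plus 64 for the C0 range, with DEL conventionally shown as ^?):

    #include <stdio.h>

    /* Render a single byte in caret notation where applicable. */
    static void print_caret(unsigned char c) {
        if (c < 0x20)
            printf("^%c", c + 64);   /* e.g. 7 becomes ^G */
        else if (c == 0x7F)
            printf("^?");            /* DEL is conventionally shown as ^? */
        else
            putchar(c);              /* printable characters pass through */
    }

    int main(void) {
        const unsigned char sample[] = { 'h', 'i', 9, 7, 0x7F, '\n' };
        for (size_t i = 0; i < sizeof sample; i++)
            print_caret(sample[i]);
        return 0;
    }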
Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter").
Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above.
The design purpose
The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous.
Printing and display control
Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed.
The carriage return character (CR), when sent to such a device, causes it to move the printing position to the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line).
The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets).
The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading.
The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line.
The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not.
The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing.
With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there were, and remain, a large variety of standard sequences to choose from.
Data structuring
The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards.
End of medium (EM) warns that the tape (or other recording medium) is ending.
While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator.
For example, the RS separator is used by RFC 7464 (JSON Text Sequences) to encode a sequence of JSON elements. Each sequence item starts with an RS character and ends with a line feed. This makes it possible to serialize open-ended JSON sequences. It is one of the JSON streaming protocols.
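A short C sketch of this framing, assuming only the RS-plus-line-feed convention described above; the JSON payloads are arbitrary examples:

    #include <stdio.h>

    /* Write one element of a JSON text sequence: RS, the JSON text, then LF. */
    static void write_json_seq_item(FILE *out, const char *json) {
        fputc(0x1E, out);   /* RS (record separator) starts the item */
        fputs(json, out);   /* the JSON value itself                 */
        fputc(0x0A, out);   /* LF terminates the item                */
    }

    int main(void) {
        write_json_seq_item(stdout, "{\"id\": 1}");
        write_json_seq_item(stdout, "{\"id\": 2}");
        return 0;
    }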
Transmission control
The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors.
The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes.
The escape character (ESC) was intended to "quote" the next character: if that character was another control character, it would be printed rather than performing its control function. It is almost never used for this purpose today. Various printable characters are used as visible "escape characters", depending on context.
The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems.
The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a flag usually indicating that reception of an element was a problem and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate that no problem was detected with the current element.
When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquire character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT).
The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. A sensible protocol for the use of such transmission flow control signals must be used, to avoid potential deadlock conditions, however.
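A hedged C sketch of the XON/XOFF convention; the simulated byte stream and the handle_received_byte() helper are hypothetical stand-ins, not any real serial driver API:

    #include <stdio.h>

    #define XON  0x11   /* DC1, Ctrl-Q: resume transmission */
    #define XOFF 0x13   /* DC3, Ctrl-S: pause transmission  */

    static int paused = 0;   /* set while the far end has asked us to stop */

    /* Call this for every byte that arrives from the far end. */
    static void handle_received_byte(unsigned char b) {
        if (b == XOFF)      paused = 1;               /* stop sending       */
        else if (b == XON)  paused = 0;               /* safe to send again */
        else                printf("data byte 0x%02X\n", b);
    }

    int main(void) {
        /* Simulated traffic from the receiver: some data, a pause, a resume. */
        const unsigned char from_far_end[] = { 'A', XOFF, XON, 'B' };
        for (size_t i = 0; i < sizeof from_far_end; i++) {
            handle_received_byte(from_far_end[i]);
            printf("transmission %s\n", paused ? "paused" : "allowed");
        }
        return 0;
    }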
The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example, a packet may be structured in the following way: <DLE> <STX> <PAYLOAD> <DLE> <ETX>.
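A sketch of such framing in C, including the common byte-stuffing rule in which a DLE inside the payload is doubled so it cannot be mistaken for framing; this illustrates the general idea rather than any specific protocol:

    #include <stdio.h>

    #define DLE 0x10
    #define STX 0x02
    #define ETX 0x03

    /* Frame a payload as DLE STX <stuffed payload> DLE ETX. */
    static void send_frame(const unsigned char *payload, size_t len) {
        putchar(DLE); putchar(STX);
        for (size_t i = 0; i < len; i++) {
            if (payload[i] == DLE)
                putchar(DLE);          /* stuff: double any DLE found in the data */
            putchar(payload[i]);
        }
        putchar(DLE); putchar(ETX);
    }

    int main(void) {
        const unsigned char msg[] = { 'h', 'i', DLE, '!' };
        send_frame(msg, sizeof msg);
        return 0;
    }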
Miscellaneous codes
Code 7 (BEL) is intended to cause an audible signal in the receiving terminal.
Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word— this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.)
Code 0 (ASCII code name NUL) is a special case. In paper tape, it is the case when there are no holes. It is convenient to treat this as a fill character with no meaning otherwise. Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing it is often used for padding in fixed length records and more commonly, to mark the end of a string.
Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black "box" character useful for erasing text by overprinting with ink).
Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in much the same way that they were used on punched tape: one of them to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. Which character plays which role depends on the direction in which the bits switch: for PROMs that switch from one to zero, the blank state corresponds to DEL rather than NUL, so the roles of the two characters are reversed relative to punched tape. Also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL.
Many file systems do not allow control characters in filenames, as they may have reserved functions.
See also
HJKL as arrow keys, used on the ADM-3A terminal
C0 and C1 control codes
Escape sequence
In-band signaling
Whitespace character
Notes and references
External links
ISO IR 1 C0 Set of ISO 646 (PDF)
Carbon
Carbon () is a chemical element with the symbol C and atomic number 6. It is nonmetallic and tetravalent—its atom making four electrons available to form covalent chemical bonds. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally, ¹²C and ¹³C being stable, while ¹⁴C is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth, enables this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen.
The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
Characteristics
The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at about 10.8 MPa and 4,600 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature.
Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order.
Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:
Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2.
Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification:
C + H2O → CO + H2.
Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes.
Allotropes
Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne).
Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.
At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0 Pa, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond for T < 400 K, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond.
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of flat hexagonal cells only, some of the cells of which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.
Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond.
In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy low-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds.
In the vapor phase, some of the carbon is in the form of highly reactive diatomic carbon, dicarbon (C2). When excited, this gas glows green.
Occurrence
Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts.
In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is about 5.97 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below).
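The arithmetic behind that figure is straightforward; a small C sketch, assuming an Earth mass of about 5.97 × 10²⁴ kg and 1 gigatonne = 10¹² kg:

    #include <stdio.h>

    int main(void) {
        const double earth_mass_kg = 5.97e24;  /* approximate mass of the Earth (assumed value) */
        const double carbon_ppm    = 730.0;    /* whole-earth carbon abundance from the text    */
        const double kg_per_gt     = 1.0e12;   /* 1 gigatonne = 10^12 kg                        */

        double carbon_gt = earth_mass_kg * (carbon_ppm / 1.0e6) / kg_per_gt;
        printf("total carbon: about %.0f million gigatonnes\n", carbon_gt / 1.0e6);
        return 0;
    }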
In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas contain about 105 gigatonnes of carbon, and studies estimate that "unconventional" deposits such as shale gas represent about another 540 gigatonnes of carbon.
Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon between 500, 2500, or 3,000 Gt.
According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt.
Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel.
As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured.
Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, roughly one part per trillion of atmospheric carbon dioxide contains carbon-14.
Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids can be used in hypothetical space-based carbon mining, which may be possible in the future, but is currently technologically impossible.
Isotopes
Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C.
Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β emission. Because of its relatively short half-life of 5730 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.
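Radiocarbon dating rests on simple exponential decay; a small C sketch of the age calculation, using the 5,730-year half-life quoted above and an arbitrary example ratio of remaining ¹⁴C:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double half_life_years    = 5730.0;  /* half-life of carbon-14            */
        const double remaining_fraction = 0.25;    /* example: 25% of the original 14C  */

        /* N(t) = N0 * 2^(-t / half_life)  =>  t = -half_life * log2(N / N0) */
        double age_years = -half_life_years * log2(remaining_fraction);
        printf("estimated age: about %.0f years\n", age_years);   /* ~11,460 years */
        return 0;
    }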
There are 15 known isotopes of carbon and the shortest-lived of these is ⁸C, which decays through proton emission and alpha decay and has a half-life of 1.98739 × 10⁻²¹ s. The exotic ¹⁹C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.
Formation in stars
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernova, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars.
The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst.
Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds.
Carbon cycle
Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned.
Compounds
Organic compounds
Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen.
The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules.
In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates.
Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels.
When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965-1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life.
Inorganic compounds
Commonly carbon-containing compounds which are associated with minerals or which do not contain bonds to the other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon atom, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent.
The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN−) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP−), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic.
With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C₂²⁻) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds.
Organometallic compounds
Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds.
While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]2- unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(PhPAu)C] contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η5-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to a η5-C5Me5− fragment through all five of the carbons of the ring.
It is important to note that in the cases above, each of the bonds to carbon contain less than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding.
History and etymology
The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance.
Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air.
In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: when he burned samples of charcoal and diamond, he found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook.
A new allotrope of carbon, fullerene, that was discovered in 1985 includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous.
Production
Graphite
Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water.
There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to science, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka.
According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009.
Diamond
The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure).
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken in order to prevent larger diamonds from being destroyed in this process and subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725.
Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones.
In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana.
Applications
Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil.
The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors.
Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers is used to reinforce plastics to form advanced, lightweight composite materials.
Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel.
Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall and metal.
Diamonds
The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently.
Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds.
Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually.
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics.
Precautions
Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself.
Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to Drosophila.
Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps.
In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust.
The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN), and carbon monoxide; and such essentials to life as glucose and protein.
See also
Carbon chauvinism
Carbon detonation
Carbon footprint
Carbon star
Carbon planet
Gas carbon
Low-carbon economy
Timeline of carbon nanotubes
References
Bibliography
External links
Carbon at The Periodic Table of Videos (University of Nottingham)
Carbon on Britannica
Extensive Carbon page at asu.edu (archived 18 June 2010)
Electrochemical uses of carbon (archived 9 November 2001)
Carbon—Super Stuff. Animation with sound and interactive 3D-models. (archived 9 November 2012)
Computer data storage

Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
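To make the byte and bit arithmetic concrete, here is a minimal Python sketch (the sample string and the five-million-character figure are illustrative assumptions) showing how character data maps to bytes and bits under a one-byte-per-character encoding.

```python
# Rough sketch: how text size in characters relates to bytes and bits,
# assuming one byte per character (as for plain ASCII text).
text = "To be, or not to be"          # hypothetical sample string
encoded = text.encode("ascii")         # one byte per character
print(len(text), "characters ->", len(encoded), "bytes ->", len(encoded) * 8, "bits")

# Scaling the same arithmetic up: ~5 million characters at one byte each
chars = 5_000_000
print(chars, "bytes =", chars * 8, "bits =", chars / 1_000_000, "MB (decimal)")
```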
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit-value flipping, or "physical bit fatigue", the loss of the physical bit's ability to maintain a distinguishable value (0 or 1), or due to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
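As a hedged illustration of detection-only redundancy, the sketch below uses Python's standard zlib.crc32 to checksum a block when it is "written" and to notice a single flipped bit when it is read back; the payload is an arbitrary placeholder.

```python
import zlib

# Illustrative only: a CRC detects (but does not correct) accidental corruption.
payload = bytearray(b"stored block of data")
checksum = zlib.crc32(payload)         # computed when the data is written

payload[3] ^= 0x01                     # simulate a single flipped bit in storage
if zlib.crc32(payload) != checksum:
    print("corruption detected; the read would be retried or the block re-fetched")
```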
Data compression methods allow, in many cases (such as databases), a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed ("decompress") when needed. This uses substantially less storage (by tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between the storage cost saving and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
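A small sketch of the compression trade-off using Python's standard zlib module; the repetitive sample data is an assumption chosen to compress well, and real ratios depend entirely on the data.

```python
import zlib

# Illustrative trade-off: highly redundant data compresses well,
# at the cost of extra CPU work to compress and decompress.
original = b"ABABABABAB" * 10_000                  # hypothetical repetitive data
compressed = zlib.compress(original, level=6)
restored = zlib.decompress(compressed)

assert restored == original                         # lossless round trip
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(original):.1f}% of original)")
```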
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Hierarchy of storage
Generally, the lower a storage tier is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it's not needed by running software. Spare memory can be utilized as RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary storage
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
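The paragraph above describes paging only in general terms. The toy sketch below illustrates just the "evict the least-recently-used page to a swap area" idea; the capacity, page identifiers, and the dict standing in for a page file are invented for illustration and do not reflect how any particular operating system implements virtual memory.

```python
from collections import OrderedDict

# Toy model of paging: a small "RAM" holds a few pages; touching a page makes it
# most-recently used; when RAM is full, the least-recently used page is moved
# to a "swap" area (here just a dict standing in for secondary storage).
RAM_CAPACITY = 3                      # assumed tiny capacity for illustration
ram = OrderedDict()                   # page_id -> page contents
swap = {}                             # stand-in for the page file on disk

def touch(page_id, data=None):
    if page_id in ram:
        ram.move_to_end(page_id)      # mark as most recently used
    else:
        contents = swap.pop(page_id, data)             # page fault: bring it back in
        if len(ram) >= RAM_CAPACITY:
            victim, victim_data = ram.popitem(last=False)  # evict LRU page
            swap[victim] = victim_data
        ram[page_id] = contents
    return ram[page_id]

for p in ["A", "B", "C", "A", "D"]:   # touching "D" forces eviction of "B"
    touch(p, data=f"contents of {p}")
print("in RAM:", list(ram), "| swapped out:", list(swap))
```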
Tertiary storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
Online storage is immediately available for I/O.
Nearline storage is not immediately available, but can be made online quickly without human intervention.
Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Volatility
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Mutability
Read/write storage or mutable storage: Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
Slow write, fast read storage: Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD.
Write once storage: Write once read many (WORM) allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R.
Read only storage: Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM.
Accessibility
Random access: Any location in storage can be accessed at any moment in approximately the same amount of time. Such a characteristic is well suited for primary and secondary storage. Most semiconductor memories, flash memories and hard disk drives provide random access, though both semiconductor and flash memories have minimal latency when compared to hard disk drives, as no mechanical parts need to be moved.
Sequential access: The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. Such a characteristic is typical of off-line storage.
Addressability
Location-addressable: Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient, but burdensome for humans.
File addressable: Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable: Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache.
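A minimal software sketch of the content-addressing idea, assuming a SHA-256 digest as the "address" (in the spirit of git-style object stores); the in-memory dict is a placeholder for a real backing store.

```python
import hashlib

# Minimal sketch of software content addressing: the "address" of a block is a
# hash of its contents, so identical blocks are stored only once.
store = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()   # content-derived address
    store[key] = data
    return key

def get(key: str) -> bytes:
    return store[key]

addr = put(b"the same block stored twice")
addr2 = put(b"the same block stored twice")
assert addr == addr2 and len(store) == 1     # deduplicated by content
print("address:", addr[:16], "...")
```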
Capacity
Raw capacity: The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Memory storage density: The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
Performance
Latency: The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanoseconds for primary storage, milliseconds for secondary storage, and seconds for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory) and, in the case of sequential access storage, minimum, maximum and average latency.
Throughput: The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also, accessing media sequentially, as opposed to randomly, typically yields maximum throughput.
Granularity: The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.
Reliability: The probability of spontaneous bit value change under various conditions, or the overall failure rate.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
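hdparm and sar are Linux tools; as a rough, platform-neutral sketch of measuring sequential throughput, the Python snippet below times writes and reads of a temporary file. The 64 MB size is an arbitrary assumption, and operating-system caching means the figures are only indicative.

```python
import os, time, tempfile

# Rough, cache-affected estimate of sequential write and read throughput for
# whichever device backs the temporary directory; not a substitute for hdparm.
SIZE_MB = 64
block = os.urandom(1024 * 1024)                   # 1 MiB of random data

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    start = time.perf_counter()
    for _ in range(SIZE_MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                          # push the data to the device
    write_s = time.perf_counter() - start

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_s = time.perf_counter() - start
os.remove(path)

print(f"write ~{SIZE_MB / write_s:.0f} MB/s, read ~{SIZE_MB / read_s:.0f} MB/s (cache included)")
```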
Energy use
Storage devices that reduce fan usage or that automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by 90 percent.
2.5-inch hard disk drives often consume less power than larger ones. Low capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.
Security
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Vulnerability and reliability
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Error detection
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
Storage media
The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), have been proposed for development.
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:
Magnetic disk;
Floppy disk, used for off-line storage;
Hard disk drive, used for secondary storage.
Magnetic tape, used for tertiary and off-line storage;
Carousel memory (magnetic rolls).
In early computers, magnetic storage was also used as:
Primary storage in a form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory;
Tertiary (e.g. NCR CRAM) or off line storage in the form of magnetic cards;
Magnetic tape was then often used for secondary storage.
Magnetic storage does not have a definite limit of rewriting cycles, unlike flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, its life span is limited by mechanical parts.
Optical
Optical storage, typified by the optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use:
CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage;
Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.
Paper
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Other storage media or substrates
Vacuum-tube memory: A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable, and the Selectron tube was expensive.
Electro-acoustic memory: Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle-sequential read/write storage, and was used for primary storage.
Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic, onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information.
Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory stores information in polymer that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2).
Magnetic photoconductors store magnetic information, which can be modified by low-light illumination.
DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017 scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA.
Related technologies
Redundancy
While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), the malfunction of a whole storage device requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring (replication) – A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously with some overhead and possibly some delays. The upside is the possible concurrent reading of the same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational and is being utilized to generate a new copy on another device (usually available operational in a pool of stand-by devices for this purpose).
Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of devices to fail and be replaced with the content restored (Device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage, when compared with n=2, at the cost of more processing during both regular operation (with often reduced performance) and defective device replacement.
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
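To illustrate the parity idea behind RAID schemes with n > 2, the sketch below XORs hypothetical data blocks into a parity block and rebuilds one "lost" block from the survivors; real RAID implementations operate on device stripes, not Python byte strings.

```python
from functools import reduce

# XOR parity as used (conceptually) in RAID 5: parity = d0 ^ d1 ^ ... ^ dn,
# so any one lost block equals the XOR of all remaining blocks plus parity.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]          # hypothetical equal-sized blocks
parity = xor_blocks(data)

lost_index = 1                               # pretend the second "drive" failed
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == data[lost_index]
print("rebuilt block:", rebuilt)
```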
Network connectivity
A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to the primary storage, which is shared between multiple processors to a lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any network. This is still a popular approach. The retronym was coined later, together with NAS and SAN.
Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.
Robotic storage
Large quantities of individual magnetic tapes, and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In tape storage field they are known as tape libraries, and in optical storage field optical jukeboxes, or optical disk libraries per analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
See also
Primary storage topics
Aperture (computer memory)
Dynamic random-access memory (DRAM)
Memory latency
Mass storage
Memory cell (disambiguation)
Memory management
Memory leak
Virtual memory
Memory protection
Page address register
Stable storage
Static random-access memory (SRAM)
Secondary, tertiary and off-line storage topics
Cloud storage
Hybrid cloud storage
Data deduplication
Data proliferation
Data storage tag used for capturing research data
Disk utility
File system
List of file formats
Global filesystem
Flash memory
Geoplexing
Information repository
Noise-predictive maximum-likelihood detection
Object(-based) storage
Removable media
Solid-state drive
Spindle
Virtual tape library
Wait state
Write buffer
Write protection
Data storage conferences
Storage Networking World
Storage World Conference
Notes
References
Further reading
Memory & storage, Computer history museum
Chemical equilibrium

In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
Historical introduction
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
α A + β B ⇌ σ S + τ T
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:
forward rate = k+ {A}^α {B}^β
backward rate = k− {S}^σ {T}^τ
where {A}, {B}, {S} and {T} are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:
k+ {A}^α {B}^β = k− {S}^σ {T}^τ
and the ratio of the rate constants is also a constant, now known as an equilibrium constant:
K = k+ / k− = {S}^σ {T}^τ / ({A}^α {B}^β)
By convention, the products form the numerator.
However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:
K = {CH3CO2−} {H3O+} / {CH3CO2H}
If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
A quantitative version is given by the reaction quotient.
J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at a constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
ΔrG° = −RT ln Keq
where R is the universal gas constant and T the temperature.
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
Kc = [S]^σ [T]^τ / ([A]^α [B]^β)
where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and are encountered in high-school chemistry courses.
Thermodynamics
At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
(dG/dξ)T,p = 0 : equilibrium
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products:
α μA + β μB = σ μS + τ μT
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent:
μA = μ°A + RT ln{A}
(where μ°A is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
dG = V dp − S dT + Σi μi dNi.
Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient (νi) and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as
(dG/dξ)T,p = Σi νi μi
which is the Gibbs free energy change for the reaction, ΔrG. This results in:
ΔrG = σ μS + τ μT − α μA − β μB.
By substituting the chemical potentials:
ΔrG = (σ μ°S + τ μ°T − α μ°A − β μ°B) + RT ln( {S}^σ {T}^τ / ({A}^α {B}^β) ),
the relationship becomes:
ΔrG = ΔrG° + RT ln( {S}^σ {T}^τ / ({A}^α {B}^β) )
where ΔrG° = σ μ°S + τ μ°T − α μ°A − β μ°B is the standard Gibbs energy change for the reaction, which can be calculated using thermodynamical tables.
The reaction quotient is defined as:
Qr = {S}^σ {T}^τ / ({A}^α {B}^β)
Therefore,
(dG/dξ)T,p = ΔrG = ΔrG° + RT ln Qr
At equilibrium:
(dG/dξ)T,p = ΔrG = 0 and Qr = Keq
leading to:
ΔrG° = −RT ln Keq
and
Keq = exp(−ΔrG° / RT)
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
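As a numerical illustration of the last relation, the Python sketch below converts a standard Gibbs energy change into an equilibrium constant; the −20 kJ/mol value and the temperature of 298 K are invented for the example.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def equilibrium_constant(delta_g_standard_j_per_mol, temperature_k):
    """K_eq = exp(-ΔrG° / (R T)) -- the relation derived above."""
    return math.exp(-delta_g_standard_j_per_mol / (R * temperature_k))

# Hypothetical example: ΔrG° = -20 kJ/mol at 298 K
K = equilibrium_constant(-20_000, 298.0)
print(f"K_eq ≈ {K:.1f}")   # a negative ΔrG° gives K > 1 (products favoured)
```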
Addition of reactants or products
For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.
If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq, and then (dG/dξ)T,p ≠ 0.
If the activity of a reagent i increases, the reaction quotient decreases. Then Qr < Keq and (dG/dξ)T,p < 0: the reaction will shift to the right (i.e. in the forward direction, and thus more products will form).
If the activity of a product j increases, then Qr > Keq and (dG/dξ)T,p > 0: the reaction will shift to the left (i.e. in the reverse direction, and thus fewer products will form).
Note that activities and equilibrium constants are dimensionless numbers.
Treatment of activity
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc, and an activity coefficient quotient, Γ:
K = ( [S]^σ [T]^τ / ([A]^α [B]^β) ) × ( γS^σ γT^τ / (γA^α γB^β) ) = Kc Γ
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation or extensions such as the Davies equation, Specific ion interaction theory or Pitzer equations may be used. However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
Concentration quotients
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by
I = 1/2 Σi ci zi^2
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
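A small sketch of the same sum, with made-up concentrations for a 1:1 background electrolyte and a doubly charged salt, showing how the z² weighting makes the background salt dominate the ionic strength.

```python
# Ionic strength I = 1/2 * sum(c_i * z_i**2) for a hypothetical 0.10 M NaNO3
# background electrolyte plus 0.001 M of a 2+/2- salt.
species = [                      # (concentration in mol/L, charge)
    (0.10, +1), (0.10, -1),      # Na+, NO3-
    (0.001, +2), (0.001, -2),    # hypothetical M2+ and X2-
]
ionic_strength = 0.5 * sum(c * z ** 2 for c, z in species)
print(f"I = {ionic_strength:.4f} mol/L")   # dominated by the background salt
```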
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted.
Metastable mixtures
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
2 SO2 + O2 ⇌ 2 SO3
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
Pure substances
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
K = {CH3CO2−} {H3O+} / {CH3CO2H}.
A particular case is the self-ionization of water
2 H2O ⇌ H3O+ + OH−
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
Kw = [H3O+] [OH−]
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:
2 CO ⇌ CO2 + C
for which the equation (without solid carbon) is written as:
Kc = [CO2] / [CO]^2
Multiple equilibria
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps, in each of which one proton is liberated:
H2A ⇌ HA− + H+,  K1 = {HA−}{H+} / {H2A}
HA− ⇌ A2− + H+,  K2 = {A2−}{H+} / {HA−}
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants:
H2A ⇌ A2− + 2 H+,  βD = {A2−}{H+}^2 / {H2A} = K1 K2
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants:
A2− + H+ ⇌ HA−,  β1 = {HA−} / ({A2−}{H+})
A2− + 2 H+ ⇌ H2A,  β2 = {H2A} / ({A2−}{H+}^2)
β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD = 1/(K1 K2).
For multiple equilibrium systems, also see: theory of Response reactions.
Effect of temperature
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
d(ln K) / dT = ΔH° / (RT^2)
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
d(ln K) / d(1/T) = −ΔH° / R
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
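The sketch below, using invented K(T) values, shows the textbook procedure just described: fit ln K against 1/T and take the slope as −ΔH°/R. As noted above, with real data the error propagation in this fit is usually severe.

```python
import math

# Fit ln K vs 1/T (van 't Hoff plot) by simple least squares; slope = -ΔH°/R.
# The K values below are invented for illustration only.
R = 8.314                                   # J/(mol*K)
temperatures = [280.0, 300.0, 320.0, 340.0] # K
K_values = [4.2e3, 1.1e3, 3.5e2, 1.3e2]     # hypothetical equilibrium constants

x = [1.0 / T for T in temperatures]
y = [math.log(K) for K in K_values]
n = len(x)
x_mean, y_mean = sum(x) / n, sum(y) / n
sxy = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
sxx = sum((xi - x_mean) ** 2 for xi in x)
slope = sxy / sxx

delta_h = -slope * R                        # J/mol
print(f"estimated ΔH° ≈ {delta_h / 1000:.1f} kJ/mol (negative, i.e. exothermic)")
```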
Effect of electric and magnetic fields
The effect of electric field on equilibrium has been studied by Manfred Eigen among others.
Types of equilibrium
Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases.
In the gas phase: rocket engines
Industrial syntheses such as that of ammonia in the Haber–Bosch process take place through a succession of equilibrium steps, including adsorption processes
Atmospheric chemistry
Seawater and other natural waters: chemical oceanography
Distribution between two phases
log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug
Liquid–liquid extraction, Ion exchange, Chromatography
Solubility product
Uptake and release of oxygen by hemoglobin in blood
Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis
Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium
Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide
In certain oscillating reactions, the approach to equilibrium is not asymptotic but takes the form of a damped oscillation.
The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations.
When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle.
In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
Composition of a mixture
When the only equilibrium is the formation of a 1:1 adduct, there are many ways in which the composition of the mixture can be calculated. For example, see the ICE table for a traditional method of calculating the pH of a solution of a weak acid.
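A minimal worked version of the ICE-table calculation for a monoprotic weak acid, assuming activity coefficients of one and neglecting the self-ionization of water; the Ka and the analytical concentration are hypothetical.

```python
import math

# ICE-table style calculation for a weak acid HA with Ka = x^2 / (C - x),
# where x = [H+] at equilibrium. Solve the quadratic x^2 + Ka*x - Ka*C = 0.
Ka = 1.8e-5      # hypothetical acid dissociation constant (acetic-acid-like)
C = 0.10         # analytical concentration of the acid, mol/L

x = (-Ka + math.sqrt(Ka ** 2 + 4 * Ka * C)) / 2   # positive root = [H+]
pH = -math.log10(x)
print(f"[H+] ≈ {x:.2e} M, pH ≈ {pH:.2f}")
```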
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions.
Minimize the Gibbs energy of the system.
Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass.
Mass-balance equations
In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
TA = [A] + [HA] + [H2A]
TH = [H] + [HA] + 2[H2A] − [OH]
with TA the total concentration of species A and TH the total concentration of hydrogen ion. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]^2 and [OH] = Kw[H]^−1
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants.
General expressions applicable to all systems with two reagents, A and B would be
It is easy to see how this can be extended to three or more reagents.
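One way such a two-reagent system can be solved numerically is sketched below with SciPy, assuming hypothetical association constants and total concentrations; it exploits the fact that, for a given [H], [A] follows directly from the mass-balance equation in A, leaving a single equation in [H].

```python
import math
from scipy.optimize import brentq

# Two-reagent mass balance (A and H) for a dibasic acid, solved by treating [H]
# as the single unknown: for a given [H], [A] follows from the A balance, and
# [H] is then adjusted until the H balance is satisfied. Constants are hypothetical.
b1, b2, Kw = 1.0e9, 1.0e13, 1.0e-14     # association constants and ionic product of water
TA, TH = 0.010, 0.015                   # total concentrations, mol/L

def free_A(H):
    return TA / (1 + b1 * H + b2 * H ** 2)        # from TA = [A] + [HA] + [H2A]

def h_balance(H):
    A = free_A(H)
    return H + b1 * A * H + 2 * b2 * A * H ** 2 - Kw / H - TH

H = brentq(h_balance, 1e-14, 1.0)       # robust one-dimensional root find for [H]
A = free_A(H)
print(f"[H] ≈ {H:.2e} M (pH ≈ {-math.log10(H):.2f}), [A] ≈ {A:.2e} M")
```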
Polybasic acids
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq): it shows the species concentrations for a 5 × 10^−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
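The same kind of speciation curve can be generated numerically. The sketch below prints fraction-of-species values for a generic diprotic acid as a function of pH; the pKa values are hypothetical and the system is not the aluminium one in the diagram, so the output only illustrates the general shape of such a plot.

```python
# Fraction-of-species ("alpha") values for a generic diprotic acid H2A as a
# function of pH. The pKa values below are hypothetical, chosen purely to
# illustrate a speciation diagram.
pKa1, pKa2 = 4.0, 9.0
Ka1, Ka2 = 10.0**-pKa1, 10.0**-pKa2

for pH in range(0, 15, 2):
    H = 10.0**-pH
    D = H*H + Ka1*H + Ka1*Ka2                      # common denominator
    a_H2A, a_HA, a_A = H*H/D, Ka1*H/D, Ka1*Ka2/D   # fractions that sum to 1
    print(f"pH {pH:2d}:  H2A {100*a_H2A:5.1f}%   HA- {100*a_HA:5.1f}%   A2- {100*a_A:5.1f}%")
```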
Solution and precipitation
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are soluble aluminium hydroxide complexes such as Al(OH)2+, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises, more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high, the soluble aluminate ion, Al(OH)4−, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
Minimization of Gibbs energy
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:

dG = Σj μj dNj = 0

where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:

μj = μj° + RT ln Aj

where μj° is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:

Σj aij Nj = bi

where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule, which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:

𝒢 = G + Σi λi (Σj aij Nj − bi)

where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and the λi to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by

μj + Σi λi aij = 0 (for each of the m molecular species j) and Σj aij Nj = bi (for each of the k elements i)
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
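A small numerical sketch of this constrained minimisation for the gas-phase equilibrium N2O4 ⇌ 2 NO2, assuming an ideal-gas mixture and approximate textbook standard Gibbs energies of formation (quoted here only for illustration); the single nitrogen-balance constraint plays the role of the aij, bi equations above.

```python
import numpy as np
from scipy.optimize import minimize

R, T, P = 8.314, 298.15, 1.0             # J/(mol K), K, bar; ideal-gas mixture assumed
# Approximate standard Gibbs energies of formation at 298 K (J/mol);
# textbook values quoted approximately, for illustration only.
g0 = np.array([97_900.0, 51_300.0])      # species order: [N2O4, NO2]
a = np.array([[2.0, 1.0]])               # a[i][j]: nitrogen atoms in each molecule
b = np.array([2.0])                      # total N atoms (start from 1 mol N2O4)

def gibbs_over_RT(n):
    n = np.clip(n, 1e-12, None)          # keep the logarithms defined
    # G/RT = sum_j n_j (mu0_j/RT + ln(P n_j / n_total)) for an ideal-gas mixture
    return float(np.sum(n * (g0 / (R * T) + np.log(P * n / n.sum()))))

constraints = {"type": "eq", "fun": lambda n: a @ n - b}   # element conservation
result = minimize(gibbs_over_RT, x0=[0.5, 1.0],
                  bounds=[(1e-12, None), (1e-12, None)],
                  constraints=constraints)
print("equilibrium amounts [N2O4, NO2] (mol):", result.x)
```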
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:

Σj νj Rj = 0
where νj is the stoichiometric coefficient for the j-th molecule (negative for reactants, positive for products) and Rj is the symbol for the j-th molecule, a properly balanced equation will obey:

Σj νj aij = 0 (for each element i)
Multiplying the first equilibrium condition by νj, summing over j, and using the above equation yields:

0 = Σj νj μj = Σj νj (μj° + RT ln Aj)

As above, defining ΔG = Σj νj μj and ΔG° = Σj νj μj°, this can be written as

ΔG = ΔG° + RT ln (Πj Aj^νj) = ΔG° + RT ln Kc

where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials.
See also
Acidosis
Alkalosis
Arterial blood gas
Benesi–Hildebrand method
Determination of equilibrium constants
Equilibrium constant
Henderson–Hasselbalch equation
Michaelis–Menten kinetics
pCO2
pH
pKa
Redox equilibria
Steady state (chemistry)
Thermodynamic databases for pure substances
Non-random two-liquid model (NRTL model) - Phase equilibrium calculations
UNIQUAC model - Phase equilibrium calculations
References
Further reading
External links
Analytical chemistry
Physical chemistry
Software

Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. Most personal computers, smartphone devices, and servers now have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
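As a rough illustration of this translation step, the sketch below uses Python's standard dis module to display the lower-level instructions that a small high-level function is compiled into; CPython produces bytecode for a virtual machine rather than native machine code, but the principle is the same.

```python
import dis

def add(a, b):
    """A high-level, human-readable definition."""
    return a + b

# Print the lower-level instructions (CPython bytecode) that the interpreter
# actually executes for this function.
dis.dis(add)
```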
History
An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to the development of software.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
Types
On virtually all computer platforms, software can be grouped into a few broad categories.
Purpose, or domain of use
Based on the goal, computer software can be divided into:
Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software.
System software manages hardware behaviour, so as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed to provide a platform for running application software, and it includes the following:
Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system.
Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, a computer typically needs more than one device driver.
Utilities are computer programs designed to assist users in the maintenance and care of their computers.
Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes.
Nature or domain of execution
Desktop applications such as web browsers, Microsoft Office, LibreOffice, and WordPerfect, as well as smartphone and tablet applications (called "apps").
JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin.
Server software, including:
Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser.
Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function.
Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run).
Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it.
Programming tools
Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.
Topics
Architecture
People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software: The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software.
Application software: Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software: End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers.
Execution
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
Quality and reliability
Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs", which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function together much more easily.
License
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types:
freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software). As the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality.
software available for a fee, which can only be legally used on purchase of a license.
Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Patents
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.
Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation
Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality.
Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them.
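A loosely analogous sketch in Python: the code below relies on the standard tkinter GUI API to create and close a window instead of implementing window management from scratch, roughly mirroring the Form1.Show()/Form1.Close() calls mentioned above. It assumes a desktop environment with a display is available.

```python
import tkinter as tk

# Create a window through the toolkit's API rather than writing the windowing
# code ourselves; roughly analogous to Form1.Show()/Form1.Close() in the text.
root = tk.Tk()
root.title("API demo")
root.after(1500, root.destroy)   # schedule the window to close after 1.5 s
root.mainloop()                  # hand control to the GUI event loop
```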
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
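For example, a compact (non-in-place) quicksort can be sketched in a few lines of Python; it is written for clarity rather than speed.

```python
def quicksort(items):
    """Simple, non-in-place quicksort; illustrative rather than optimal."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))   # -> [1, 2, 5, 5, 6, 9]
```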
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist, such as "coder" and "hacker", although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.
See also
Computer program
Independent software vendor
Open-source software
Outline of software
Software asset management
Software release life cycle
References
Sources
External links
Software at Encyclopædia Britannica
Computer programming

Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
History
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage had already written his first program for the Analytical Engine in 1837.
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine language
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
Compiler languages
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
Source code entry
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming
Quality requirements
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:
Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
Readability of source code
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:
Different indent styles (whitespace)
Comments
Decomposition
Naming conventions for objects (such as variables, classes, functions, procedures, etc.)
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.
Algorithmic complexity
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
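For example, the sketch below contrasts an O(n) linear search with an O(log n) binary search over the same sorted data; for large inputs the difference in the number of comparisons is dramatic.

```python
import bisect

def linear_search(items, target):
    """O(n): may examine every element once."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the remaining search range at each step."""
    index = bisect.bisect_left(sorted_items, target)
    if index < len(sorted_items) and sorted_items[index] == target:
        return index
    return -1

data = list(range(1_000_000))
print(linear_search(data, 765_432), binary_search(data, 765_432))
```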
Methodologies
The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
Measuring language usage
It is very difficult to determine what are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).
Debugging
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash when parsing some large source file, a simplification of the test case that results in only a few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for bugs to appear. Scripting and breakpointing is also part of this process.
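A toy sketch of this divide-and-conquer reduction is shown below; triggers_bug is a hypothetical stand-in for "the program still fails on this input", and the loop repeatedly keeps whichever half of the input still reproduces the failure.

```python
def triggers_bug(case: str) -> bool:
    """Hypothetical oracle: pretend the program crashes whenever the input
    contains both '<' and '>' characters."""
    return "<" in case and ">" in case

def reduce_failing_input(case: str) -> str:
    """Greedy halving of a failing input while it still triggers the bug."""
    shrunk = True
    while shrunk and len(case) > 1:
        shrunk = False
        half = len(case) // 2
        for part in (case[:half], case[half:]):
            if triggers_bug(part):        # the smaller piece still fails
                case, shrunk = part, True
                break
    return case

print(reduce_failing_input("lots of padding<>more padding"))   # -> "<>"
```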
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
Programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
The details look different in different languages, but a few basic instructions appear in just about every language:
Input: Gather data from the keyboard, a file, or some other device.
Output: Display data on the screen or send data to a file or other device.
Arithmetic: Perform basic arithmetical operations like addition and multiplication.
Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
Repetition: Perform some action repeatedly, usually with some variation.
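A few lines of Python are enough to exercise all five of these basic instructions; the program itself is only an illustrative toy.

```python
# Input: gather data (here from the keyboard).
raw = input("Enter a whole number: ")
number = int(raw)

# Arithmetic and repetition: add up 1 + 2 + ... + number.
total = 0
for i in range(1, number + 1):
    total = total + i

# Conditional execution: check a condition and choose what to do.
if number < 0:
    message = "negative input, sum left at 0"
else:
    message = f"sum of 1..{number} is {total}"

# Output: display data on the screen.
print(message)
```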
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
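For instance, Python's ctypes module can call a function that was written in C and compiled into the shared C math library, once the argument-passing conventions are declared. The sketch below assumes a Unix-like system on which the "m" library can be located.

```python
import ctypes
import ctypes.util

# Locate and load the shared C math library (assumes a Unix-like system).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the run-time conventions: sqrt takes and returns a C double.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))   # 1.4142135623730951 -- computed by C code, called from Python
```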
Programmers
Computer programmers are those who write computer software. Their jobs usually involve:
Prototyping
Coding
Debugging
Documentation
Integration
Maintenance
Requirements analysis
Software architecture
Software testing
Specification
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.
See also
ACCU
Association for Computing Machinery
Computer networking
Hello world program
Institution of Analysts and Programmers
National Coding Week
Object hierarchy
Programming best practices
System programming
Computer programming in the punched card era
The Art of Computer Programming
Women in computing
Timeline of women in computing
References
Sources
Further reading
A.K. Hartmann, Practical Guide to Computer Simulations, Singapore: World Scientific (2009)
A. Hunt, D. Thomas, and W. Cunningham, The Pragmatic Programmer. From Journeyman to Master, Amsterdam: Addison-Wesley Longman (1999)
Brian W. Kernighan, The Practice of Programming, Pearson (1999)
Weinberg, Gerald M., The Psychology of Computer Programming, New York: Van Nostrand Reinhold (1971)
Edsger W. Dijkstra, A Discipline of Programming, Prentice-Hall (1976)
O.-J. Dahl, E.W.Dijkstra, C.A.R. Hoare, Structured Programming, Academic Press (1972)
David Gries, The Science of Programming, Springer-Verlag (1981)
External links
Programming
Crouching Tiger, Hidden Dragon

Crouching Tiger, Hidden Dragon is a 2000 Mandarin-language wuxia martial arts adventure film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy.
A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States.
The film received universal acclaim from critics, praised for its story, direction, cinematography, and martial arts sequences. Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography, receiving the most nominations ever for a non-English-language film at the time, until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, each of them for Best Foreign Film. In the years since its release, Crouching Tiger has often been cited as one of the finest wuxia films ever made and is widely regarded as one of the greatest films of the 21st century.
Plot
In Qing dynasty China, Li Mu Bai is a renowned Wudang swordsman, and his friend Yu Shu Lien, a female warrior, heads a private security company. Shu Lien and Mu Bai have long had feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend, Meng Sizhao before his death, Shu Lien and Mu Bai feel bound by loyalty to Meng Sizhao and have not revealed their feelings to each other. Mu Bai, choosing to retire from the life of a swordsman, asks Shu Lien to give his fabled 400-year-old sword "Green Destiny" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang secrets. While at Sir Te's place, Shu Lien meets Yu Jiaolong, or Jen, who is the daughter of the rich and powerful Governor Yu and is about to get married.
One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox. She reveals that she killed Mu Bai's teacher because he would sleep with her but refused to take a woman as a disciple, and she felt it poetic justice for him to die at a woman's hand. Just as Mu Bai is about to kill her, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual. Fox is illiterate and could only follow the diagrams, whereas Jen's ability to read the manual allowed her to surpass her teacher in martial arts.
At night, a bandit named Lo breaks into Jen's bedroom and asks her to leave with him. In the past, when Governor Yu and his family were traveling in the western deserts of Xinjiang, Lo and his bandits raided Jen's caravan and Lo stole her comb. She pursued him to his desert cave to retrieve her comb. However, the pair soon fell in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a mountain to make his wishes come true. Because the man's heart was pure, his wish was granted and he was unharmed, but flew away never to be seen again. Lo has come now to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage can be consummated. Disguised in men's clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious.
Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny and is able to destroy each weapon that Shu Lien wields, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest, where he offers to take her as his student. Jen agrees if he can take Green Destiny from her in three moves. Mu Bai is able to take the sword in only one move, but Jen reneges on her promise, and Mu Bai throws the sword over a waterfall. Jen dives after the sword and is rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern, where Mu Bai and Shu Lien discover her. Fox suddenly attacks them with poisoned needles. Mu Bai mortally wounds Fox, only to realize that one of the needles has hit him in the neck. Before dying, Fox confesses that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's fighting techniques from her.
Contrite, Jen leaves to prepare an antidote for the poisoned dart. With his last breath, Mu Bai finally confesses his love for Shu Lien. He dies in her arms as Jen returns. Shu Lien forgives Jen, telling her to go to Lo and always be true to herself. The Green Destiny is returned to Sir Te. Jen goes to Mount Wudang and spends the night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. Lo wishes for them to be together again, back in the desert. Jen leaps from the bridge, falling into the mists below.
Cast
Credits from British Film Institute:
Chow Yun-fat as Li Mu Bai
Michelle Yeoh as Yu Shu Lien
Zhang Ziyi as Jen Yu
Chang Chen as Lo "Dark Cloud" Xiao Hou
Lang Sihung as Sir Te
Cheng Pei-pei as Jade Fox
Li Fazeng as Governor Yu
Wang Deming as Inspector Tsai
Li Li as Tsai May
Hai Yan as Madam Yu
Gao Xi'an as Bo
Huang Suying as Aunt Wu
Zhang Jinting as De Lu
Du Zhenxi as Uncle Jiao
Li Kai as Gou Jun Pei
Feng Jianhua as Shining Phoenix Mountain Gou
Ma Zhongxuan as Iron Arm Mi
Li Bao-Cheng as Flying Machete Chang
Yang Yongde as Monk Jing
Themes and interpretations
Title
The title "Crouching Tiger, Hidden Dragon" is a literal translation of the Chinese idiom "臥虎藏龍" which describes a place or situation that is full of unnoticed masters. It is from a poem of the ancient Chinese poet Yu Xin (513–581) that reads "暗石疑藏虎,盤根似臥龍", which means "behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon". The title also has several other layers of meaning. On one level, the Chinese characters in the title connect to the narrative that the last character in Xiaohu and Jiaolong's names mean "tiger" and "dragon", respectively. On another level, the Chinese idiomatic phrase is an expression referring to the undercurrents of emotion, passion, and secret desire that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline.
Gender roles
The success of the Disney animated feature Mulan (1998) popularized the image of the Chinese woman warrior in the west. The storyline of Crouching Tiger, Hidden Dragon is mostly driven by the three female characters. In particular, Jen is driven by her desire to be free from the gender role imposed on her, while Shu Lien, herself oppressed by the gender role, tries to lead Jen back into the role deemed appropriate for her. Some prominent martial arts disciplines are traditionally held to have been originated by women, e.g., Wing Chun. The film's title refers to masters one does not notice, which necessarily includes mostly women, and therefore suggests the advantage of a female bodyguard.
Poison
Poison is also a significant theme in the film. The Chinese word "毒" (dú) means not only physical poison but also cruelty and sinfulness. In the world of martial arts, the use of poison is considered an act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a "poison dragon".
China of the imagination
The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that Western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. There are some scenes showing uncommon artistry for the typical martial arts film such as an airborne battle among wispy bamboo plants.
Production
The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 in Qingdao Xinmin News. The novel is the fourth in a sequence of five. Under the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, the parties agreed to invest US$6 million in filming, with the stipulation that more than six times that amount had to be recouped before the two parties would begin to receive dividends.
Casting
Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down.
Filming
Although its Academy Award for Best Foreign Language Film was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-production Corporation, the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine, the Hong Kong company Edko Films, and the Taiwanese Zoom Hunt Productions, as well as the unspecified United China Vision and Asia Union Film & Entertainment, created solely for this film.
The film was made in Beijing, with location shooting in Urumchi, the western provinces, the Taklamakan Plateau, Shanghai, and Anji in China. The first phase of shooting was in the Gobi Desert, where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves, and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces," he added. "That's really them in the trees."
Another compounding issue was the difference between accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Standard Chinese lines phonetically; Chang Chen is from Taiwan and he speaks Standard Chinese in a Taiwanese accent. Only Zhang Ziyi spoke with the native Mandarin accent that Ang Lee wanted. Chow Yun-fat said that on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life."
The film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films. As a result, high-quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences.
Soundtrack
The score was composed by Tan Dun in 1999. It was played for the movie by the Shanghai Symphony Orchestra, the Shanghai National Orchestra and the Shanghai Percussion Ensemble. It features solo passages for cello played by Yo-Yo Ma. The final track ("A Love Before Time") features Coco Lee, who later sang it at the Academy Awards. The composer Chen Yuanlin also collaborated on the project. The music for the entire film was produced in two weeks. The following year (2000), Tan adapted his film score as a cello concerto called simply "Crouching Tiger."
Release
Marketing
The film was adapted into a video game and a series of comics, and it led to the original novel being adapted into a 34-episode Taiwanese television series. The latter was released in 2004 and was retitled New Crouching Tiger, Hidden Dragon for its North American release.
Home media
The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. It was also released on UMD on 26 June 2005. In the United Kingdom, it was the year's most-watched foreign-language film on television in 2004.
Restoration
The film was re-released in a 4K restoration by Sony Pictures Classics in 2023.
Reception
Box office
The film premiered in cinemas on 8 December 2000, in limited release within the United States. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the U.S., grossing $8,647,295 in business, ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place, screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon placed a distant 50th with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film ranked 19th in worldwide box-office performance.
Critical response
Crouching Tiger, Hidden Dragon was widely acclaimed in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 98% based on 168 reviews, with an average rating of 8.6/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim".
Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin Chinese as a mother tongue. All four main actors spoke Standard Chinese with vastly different accents: Chow speaks with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a 28 December 2000, interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand," Yeoh responded: "Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird."
The film led to a boost in the popularity of Chinese wuxia films in the Western world, where they were previously little known, and led to films such as Hero and House of Flying Daggers, both directed by Zhang Yimou, being marketed towards Western audiences. The film also provided the breakthrough role of Zhang Ziyi's career.
Film Journal noted that Crouching Tiger, Hidden Dragon "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market.
Accolades
Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when the Academy Award nominations were announced in 2001. The film had been screened out of competition at the 2000 Cannes Film Festival. It received ten Academy Award nominations, then the highest number ever for a non-English-language film, a record later tied by Roma (2018).
The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time, and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010.
In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years.
In 2016, it was voted the 35th-best film of the 21st century in a poll of 177 film critics from around the world conducted by the BBC.
The film was included in BBC's 2018 list of The 100 greatest foreign language films ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its 100 best films of the 21st century list.
Sequel
A sequel to the film, Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Wo-ping, who was the action choreographer for the first film, and was a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original, the sequel was filmed in English for international release and dubbed into Chinese for Chinese releases.
Sword of Destiny is based on Iron Knight, Silver Vase, the next (and last) novel in the Crane–Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in Sword of Destiny but refused, stating that she would only appear in a sequel if Ang Lee were directing it.
In the West, the sequel was for the most part not shown in theaters and was instead distributed through the streaming service Netflix.
Posterity
MTV News related the theme of Janet Jackson's song "China Love" to the film: Jackson sings of an emperor's daughter in love with a warrior, unable to sustain the relationship when forced to marry into royalty.
The names of the pterosaur genus Kryptodrakon and the ceratopsian genus Yinlong (both meaning "hidden dragon" in Greek and Chinese respectively) allude to the film.
The character of Lo, or "Dark Cloud" the desert bandit, influenced the development of the protagonist of the Prince of Persia series of video games.
The video game Def Jam Fight for NY: The Takeover includes two hybrid fighting styles that pay homage to the film: Crouching Tiger (Martial Arts + Streetfighting + Submissions) and Hidden Dragon (Martial Arts + Streetfighting + Kickboxing).
See also
Anji County
Clear waters and green mountains
References
Further reading
External links
2000 films
2000 fantasy films
2000 martial arts films
2000s adventure films
American martial arts films
Martial arts fantasy films
BAFTA winners (films)
Best Film HKFA
Best Foreign Language Film Academy Award winners
Best Foreign Language Film BAFTA Award winners
Best Foreign Language Film Golden Globe winners
Chinese martial arts films
Films based on Chinese novels
Films directed by Ang Lee
Films scored by Tan Dun
Films set in 18th-century Qing dynasty
Films set in Beijing
Films set in the 1770s
Films that won the Best Original Score Academy Award
Films whose art director won the Best Art Direction Academy Award
Films whose cinematographer won the Best Cinematography Academy Award
Films whose director won the Best Direction BAFTA Award
Films whose director won the Best Director Golden Globe
Films with screenplays by James Schamus
Georges Delerue Award winners
Hong Kong martial arts films
Hugo Award for Best Dramatic Presentation winning works
Independent Spirit Award for Best Film winners
Toronto International Film Festival People's Choice Award winners
Magic realism films
2000s Mandarin-language films
Nebula Award for Best Script-winning works
Sony Pictures Classics films
Taiwanese martial arts films
Wuxia films
2000s American films
2000s Chinese films
2000s Hong Kong films
Chinese-language American films |
Charlemagne
Charlemagne or Charles the Great (Frankish: Karl; 2 April 747 – 28 January 814), a member of the Carolingian dynasty, was King of the Franks from 768, King of the Lombards from 774, and was crowned as the Emperor of the Romans by Pope Leo III in 800. Charlemagne succeeded in uniting the majority of western and central Europe and was the first recognized emperor to rule from western Europe after the fall of the Western Roman Empire approximately three centuries earlier. The expanded Frankish state that Charlemagne founded was the Carolingian Empire, which is considered the first phase in the history of the Holy Roman Empire. He was canonized by Antipope Paschal III—an act later treated as invalid—and he is now regarded by some as beatified (which is a step on the path to sainthood) in the Catholic Church.
Charlemagne was the eldest son of Pepin the Short and Bertrada of Laon. He was born before their canonical marriage. He became king of the Franks in 768 following his father's death, and was initially co-ruler with his brother Carloman I until the latter's death in 771. As sole ruler, he continued his father's policy towards the protection of the papacy and became its sole defender, removing the Lombards from power in northern Italy and leading an incursion into Muslim Spain. He also campaigned against the Saxons to his east, Christianizing them upon penalty of death, which led to events such as the Massacre of Verden. He reached the height of his power in 800 when he was crowned Emperor of the Romans by Pope Leo III on Christmas Day at Old St. Peter's Basilica in Rome.
Charlemagne has been called the "Father of Europe" (Pater Europae), as he united most of Western Europe for the first time since the classical era of the Roman Empire, as well as uniting parts of Europe that had never been under Frankish or Roman rule. His reign spurred the Carolingian Renaissance, a period of energetic cultural and intellectual activity within the Western Church. The Eastern Orthodox Church viewed Charlemagne less favourably, due to his support of the filioque and the Pope's preference for him as emperor over the Byzantine Empire's first female monarch, Irene of Athens. These and other disputes led to the eventual split of Rome and Constantinople in the Great Schism of 1054.
Charlemagne died in 814 after contracting an infectious lung disease. He was laid to rest in the Aachen Cathedral, in his imperial capital city of Aachen. He married at least four times, and three of his legitimate sons lived to adulthood. Only the youngest of them, Louis the Pious, survived to succeed him. Charlemagne is a direct ancestor of many of Europe's royal houses, including the Capetian dynasty, the Ottonian dynasty, the House of Luxembourg, the House of Ivrea and the House of Habsburg.
Names and nicknames
The name Charlemagne, by which the emperor is normally known in English, comes from the French Charles-le-magne, meaning "Charles the Great". In modern German, Karl der Große has the same meaning. His given name in his native Frankish dialect was Karl ("Charles"; Latin: Carolus; Old High German: Karlus; Gallo-Romance: Karlo). He was named after his grandfather, Charles Martel, a choice which intentionally marked him as Martel's true heir.
The nickname magnus (great) may have been associated with him already in his lifetime, but this is not certain. The contemporary Latin Royal Frankish Annals routinely call him Carolus magnus rex, "Charles the great king". As a nickname, it is only certainly attested in the works of the Poeta Saxo around 900 and it only became standard in all the lands of his former empire around 1000.
Charles' achievements gave a new meaning to his name. In many languages of Europe, the very word for "king" derives from his name. This development parallels that of the name of the Caesars in the original Roman Empire, which became kaiser and tsar (or czar), among others.
Political background
By the 6th century, the western Germanic tribe of the Franks had been Christianised, due in considerable measure to the Catholic conversion of Clovis I. Francia, ruled by the Merovingians, was the most powerful of the kingdoms that succeeded the Western Roman Empire. Following the Battle of Tertry, the Merovingians declined into powerlessness, for which they have been dubbed the rois fainéants ("do-nothing kings"). Almost all government powers were exercised by their chief officer, the mayor of the palace.
In 687, Pepin of Herstal, mayor of the palace of Austrasia, ended the strife between various kings and their mayors with his victory at Tertry. He became the sole governor of the entire Frankish kingdom. Pepin was the grandson of two important figures of the Austrasian Kingdom: Saint Arnulf of Metz and Pepin of Landen. Pepin of Herstal was eventually succeeded by his son Charles, later known as Charles Martel (Charles the Hammer).
After 737, Charles governed the Franks in lieu of a king and declined to call himself king. Charles was succeeded in 741 by his sons Carloman and Pepin the Short, the father of Charlemagne. In 743, the brothers placed Childeric III on the throne to curb separatism in the periphery. He was the last Merovingian king. Carloman resigned office in 746, preferring to enter the church as a monk. Pepin brought the question of the kingship before Pope Zachary, asking whether it was logical for a king to have no royal power. The pope handed down his decision in 749, decreeing that it was better for Pepin to be called king, as he had the powers of high office as Mayor, so as not to confuse the hierarchy. He, therefore, ordered him to become the true king.
In 750, Pepin was elected by an assembly of the Franks, anointed by the archbishop, and then raised to the office of king. The Pope branded Childeric III as "the false king" and ordered him into a monastery. The Merovingian dynasty was thereby replaced by the Carolingian dynasty, named after Charles Martel. In 753, Pope Stephen II fled from Italy to Francia, appealing to Pepin for assistance for the rights of St. Peter. He was supported in this appeal by Carloman, Charles' brother. In return, the pope could provide only legitimacy. He did this by again anointing and confirming Pepin, this time adding his young sons Carolus (Charlemagne) and Carloman to the royal patrimony. They thereby became heirs to the realm that already covered most of western Europe. In 754, Pepin accepted the Pope's invitation to visit Italy on behalf of St. Peter's rights, dealing successfully with the Lombards.
Under the Carolingians, the Frankish kingdom spread to encompass an area including most of Western Europe; the later east–west division of the kingdom formed the basis for modern France and Germany. Oman portrays the Treaty of Verdun (843) between the warring grandsons of Charlemagne as the foundation event of an independent France under its first king Charles the Bald; an independent Germany under its first king Louis the German; and an independent intermediate state stretching from the Low Countries along the borderlands to south of Rome under Lothair I, who retained the title of emperor and the capitals Aachen and Rome without the jurisdiction. The middle kingdom had broken up by 890 and was partly absorbed into the Western kingdom (later France) and the Eastern kingdom (Germany), with the rest developing into the smaller "buffer" states that exist between France and Germany to this day, namely the Benelux countries and Switzerland.
Rise to power
Early life
The most likely date of Charlemagne's birth is reconstructed from several sources. The date of 742—calculated from Einhard's date of death of January 814 at age 72—predates the marriage of his parents in 744. The year given in the Annales Petaviani, 747, would be more likely, except that it contradicts Einhard and a few other sources in making Charlemagne sixty-seven years old at his death. The month and day of 2 April are based on a calendar from Lorsch Abbey. Charlemagne claimed descent from the Roman emperor, Constantine I.
In 747, Easter fell on 2 April, a coincidence that likely would have been remarked upon by chroniclers. If Easter was being used as the beginning of the calendar year, then 2 April 747 could have been, by modern reckoning, April 748 (not on Easter). The date favoured by the preponderance of evidence is 2 April 742, based on Charlemagne's age at the time of his death. This date supports the view that Charlemagne was technically an illegitimate child, since he would have been born out of wedlock, although Einhard does not mention this: Pepin and Bertrada were bound by a private contract or Friedelehe at the time of his birth, but did not marry until 744.
Charlemagne's exact birthplace is unknown, although historians have suggested Aachen in modern-day Germany, and Liège (Herstal) in present-day Belgium as possible locations. Aachen and Liège are close to the region whence the Merovingian and Carolingian families originated. Other cities have been suggested, including Düren, Gauting, Mürlenbach, Quierzy, and Prüm. No definitive evidence resolves the question.
Ancestry
Charlemagne was the eldest child of Pepin the Short (714 – 24 September 768, reigned from 751) and his wife Bertrada of Laon (720 – 12 July 783), daughter of Caribert of Laon. Many historians consider Charlemagne (Charles) to have been illegitimate, although some state that this is arguable, because Pepin did not marry Bertrada until 744, which was after Charles' birth; this status did not exclude him from the succession.
Records name only Carloman, Gisela, and three short-lived children named Pepin, Chrothais and Adelais as his younger siblings.
Ambiguous high office
The most powerful officers of the Frankish people, the Mayor of the Palace (Maior Domus) and one or more kings (rex, reges), were appointed by the election of the people. Elections were not periodic, but were held as required to elect officers ad quos summa imperii pertinebat, "to whom the highest matters of state pertained". Evidently, interim decisions could be made by the Pope, which ultimately needed to be ratified by an assembly of the people that met annually.
Before he was elected king in 751, Pepin was initially a mayor, a high office he held "as though hereditary" (velut hereditario fungebatur). Einhard explains that "the honour" was usually "given by the people" to the distinguished, but Pepin the Great and his brother Carloman the Wise received it as though hereditary, as had their father, Charles Martel. There was, however, a certain ambiguity about quasi-inheritance. The office was treated as joint property: one Mayorship held by two brothers jointly. Each, however, had his own geographic jurisdiction. When Carloman decided to resign, becoming ultimately a Benedictine at Monte Cassino, the question of the disposition of his quasi-share was settled by the pope. He converted the mayorship into a kingship and awarded the joint property to Pepin, who gained the right to pass it on by inheritance.
This decision was not accepted by all family members. Carloman had consented to the temporary tenancy of his own share, which he intended to pass on to his son, Drogo, when the inheritance should be settled at someone's death. By the Pope's decision, in which Pepin had a hand, Drogo was to be disqualified as an heir in favour of his cousin Charles. He took up arms in opposition to the decision and was joined by Grifo, a half-brother of Pepin and Carloman, who had been given a share by Charles Martel, but was stripped of it and held under loose arrest by his half-brothers after an attempt to seize their shares by military action. Grifo perished in combat in the Battle of Saint-Jean-de-Maurienne while Drogo was hunted down and taken into custody.
According to the Life, Pepin died in Paris on 24 September 768, whereupon the kingship passed jointly to his sons, "with divine assent" (divino nutu). The Franks "in general assembly" (generali conventu) gave them both the rank of a king (reges) but "partitioned the whole body of the kingdom equally" (totum regni corpus ex aequo partirentur). The annals tell a slightly different version, with the king dying at St-Denis, near Paris. The two "lords" (domni) were "elevated to kingship" (elevati sunt in regnum), Charles on 9 October in Noyon, Carloman on an unspecified date in Soissons. If born in 742, Charles was 26 years old, but he had been campaigning at his father's right hand for several years, which may help to account for his military skill. Carloman was 17.
The language, in either case, suggests that there were not two inheritances, which would have created distinct kings ruling over distinct kingdoms, but a single joint inheritance and a joint kingship tenanted by two equal kings, Charles and his brother Carloman. As before, distinct jurisdictions were awarded. Charles received Pepin's original share as Mayor: the outer parts of the kingdom bordering on the sea, namely Neustria, western Aquitaine, and the northern parts of Austrasia; while Carloman was awarded his uncle's former share, the inner parts: southern Austrasia, Septimania, eastern Aquitaine, Burgundy, Provence, and Swabia, lands bordering Italy. The question of whether these jurisdictions were joint shares reverting to the other brother if one brother died or were inherited property passed on to the descendants of the brother who died was never definitely settled. It came up repeatedly over the succeeding decades until the grandsons of Charlemagne created distinct sovereign kingdoms.
Aquitainian rebellion
Formation of a new Aquitaine
In southern Gaul, Aquitaine had been Romanised and people spoke a Romance language. Similarly, Hispania had been populated by peoples who spoke various languages, including Celtic, but these had now been mostly replaced by Romance languages. Between Aquitaine and Hispania were the Euskaldunak, Latinised to Vascones, or Basques, whose country, Vasconia, extended, according to the distributions of place names attributable to the Basques, mainly in the western Pyrenees but also as far south as the upper river Ebro in Spain and as far north as the river Garonne in France. The French name Gascony derives from Vasconia. The Romans were never able to subjugate the whole of Vasconia. The border with Aquitaine was at Toulouse.
In about 660, the Duchy of Vasconia united with the Duchy of Aquitaine to form a single realm under Felix of Aquitaine, ruling from Toulouse. This was a joint kingship with a Basque Duke, Lupus I. Lupus is the Latin translation of Basque Otsoa, "wolf". At Felix's death in 670 the joint property of the kingship reverted entirely to Lupus. As the Basques had no law of joint inheritance but relied on primogeniture, Lupus in effect founded a hereditary dynasty of Basque rulers of an expanded Aquitaine.
Acquisition of Aquitaine by the Carolingians
The Latin chronicles of the end of Visigothic Hispania omit many details, leaving characters unidentified, gaps unfilled, and numerous contradictions unreconciled. Muslim sources, however, present a more coherent view, such as the Ta'rikh iftitah al-Andalus ("History of the Conquest of al-Andalus") by Ibn al-Qūṭiyya ("the son of the Gothic woman", referring to the granddaughter of Wittiza, the last Visigothic king of a united Hispania, who married a Moor). Ibn al-Qūṭiyya, who had another, much longer name, must have been relying to some degree on family oral tradition.
According to Ibn al-Qūṭiyya Wittiza, the last Visigothic king of a united Hispania, died before his three sons, Almund, Romulo, and Ardabast reached maturity. Their mother was queen regent at Toledo, but Roderic, army chief of staff, staged a rebellion, capturing Córdoba. He chose to impose a joint rule over distinct jurisdictions on the true heirs. Evidence of a division of some sort can be found in the distribution of coins imprinted with the name of each king and in the king lists. Wittiza was succeeded by Roderic, who reigned for seven and a half years, followed by Achila (Aquila), who reigned three and a half years. If the reigns of both terminated with the incursion of the Saracens, then Roderic appears to have reigned a few years before the majority of Achila. The latter's kingdom was securely placed to the northeast, while Roderic seems to have taken the rest, notably modern Portugal.
The Saracens crossed the mountains to claim Ardo's Septimania, only to encounter the Basque dynasty of Aquitaine, always the allies of the Goths. Odo the Great of Aquitaine was at first victorious at the Battle of Toulouse in 721. Saracen troops gradually massed in Septimania and, in 732, an army under Emir Abdul Rahman Al Ghafiqi advanced into Vasconia, and Odo was defeated at the Battle of the River Garonne. They took Bordeaux and were advancing towards Tours when Odo, powerless to stop them, appealed to his arch-enemy, Charles Martel, mayor of the Franks. In one of the first of the lightning marches for which the Carolingian kings became famous, Charles and his army appeared in the path of the Saracens between Tours and Poitiers, and in the Battle of Tours decisively defeated and killed al-Ghafiqi. The Moors returned twice more, each time suffering defeat at Charles' hands—at the River Berre near Narbonne in 737 and in the Dauphiné in 740. Odo's price for salvation from the Saracens was incorporation into the Frankish kingdom, a decision that was repugnant to him and also to his heirs.
Loss and recovery of Aquitaine
After the death of his father, Hunald I allied himself with free Lombardy. However, Odo had ambiguously left the kingdom jointly to his two sons, Hunald and Hatto. The latter, loyal to Francia, now went to war with his brother over full possession. Victorious, Hunald blinded and imprisoned his brother, only to be so stricken by conscience that he resigned and entered the church as a monk to do penance. The story is told in the Annales Mettenses priores. His son Waifer took an early inheritance, becoming duke of Aquitaine and ratifying the alliance with Lombardy. Waifer, deciding to honour it, repeated his father's decision, which he justified by arguing that any agreements with Charles Martel became invalid on Martel's death. Since Aquitaine was now Pepin's inheritance because of the earlier assistance given by Charles Martel, according to some, Pepin and his son, the young Charles, hunted down Waifer, who could only conduct a guerrilla war, and executed him.
Among the contingents of the Frankish army were Bavarians under Tassilo III, Duke of Bavaria, an Agilofing, the hereditary Bavarian ducal family. Grifo had installed himself as Duke of Bavaria, but Pepin replaced him with a member of the ducal family yet a child, Tassilo, whose protector he had become after the death of his father. The loyalty of the Agilolfings was perpetually in question, but Pepin exacted numerous oaths of loyalty from Tassilo. However, the latter had married Liutperga, a daughter of Desiderius, king of Lombardy. At a critical point in the campaign, Tassilo left the field with all his Bavarians. Out of reach of Pepin, he repudiated all loyalty to Francia. Pepin had no chance to respond as he grew ill and died within a few weeks after Waifer's execution.
The first event of the brothers' reign was the uprising of the Aquitainians and Gascons in 769, in that territory split between the two kings. One year earlier, Pepin had finally defeated Waifer, Duke of Aquitaine, after waging a destructive, ten-year war against Aquitaine. Now, Hunald II led the Aquitainians as far north as Angoulême. Charles met Carloman, but Carloman refused to participate and returned to Burgundy. Charles went to war, leading an army to Bordeaux, where he built a fortified camp on the mound at Fronsac. Hunald was forced to flee to the court of Duke Lupus II of Gascony. Lupus, fearing Charles, turned Hunald over in exchange for peace, and Hunald was put in a monastery. Gascon lords also surrendered, and Aquitaine and Gascony were finally fully subdued by the Franks.
Marriage to Desiderata
The brothers maintained lukewarm relations with the assistance of their mother Bertrada, but in 770 Charles signed a treaty with Duke Tassilo III of Bavaria and married a Lombard Princess (commonly known today as Desiderata), the daughter of King Desiderius, to surround Carloman with his own allies. Though Pope Stephen III first opposed the marriage with the Lombard princess, he found little to fear from a Frankish-Lombard alliance.
Less than a year after his marriage, Charlemagne repudiated Desiderata and married a 13-year-old Swabian named Hildegard. The repudiated Desiderata returned to her father's court at Pavia. Her father's wrath was now aroused, and he would have gladly allied with Carloman to defeat Charles. Before any open hostilities could be declared, however, Carloman died on 5 December 771, apparently of natural causes. Carloman's widow Gerberga fled to Desiderius' court with her sons for protection.
Wives, concubines, and children
Charlemagne had eighteen children with seven of his ten known wives or concubines. Nonetheless, he had only four legitimate grandsons, the four sons of his fourth son, Louis. In addition, he had a grandson (Bernard of Italy, the only son of his third son, Pepin of Italy), who was illegitimate but included in the line of inheritance. Among his descendants are several royal dynasties, including the Habsburg and Capetian dynasties. As a consequence, most if not all established European noble families ever since can genealogically trace some of their background to Charlemagne.
Children
During the first peace of any substantial length (780–782), Charles began to appoint his sons to positions of authority. In 781, during a visit to Rome, he made his two youngest sons kings, crowned by the Pope. The elder of these two, Carloman, was made the king of Italy, taking the Iron Crown that his father had first worn in 774, and in the same ceremony was renamed "Pepin" (not to be confused with Charlemagne's eldest, possibly illegitimate son, Pepin the Hunchback). The younger of the two, Louis, became King of Aquitaine. Charlemagne ordered Pepin and Louis to be raised in the customs of their kingdoms, and he gave their regents some control of their subkingdoms, but kept the real power, though he intended his sons to inherit their realms. He did not tolerate insubordination in his sons: in 792, he banished Pepin the Hunchback to Prüm Abbey because the young man had joined a rebellion against him.
Charles was determined to have his children educated, including his daughters, as his parents had instilled the importance of learning in him at an early age. His children were also taught skills in accord with their aristocratic status, which included training in riding and weaponry for his sons, and embroidery, spinning and weaving for his daughters.
The sons fought many wars on behalf of their father. Charles was mostly preoccupied with the Bretons, whose border he shared and who rebelled on at least two occasions and were easily put down. He also fought the Saxons on multiple occasions. In 805 and 806, he was sent into the Böhmerwald (modern Bohemia) to deal with the Slavs living there (Bohemian tribes, ancestors of the modern Czechs). He subjected them to Frankish authority and devastated the valley of the Elbe, forcing tribute from them. Pippin had to hold the Avar and Beneventan borders and fought the Slavs to his north. He was uniquely poised to fight the Byzantine Empire when that conflict arose after Charlemagne's imperial coronation and a Venetian rebellion. Finally, Louis was in charge of the Spanish March and fought the Duke of Benevento in southern Italy on at least one occasion. He took Barcelona in a great siege in 801.
Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). Charlemagne's opposition to his daughters' marriages may possibly have intended to prevent the creation of cadet branches of the family to challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle.
Italian campaigns
Conquest of the Lombard kingdom
At his succession in 772, Pope Adrian I demanded the return of certain cities in the former exarchate of Ravenna in accordance with a promise at the succession of Desiderius. Instead, Desiderius took over certain papal cities and invaded the Pentapolis, heading for Rome. Adrian sent ambassadors to Charlemagne in autumn requesting he enforce the policies of his father, Pepin. Desiderius sent his own ambassadors denying the pope's charges. The ambassadors met at Thionville, and Charlemagne upheld the pope's side. Charlemagne demanded what the pope had requested, but Desiderius swore never to comply. Charlemagne and his uncle Bernard crossed the Alps in 773 and chased the Lombards back to Pavia, which they then besieged. Charlemagne temporarily left the siege to deal with Adelchis, son of Desiderius, who was raising an army at Verona. The young prince was chased to the Adriatic littoral and fled to Constantinople to plead for assistance from Constantine V, who was waging war with Bulgaria.
The siege lasted until the spring of 774 when Charlemagne visited the pope in Rome. There he confirmed his father's grants of land, with some later chronicles falsely claiming that he also expanded them, granting Tuscany, Emilia, Venice and Corsica. The pope granted him the title patrician. He then returned to Pavia, where the Lombards were on the verge of surrendering. In return for their lives, the Lombards surrendered and opened the gates in early summer. Desiderius was sent to the abbey of Corbie, and his son Adelchis died in Constantinople, a patrician. Charles, unusually, had himself crowned with the Iron Crown and made the magnates of Lombardy pay homage to him at Pavia. Only Duke Arechis II of Benevento refused to submit and proclaimed independence. Charlemagne was then master of Italy as king of the Lombards. He left Italy with a garrison in Pavia and a few Frankish counts in place the same year.
Instability continued in Italy. In 776, Dukes Hrodgaud of Friuli and Hildeprand of Spoleto rebelled. Charlemagne rushed back from Saxony and defeated the Duke of Friuli in battle; the Duke was slain. The Duke of Spoleto signed a treaty. Their co-conspirator, Arechis, was not subdued, and Adelchis, their candidate in Byzantium, never left that city. Northern Italy was now faithfully his.
Southern Italy
In 787, Charlemagne directed his attention towards the Duchy of Benevento, where Arechis II was reigning independently with the self-given title of Princeps. Charlemagne's siege of Salerno forced Arechis into submission, and in return for peace, Arechis recognized Charlemagne's suzerainty and handed his son Grimoald III over as a hostage. After Arechis' death in 787, Grimoald was allowed to return to Benevento. In 788, the principality was invaded by Byzantine troops led by Adelchis, but his attempts were thwarted by Grimoald. The Franks assisted in the repulsion of Adelchis, but, in turn, attacked Benevento's territories several times, obtaining small gains, notably the annexation of Chieti to the duchy of Spoleto. Later, Grimoald tried to throw off Frankish suzerainty, but Charles' sons, Pepin of Italy and Charles the Younger, forced him to submit in 792.
Carolingian expansion to the south
Vasconia and the Pyrenees
The destructive war led by Pepin in Aquitaine, although brought to a satisfactory conclusion for the Franks, proved the Frankish power structure south of the Loire was feeble and unreliable. After the defeat and death of Waifer in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waifer. He took refuge with his ally Duke Lupus II of Gascony, but, probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks, to whom he pledged loyalty, which seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle.
Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as the Count of Bordeaux (778) and other counts of Frankish background in bordering areas (Toulouse, County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively to, or schemed, the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army at Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious) based on a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778).
The new political arrangement for Vasconia did not sit well with local lords. In 788, Adalric fought and captured Chorson, the Carolingian Count of Toulouse. Chorson was eventually released, but Charlemagne, enraged at the compromise, decided to depose him and appointed his trustee William of Gellone in his place. William, in turn, fought the Basques and defeated them after banishing Adalric (790).
From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. As of 794, a Frankish vassal, the Basque lord Belasko (al-Galashki, 'the Gaul') ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William Count of Toulouse and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, count overthrown in Aragon, Duke Seguin of Bordeaux deposed, uprising of the Basque lords, etc.) were to prove it ephemeral upon Charlemagne's death.
Roncesvalles campaign
According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muwallad) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power, and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain.
In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards, and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall to him. Indeed, Charlemagne faced the toughest battle of his career. The Muslims forced him to retreat, so he decided to go home, as he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as his army was crossing back through the Pass of Roncesvalles, one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of The Song of Roland (La Chanson de Roland), regarded as the first major work in the French language.
Contact with Muslims
The conquest of Italy brought Charlemagne in contact with Muslims who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Muslims in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Muslim pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock.
Wars with the Moors
In Hispania, the struggle against Islam continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania.
In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They probably took Tarragona and forced the submission of Tortosa in 809. The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813.
Eastern campaigns
Saxon Wars
Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite scara bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxonia and proceeded to convert it to Christianity.
The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and farthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia.
In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended.
Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians.
In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782.
He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frank. The laws were draconian on religious issues; for example, the Capitulatio de partibus Saxoniae prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the Lex Frisonum, as he did for most subject peoples.
Thereafter, the Saxons maintained the peace for seven years, but in 792 Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. The last insurrection occurred in 804, more than thirty years after Charlemagne's first campaign against them, but also failed.
Submission of Bavaria
By 774, Charlemagne had invaded the Kingdom of Lombardy, and he later annexed the Lombardian territories and assumed its crown, placing the Papal States under Frankish protection. The Duchy of Spoleto south of Rome was acquired in 774, while in the central western parts of Europe, the Duchy of Bavaria was absorbed and the Bavarian policy continued of establishing tributary marches, (borders protected in return for tribute or taxes) among the Slavic Sorbs and Czechs. The remaining power confronting the Franks in the east were the Avars. However, Charlemagne acquired other Slavic areas, including Bohemia, Moravia, Austria and Croatia.
In 789, Charlemagne turned to Bavaria. He claimed that Tassilo III, Duke of Bavaria was an unfit ruler, due to his oath-breaking. The charges were exaggerated, but Tassilo was deposed anyway and put in the monastery of Jumièges. In 794, Tassilo was made to renounce any claim to Bavaria for himself and his family (the Agilolfings) at the synod of Frankfurt; he formally handed over to the king all of the rights he had held. Bavaria was subdivided into Frankish counties, as had been done with Saxony.
Avar campaigns
In 788, the Avars, an Asian nomadic group that had settled down in what is today Hungary (Einhard called them Huns), invaded Friuli and Bavaria. Charlemagne was preoccupied with other matters until 790, when he marched down the Danube and ravaged Avar territory as far as Győr. A Lombard army under Pippin then marched into the Drava valley and ravaged Pannonia. The campaigns ended when the Saxons revolted again in 792.
For the next two years, Charlemagne was occupied, along with the Slavs, against the Saxons. Pippin and Duke Eric of Friuli continued, however, to assault the Avars' ring-shaped strongholds. The great Ring of the Avars, their capital fortress, was taken twice. The booty was sent to Charlemagne at his capital, Aachen, and redistributed to his followers and to foreign rulers, including King Offa of Mercia. Soon the Avar tuduns had lost the will to fight and travelled to Aachen to become vassals to Charlemagne and to become Christians. Charlemagne accepted their surrender and sent one native chief, baptised Abraham, back to Avaria with the ancient title of khagan. Abraham kept his people in line, but in 800, the Bulgarians under Khan Krum attacked the remains of the Avar state.
In 803, Charlemagne sent a Bavarian army into Pannonia, defeating and bringing an end to the Avar confederation.
In November of the same year, Charlemagne went to Regensburg where the Avar leaders acknowledged him as their ruler. In 805, the Avar khagan, who had already been baptised, went to Aachen to ask permission to settle with his people south-eastward from Vienna. The Transdanubian territories became integral parts of the Frankish realm, an arrangement that was ended by the arrival of the Magyars in 899–900.
Northeast Slav expeditions
In 789, in recognition of his new pagan neighbours, the Slavs, Charlemagne marched an Austrasian-Saxon army across the Elbe into Obotrite territory. The Slavs ultimately submitted, led by their leader Witzin. Charlemagne then accepted the surrender of the Veleti under Dragovit and demanded many hostages. He also demanded permission to send missionaries into this pagan region unmolested. The army marched to the Baltic before turning around and marching to the Rhine, winning much booty with no harassment. The tributary Slavs became loyal allies. In 795, when the Saxons broke the peace, the Abotrites and Veleti rebelled with their new ruler against the Saxons. Witzin died in battle and Charlemagne avenged him by harrying the Eastphalians on the Elbe. Thrasuco, his successor, led his men to conquest over the Nordalbingians and handed their leaders over to Charlemagne, who honoured him. The Abotrites remained loyal until Charles' death and fought later against the Danes.
Southeast Slav expeditions
When Charlemagne incorporated much of Central Europe, he brought the Frankish state face to face with the Avars and Slavs in the southeast. The Franks' southeasternmost neighbours were the Croats, who had settled in Lower Pannonia and the Duchy of Croatia. While fighting the Avars, the Franks had called for Croat support, and in 796 Charlemagne won a major victory over the Avars. Duke Vojnomir of Lower Pannonia aided Charlemagne, and the Franks made themselves overlords over the Croats of northern Dalmatia, Slavonia and Pannonia.
The Frankish commander Eric of Friuli wanted to extend his dominion by conquering the Littoral Croat Duchy. At that time, Dalmatian Croatia was ruled by Duke Višeslav of Croatia. In the Battle of Trsat, the forces of Eric fled their positions and were routed by the forces of Višeslav. Eric was among those killed, which was a great blow for the Carolingian Empire.
Charlemagne also directed his attention to the Slavs to the west of the Avar khaganate: the Carantanians and Carniolans. These people were subdued by the Lombards and Bavarii and made tributaries, but were never fully incorporated into the Frankish state.
Imperium
Coronation
In 799, Pope Leo III had been assaulted by some of the Romans, who tried to pull out his eyes and tear out his tongue. Leo escaped and fled to Charlemagne at Paderborn. Charlemagne, advised by the scholar Alcuin, travelled to Rome in November 800 and held a synod. On 23 December, Leo swore an oath of innocence to Charlemagne. His position having thereby been weakened, the Pope sought to restore his status. Two days later, at Mass on Christmas Day (25 December), when Charlemagne knelt at the altar to pray, the Pope crowned him Imperator Romanorum ("Emperor of the Romans") in Saint Peter's Basilica. In so doing, the Pope rejected the legitimacy of Empress Irene of Constantinople.
Charlemagne's coronation as Emperor, though intended to represent the continuation of the unbroken line of Emperors from Augustus to Constantine VI, had the effect of setting up two separate (and often opposing) Empires and two separate claims to imperial authority. It led to war in 802, and for centuries to come, the Emperors of both West and East would make competing claims of sovereignty over the whole.
Einhard says that Charlemagne was ignorant of the Pope's intent and did not want any such coronation.
A number of modern scholars, however, suggest that Charlemagne was indeed aware of the coronation; certainly, he cannot have missed the bejewelled crown waiting on the altar when he came to pray—something even contemporary sources support.
Debate
Historians have debated for centuries whether Charlemagne was aware before the coronation of the Pope's intention to crown him Emperor (Charlemagne declared that he would not have entered Saint Peter's had he known, according to chapter twenty-eight of Einhard's Vita Karoli Magni), but that debate obscured the more significant question of why the Pope granted the title and why Charlemagne accepted it.
Collins points out "[t]hat the motivation behind the acceptance of the imperial title was a romantic and antiquarian interest in reviving the Roman Empire is highly unlikely." For one thing, such romance would not have appealed either to Franks or Roman Catholics at the turn of the ninth century, both of whom viewed the Classical heritage of the Roman Empire with distrust. The Franks took pride in having "fought against and thrown from their shoulders the heavy yoke of the Romans" and "from the knowledge gained in baptism, clothed in gold and precious stones the bodies of the holy martyrs whom the Romans had killed by fire, by the sword and by wild animals", as Pepin III described it in a law of 763 or 764.
Furthermore, the new title—carrying with it the risk that the new emperor would "make drastic changes to the traditional styles and procedures of government" or "concentrate his attentions on Italy or on Mediterranean concerns more generally"—risked alienating the Frankish leadership.
For both the Pope and Charlemagne, the Roman Empire remained a significant power in European politics at this time. The Byzantine Empire, based in Constantinople, continued to hold a substantial portion of Italy, with borders not far south of Rome. Charles' sitting in judgment of the Pope could be seen as usurping the prerogatives of the Emperor in Constantinople.
For the Pope, then, there was "no living Emperor at that time" though Henri Pirenne disputes this saying that the coronation "was not in any sense explained by the fact that at this moment a woman was reigning in Constantinople". Nonetheless, the Pope took the extraordinary step of creating one. The papacy had since 727 been in conflict with Irene's predecessors in Constantinople over a number of issues, chiefly the continued Byzantine adherence to the doctrine of iconoclasm, the destruction of Christian images; while from 750, the secular power of the Byzantine Empire in central Italy had been nullified.
By bestowing the Imperial crown upon Charlemagne, the Pope arrogated to himself "the right to appoint ... the Emperor of the Romans, ... establishing the imperial crown as his own personal gift but simultaneously granting himself implicit superiority over the Emperor whom he had created." And "because the Byzantines had proved so unsatisfactory from every point of view—political, military and doctrinal—he would select a westerner: the one man who by his wisdom and statesmanship and the vastness of his dominions ... stood out head and shoulders above his contemporaries."
With Charlemagne's coronation, therefore, "the Roman Empire remained, so far as either of them [Charlemagne and Leo] were concerned, one and indivisible, with Charles as its Emperor", though there can have been "little doubt that the coronation, with all that it implied, would be furiously contested in Constantinople".
Alcuin writes hopefully in his letters of an Imperium Christianum ("Christian Empire"), wherein, "just as the inhabitants of the [Roman Empire] had been united by a common Roman citizenship", presumably this new empire would be united by a common Christian faith. This is the view of Pirenne when he says "Charles was the Emperor of the ecclesia as the Pope conceived it, of the Roman Church, regarded as the universal Church". The Imperium Christianum was further supported at a number of synods all across Europe by Paulinus of Aquileia.
What is known, from the Byzantine chronicler Theophanes, is that Charlemagne's reaction to his coronation was to take the initial steps towards securing the Constantinopolitan throne by sending envoys of marriage to Irene, and that Irene reacted somewhat favourably to them.
Distinctions between the universalist and localist conceptions of the empire remain controversial among historians. According to the former, the empire was a universal monarchy, a "commonwealth of the whole world, whose sublime unity transcended every minor distinction"; and the emperor "was entitled to the obedience of Christendom". According to the latter, the emperor had no ambition for universal dominion; his realm was limited in the same way as that of every other ruler, and when he made more far-reaching claims his object was normally to ward off the attacks either of the Pope or of the Byzantine emperor. According to this view, also, the origin of the empire is to be explained by specific local circumstances rather than by overarching theories.
According to Ohnsorge, for a long time, it had been the custom of Byzantium to designate the German princes as spiritual "sons" of the Romans. What might have been acceptable in the fifth century had become provoking and insulting to the Franks in the eighth century. Charles came to believe that the Roman emperor, who claimed to head the world hierarchy of states, was, in reality, no greater than Charles himself, a king as other kings, since beginning in 629 he had entitled himself "Basileus" (translated literally as "king"). Ohnsorge finds it significant that the chief wax seal of Charles, which bore only the inscription: "Christe, protege Carolum regem Francorum" [Christ, protect Charles, king of the Franks], was used from 772 to 813, even during the imperial period and was not replaced by a special imperial seal; indicating that Charles felt himself to be just the king of the Franks. Finally, Ohnsorge points out that in the spring of 813 at Aachen, Charles crowned his only surviving son, Louis, as the emperor without recourse to Rome with only the acclamation of his Franks. The form in which this acclamation was offered was Frankish-Christian rather than Roman. This implies both independence from Rome and a Frankish (non-Roman) understanding of empire.
Mayr-Harting argues that the Imperial title was Charlemagne's face-saving offer to incorporate the recently conquered Saxons. Since the Saxons did not have an institution of kingship for their own ethnicity, claiming the right to rule them as King of the Saxons was not possible. Hence, it is argued, Charlemagne used the supra-ethnic Imperial title to incorporate the Saxons, which helped to cement the diverse peoples under his rule.
Imperial title
Charlemagne used these circumstances to claim that he was the "renewer of the Roman Empire", which had declined under the Byzantines. In his official charters, Charles preferred the style Karolus serenissimus Augustus a Deo coronatus magnus pacificus imperator Romanum gubernans imperium ("Charles, most serene Augustus crowned by God, the great, peaceful emperor ruling the Roman empire") to the more direct Imperator Romanorum ("Emperor of the Romans").
The title of Emperor remained in the Carolingian family for years to come, but divisions of territory and in-fighting over supremacy of the Frankish state weakened its significance. The papacy itself never forgot the title nor abandoned the right to bestow it. When the family of Charles ceased to produce worthy heirs, the Pope gladly crowned whichever Italian magnate could best protect him from his local enemies. The empire would remain in continuous existence for over a millennium, as the Holy Roman Empire, a true imperial successor to Charles.
Imperial diplomacy
The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. And while the Pope supported the reintroduction of the veneration of icons, he distanced himself politically from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests.
The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the Ducatus Neapolitanus). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The Pax Nicephori ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810 when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as "Emperor of the Romans".
Danish attacks
After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. The pagan Danes, "a race almost unknown to his ancestors, but destined to be only too well known to his sons" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours.
In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a long earthwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites.
Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811.
Death
In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January and as Einhard tells it:
He was buried that same day, in Aachen Cathedral. The earliest surviving planctus, the Planctus de obitu Karoli, was composed by a monk of Bobbio, which he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I opened the tomb again and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215 Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein.
Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen. An anonymous monk of Bobbio lamented:
Louis succeeded him as Charles had intended. Charlemagne had left a testament in 811 allocating his assets, but it was not updated prior to his death. He left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France.
Administration
Organisation
The Carolingian king exercised the bannum, the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the "Carolingian Renaissance".
Military
Charlemagne's success rested primarily on novel siege technologies and excellent logistics, rather than on the long-claimed "cavalry revolution" led by Charles Martel in the 730s; the stirrup, which made the "shock cavalry" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century.
Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire.
Economic and monetary reforms
Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, Charlemagne abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pippin's system for pragmatic reasons, notably a shortage of the metal.
The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce.
Charlemagne established a new standard, the livre carolinienne (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm.
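Under this system 1 livre = 20 sous and 1 sou = 12 deniers (240 deniers to the livre); a sum of 500 deniers, for example, would be reckoned as 2 livres, 1 sou and 8 deniers (2 × 240 + 1 × 12 + 8 = 500).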
Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded.
Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the continued high-quality English coin until about 1100.
Jews in Charlemagne's realm
Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. He invited Italian Jews to immigrate, as royal clients independent of the feudal landowners, and form trading communities in the agricultural regions of Provence and the Rhineland. Their trading activities augmented the otherwise almost exclusively agricultural economies of these regions. His personal physician was Jewish, and he employed a Jew named Isaac as his personal representative to the Muslim caliphate of Baghdad.
Education reforms
Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia.
Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith.
Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars as well as to teach grammar and music. Due to the increased interest in intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, scholars were producing more secular books on many subjects, including history, poetry, art, music, law, theology, etc. Due to the increased number of titles, private libraries flourished. These were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court, a library was founded and a number of copies of books were produced, to be distributed by Charlemagne. Book production was completed slowly by hand and took place mainly in large monastic libraries. Books were so in demand during Charlemagne's time that these libraries lent out some books, but only if that borrower offered valuable collateral in return.
Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still.
The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, Lombard; Italians Peter of Pisa and Paulinus of Aquileia; and Franks Angilbert, Angilram, Einhard and Waldo of Reichenau.
Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic.
His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn—practising the formation of letters in his bed during his free time on books and wax tablets he hid under his pillow—"his effort came too late in life and achieved little success", and his ability to read—which Einhard is silent about, and which no contemporary source supports—has also been called into question.
In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it. He certainly never visited Jerusalem in person.
Church reforms
Unlike his father, Pippin, and his uncle, Carloman, Charlemagne expanded the Church's reform programme. The deepening of the spiritual life was later to be seen as central to public policy and royal governance. His reform focused on strengthening the church's power structure, improving the clergy's skill and moral quality, standardising liturgical practices, improvements on the basic tenets of the faith and the rooting out of paganism. His authority extended over church and state. He could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he had developed support from clergy who approved his desire to deepen the piety and morals of his subjects.
In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son (ex Patre Filioque) and sanctioned inclusion in the Nicene Creed of the phrase Filioque (and the Son). For this Charlemagne sought the approval of Pope Leo III. The Pope, while affirming the doctrine and approving its use in teaching, opposed its inclusion in the text of the Creed as adopted in the 381 First Council of Constantinople. This spoke of the procession of the Holy Spirit from the Father, without adding phrases such as "and the Son", "through the Son", or "alone". Stressing his opposition, the Pope had the original text inscribed in Greek and Latin on two heavy shields that were displayed in Saint Peter's Basilica.
Writing reforms
During Charles' reign, the Roman half uncial script and its cursive version, which had given rise to various continental minuscule scripts, were combined with features from the insular scripts in use in Irish and English monasteries. Carolingian minuscule was created partly under the patronage of Charlemagne. Alcuin, who ran the palace school and scriptorium at Aachen, was probably a chief influence.
The revolutionary character of the Carolingian reform, however, can be overemphasised; efforts at taming Merovingian and Germanic influence had been underway before Alcuin arrived at Aachen. The new minuscule was disseminated first from Aachen and later from the influential scriptorium at Tours, where Alcuin retired as an abbot.
Political reforms
Charlemagne engaged in many reforms of Frankish governance while continuing many traditional practices, such as the division of the kingdom among sons.
Divisio regnorum
In 806, Charlemagne first made provision for the traditional division of the empire on his death. For Charles the Younger he designated Austrasia and Neustria, Saxony, Burgundy and Thuringia. To Pippin, he gave Italy, Bavaria, and Swabia. Louis received Aquitaine, the Spanish March and Provence. The imperial title was not mentioned, which led to the suggestion that, at that particular time, Charlemagne regarded the title as an honorary achievement that held no hereditary significance.
Pippin died in 810 and Charles in 811. Charlemagne then reconsidered the matter, and in 813, crowned his youngest son, Louis, co-emperor and co-King of the Franks, granting him a half-share of the empire and the rest upon Charlemagne's own death. The only part of the Empire that Louis was not promised was Italy, which Charlemagne specifically bestowed upon Pippin's illegitimate son Bernard.
Appearance
Manner
Einhard tells in his twenty-fourth chapter: Charlemagne threw grand banquets and feasts for special occasions such as religious holidays and four of his weddings. When he was not working, he loved Christian books, horseback riding, swimming, bathing in natural hot springs with his friends and family, and hunting. Franks were well known for horsemanship and hunting skills. Charles was a light sleeper and would stay in his bed chambers for entire days at a time due to restless nights. During these days, he would not get out of bed when a quarrel occurred in his kingdom, instead summoning all parties to the dispute into his bedroom to be given orders. Einhard tells again in the twenty-fourth chapter: "In summer after the midday meal, he would eat some fruit, drain a single cup, put off his clothes and shoes, just as he did for the night, and rest for two or three hours. He was in the habit of awaking and rising from bed four or five times during the night."
Language
Charlemagne probably spoke a Rhenish Franconian dialect.
He also spoke Latin and had at least some understanding of Greek, according to Einhard (Grecam vero melius intellegere quam pronuntiare poterat, "he could understand Greek better than he could speak it").
The largely fictional account of Charlemagne's Iberian campaigns by Pseudo-Turpin, written some three centuries after his death, gave rise to the legend that the king also spoke Arabic.
Physical appearance
Charlemagne's personal appearance is known from a good description by Einhard after his death in the biography Vita Karoli Magni. Einhard states:
The physical portrait provided by Einhard is confirmed by contemporary depictions such as coins and his bronze statuette kept in the Louvre. In 1861, Charlemagne's tomb was opened by scientists who reconstructed his skeleton and estimated it to be measured . A 2010 estimate of his height from an X-ray and CT scan of his tibia was . This puts him in the 99th percentile of height for his period, given that average male height of his time was . The width of the bone suggested he was gracile in body build.
Dress
Charlemagne wore the traditional costume of the Frankish people, described by Einhard thus:
He wore a blue cloak and always carried a sword, typically with a golden or silver hilt. He wore intricately jeweled swords to banquets or ambassadorial receptions. Nevertheless:
On great feast days, he wore embroidery and jewels on his clothing and shoes. He had a golden buckle for his cloak on such occasions and would appear with his great diadem, but he despised such apparel according to Einhard, and usually dressed like the common people.
Homes
Charlemagne had residences across his kingdom, including numerous private estates that were governed in accordance with the Capitulare de villis. A 9th-century document detailing the inventory of an estate at Asnapium listed amounts of livestock, plants and vegetables and kitchenware including cauldrons, drinking cups, brass kettles and firewood. The manor contained seventeen houses built inside the courtyard for nobles and family members and was separated from its supporting villas.
Beatification
Charlemagne was revered as a saint in the Holy Roman Empire and some other locations after the twelfth century. The Apostolic See did not recognise his invalid canonisation by Antipope Paschal III, done to gain the favour of Frederick Barbarossa in 1165. The Apostolic See annulled all of Paschal's ordinances at the Third Lateran Council in 1179. He is not enumerated among the 28 saints named "Charles" in the Roman Martyrology. His beatification has been acknowledged as cultus confirmed and is celebrated on 28 January.
Cultural impact
Middle Ages
The author of the Visio Karoli Magni written around 865 uses facts gathered apparently from Einhard and his own observations on the decline of Charlemagne's family after the civil wars of 840–843 as the basis for a visionary tale of Charles' meeting with a prophetic spectre in a dream.
Charlemagne was a model knight as one of the Nine Worthies who enjoyed an important legacy in European culture. One of the great medieval literary cycles, the Charlemagne cycle or the Matter of France, centres on his deeds—the Emperor with the Flowing Beard of Roland fame—and his historical commander of the border with Brittany, Roland, and the 12 paladins. These are analogous to, and inspired the myth of, the Knights of the Round Table of King Arthur's court. Their tales constitute the first chansons de geste.
In the 12th century, Geoffrey of Monmouth based his stories of Arthur largely on stories of Charlemagne. During the Hundred Years' War in the 14th century, there was considerable cultural conflict in England: the Norman rulers were aware of their French roots and identified with Charlemagne, while Anglo-Saxon natives felt more affinity for Arthur, whose own legends were relatively primitive. Therefore, storytellers in England adapted legends of Charlemagne and his 12 Peers to the Arthurian tales.
In the Divine Comedy, the spirit of Charlemagne appears to Dante in the Heaven of Mars, among the other "warriors of the faith".
19th century
Charlemagne's capitularies were quoted by Pope Benedict XIV in his apostolic constitution 'Providas' against freemasonry: "For in no way are we able to understand how they can be faithful to us, who have shown themselves unfaithful to God and disobedient to their Priests".
Charlemagne appears in Adelchi, the second tragedy by Italian writer Alessandro Manzoni, first published in 1822.
In 1867, an equestrian statue of Charlemagne was made by Louis Jehotte and was inaugurated in 1868 on the Boulevard d'Avroy in Liège. In the niches of the neo-roman pedestal are six statues of Charlemagne's ancestors (Sainte Begge, Pépin de Herstal, Charles Martel, Bertrude, Pépin de Landen and Pépin le Bref).
The North Wall Frieze in the courtroom of the Supreme Court of the United States depicts Charlemagne as a legal reformer.
20th century
The city of Aachen has, since 1949, awarded an international prize (called the Karlspreis der Stadt Aachen) in honour of Charlemagne. It is awarded annually to "personages of merit who have promoted the idea of Western unity by their political, economic and literary endeavours." Winners of the prize include Richard von Coudenhove-Kalergi, the founder of the pan-European movement, Alcide De Gasperi, and Winston Churchill.
In its national anthem, "El Gran Carlemany", the microstate of Andorra credits Charlemagne with its independence.
In 1964, young French singer France Gall released the hit song "Sacré Charlemagne" in which the lyrics blame the great king for imposing the burden of compulsory education on French children.
Charlemagne is quoted by Henry Jones, Sr. in Indiana Jones and the Last Crusade. After using his umbrella to induce a flock of seagulls to smash through the glass cockpit of a pursuing German fighter plane, Henry Jones remarks, "I suddenly remembered my Charlemagne: 'Let my armies be the rocks and the trees and the birds in the sky.'" Despite the quote's popularity since the movie, there is no evidence that Charlemagne actually said this.
21st century
A 2010 episode of QI discussed calculations by Mark Humphrys showing that all modern Europeans are highly likely to share Charlemagne as a common ancestor (see most recent common ancestor).
The Economist featured a weekly column entitled "Charlemagne", focusing generally on European affairs and, more usually and specifically, on the European Union and its politics.
Actor and singer Christopher Lee's symphonic metal concept album Charlemagne: By the Sword and the Cross and its heavy metal follow-up Charlemagne: The Omens of Death feature the events of Charlemagne's life.
In April 2014, on the occasion of the 1200th anniversary of Charlemagne's death, the public art installation Mein Karl by Ottmar Hörl was set up on the Katschhof square between the city hall and Aachen Cathedral, displaying 500 Charlemagne statues.
Charlemagne features as a playable character in the 2014 Charlemagne expansion for the grand strategy video game Crusader Kings 2.
Charlemagne is a playable character in the Mobile/PC Game Rise of Kingdoms.
In the 2018 video game Fate/Extella Link, Charlemagne appears as a Heroic Spirit separated into two Saint Graphs: the adventurous hero Charlemagne, who embodies the fantasy aspect as leader of the Twelve Paladins, and the villain Karl de Große, who embodies the historical aspect as Holy Roman Emperor.
In July 2022, Charlemagne featured as a character in an episode of The Family Histories Podcast, and it references his role as an ancestor of all modern Europeans. He is portrayed here in later life, and is speaking Latin, which is translated by a device. He is returned to 9th Century Aquitaine by the end of the episode after a DNA sample has been extracted.
Notes
References
Citations
Bibliography
Charlemagne, from Encyclopædia Britannica, full-article, latest edition.
External links
The Making of Charlemagne's Europe (freely available database of prosopographical and socio-economic data from legal documents dating to Charlemagne's reign, produced by King's College London)
The Sword of Charlemagne (myArmoury.com article)
Charter given by Charlemagne for St. Emmeram's Abbey showing the Emperor's seal, 22.2.794 . Taken from the collections of the Lichtbildarchiv älterer Originalurkunden at Marburg University
An interactive map of Charlemagne's travels
5320 | https://en.wikipedia.org/wiki/Carbon%20nanotube | Carbon nanotube

A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometer range (nanoscale). They are one of the allotropes of carbon.
Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometers, about 100,000 times smaller than the width of a human hair. They can be idealized as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder.
Multi-walled carbon nanotubes (MWCNTs) consist of single-wall carbon nanotubes arranged in a nested, tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT.
Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science.
The predicted properties for SWCNTs were tantalizing, but a path to synthesizing them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs.
Structure of SWNTs
Basic details
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.
In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other.
The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.
Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with same class as A1, the vector from A1 to A2 can be written as a linear combination w = n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.
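As a minimal illustration of this construction (a sketch with illustrative helper names, assuming a graphene lattice constant of about 246 pm for the spacing between same-class atoms), the vector w = n u + m v and the special zigzag and armchair cases can be computed directly:

import math

A = 246.0  # approximate graphene lattice constant in picometres (spacing of same-class atoms)

def chiral_vector(n: int, m: int):
    """Return w = n*u + m*v in Cartesian picometres, taking u along x and v at 60 degrees to it."""
    u = (A, 0.0)
    v = (A * math.cos(math.radians(60)), A * math.sin(math.radians(60)))
    return (n * u[0] + m * v[0], n * u[1] + m * v[1])

def configuration(n: int, m: int) -> str:
    """Name the special cases, assuming a pair with n > 0 and m >= 0 (the convention adopted below)."""
    if m == 0:
        return "zigzag"    # pairs of the form (k, 0)
    if n == m:
        return "armchair"  # pairs of the form (k, k)
    return "chiral"

# The length of w is the tube circumference discussed below; e.g. for a (10, 10) armchair tube:
wx, wy = chiral_vector(10, 10)
print(configuration(10, 10), round(math.hypot(wx, wy), 1), "pm")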
Types
The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube), and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.
Chirality and mirror symmetry
A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube.
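A short sketch along these lines (illustrative helper functions, using the 60-degree rotation rule given above and the angle formula given in the next subsection) reduces an arbitrary index pair to its canonical type and reads off chirality, the enantiomer, and the chiral angle:

import math

def canonical_type(n: int, m: int):
    """Reduce an index pair to the canonical form n > 0, m >= 0 by repeatedly applying
    the 60-degree rotation (n, m) -> (-m, n + m) described above."""
    if n == 0 and m == 0:
        raise ValueError("(0, 0) does not describe a tube")
    for _ in range(6):
        if n > 0 and m >= 0:
            return n, m
        n, m = -m, n + m
    raise AssertionError("every nonzero pair reduces within six rotations")

def is_chiral(n: int, m: int) -> bool:
    n, m = canonical_type(n, m)
    return m > 0 and m != n

def enantiomer(n: int, m: int):
    """Mirror image: type (n, m) maps to type (m, n)."""
    n, m = canonical_type(n, m)
    return canonical_type(m, n)

def chiral_angle_deg(n: int, m: int) -> float:
    """Chiral angle in degrees, from 0 (zigzag) to 30 (armchair), treating enantiomers as equal."""
    n, m = canonical_type(n, m)
    if m > n:
        n, m = m, n  # fold onto 0 <= m <= n
    return math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))

# The six equivalent pairs listed above all reduce to the same type, (1, 2):
print({canonical_type(*p) for p in [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]})
print(is_chiral(6, 5), enantiomer(6, 5), round(chiral_angle_deg(6, 5), 1))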
Circumference and diameter
From n and m one can also compute the circumference c, which is the length of the vector w, which turns out to be:

c = 246 √(n² + nm + m²)

in picometres. The diameter of the tube is then c/π, that is

d ≈ 78.3 √(n² + nm + m²)

also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.)
The tilt angle α between u and w and the circumference c are related to the type indices n and m by:

α = arg(2n + m, m √3),  c = 246 √(n² + nm + m²)

where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas:

n = (c / 246) (cos α − sin α / √3),  m = (c / 246) (2 sin α / √3)

which must evaluate to integers.
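A brief sketch of these relations (illustrative function names; the constant 246 pm is the approximate lattice spacing used above):

import math

A = 246.0  # approximate graphene lattice constant in picometres

def circumference_pm(n: int, m: int) -> float:
    return A * math.sqrt(n * n + n * m + m * m)

def diameter_pm(n: int, m: int) -> float:
    return circumference_pm(n, m) / math.pi

def tilt_angle_deg(n: int, m: int) -> float:
    """Angle alpha between u and w, in the range 0 <= alpha < 60 degrees."""
    return math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))

def type_from_c_alpha(c_pm: float, alpha_deg: float):
    """Invert the relations above; the results should be (near-)integers for a physical tube."""
    a = math.radians(alpha_deg)
    n = (c_pm / A) * (math.cos(a) - math.sin(a) / math.sqrt(3))
    m = (c_pm / A) * (2.0 * math.sin(a) / math.sqrt(3))
    return round(n), round(m)

# Round trip for a (6, 5) tube: diameter of roughly 0.75 nm, and (c, alpha) maps back to (6, 5)
c, alpha = circumference_pm(6, 5), tilt_angle_deg(6, 5)
print(round(diameter_pm(6, 5)), "pm", type_from_c_alpha(c, alpha))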
Physical limits
Narrowest examples
If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.
The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. The carbon nanotube type was assigned by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.
The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.
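These reported diameters are consistent with the formula given earlier: d ≈ 78.3 √(n² + nm + m²) pm gives roughly 271 pm for (2,2), 436 pm for (5,1), 414 pm for (4,2) and 407 pm for (3,3), with the remaining discrepancies reflecting the bond strain in such narrow tubes.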
Length
The observation of the longest carbon nanotubes grown so far, around 550 mm (0.55 m) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.
Density
The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with the co-catalysts cobalt and molybdenum at a lower-than-typical temperature of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm−3. The material showed ohmic conductivity (lowest resistance ~22 kΩ).
Variants
There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization uses single-wall or multi-wall in its documents.
Multi-walled
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
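As a rough illustration (a back-of-the-envelope sketch, not a result from the sources above), the ~3.4 Å interlayer spacing allows a quick estimate of how many shells a Russian Doll MWNT of a given size can hold:

INTERLAYER_NM = 0.34  # approximate shell-to-shell spacing, as stated above

def estimated_walls(outer_diameter_nm: float, inner_diameter_nm: float) -> int:
    """Each additional shell adds roughly 2 * 0.34 nm to the outer diameter."""
    if outer_diameter_nm < inner_diameter_nm:
        raise ValueError("outer diameter must be at least the inner diameter")
    return 1 + int((outer_diameter_nm - inner_diameter_nm) / (2 * INTERLAYER_NM))

# e.g. a tube with a 20 nm outer and 5 nm inner diameter holds roughly 23 shells
print(estimated_walls(20.0, 5.0))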
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solid solutions in methane and hydrogen.
The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that occurs during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
Junctions and crosslinking
Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.
Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.
Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.
Other morphologies
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.
A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.
Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.
Properties
Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.
Mechanical
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp2 bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 gigapascals (GPa). (For illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force on a cable with a cross-section of 1 mm2.) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm3, their specific strength of up to 48,000 kN·m·kg−1 is the best of known materials, compared to high-carbon steel's 154 kN·m·kg−1.
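The specific-strength figures quoted above follow directly from tensile strength divided by density; a small worked check (the steel inputs of roughly 1.2 GPa and 7.8 g/cm³ are illustrative assumptions chosen to be consistent with the quoted 154 kN·m·kg−1):

def specific_strength_kNm_per_kg(tensile_strength_gpa: float, density_g_cm3: float) -> float:
    """Specific strength in kN·m/kg = (strength in Pa) / (density in kg/m^3) / 1000."""
    strength_pa = tensile_strength_gpa * 1e9   # GPa -> Pa (N/m^2)
    density_kg_m3 = density_g_cm3 * 1000.0     # g/cm^3 -> kg/m^3
    return strength_pa / density_kg_m3 / 1000.0

print(round(specific_strength_kNm_per_kg(63, 1.3)))   # CNT: roughly 48,500 kN·m/kg
print(round(specific_strength_kNm_per_kg(1.2, 7.8)))  # high-carbon steel (assumed inputs): ~154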
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure the radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. A Young's modulus on the order of several GPa showed that CNTs are in fact very soft in the radial direction.
It was reported in 2020 that, for CNT-filled polymer nanocomposites, loadings of 4 wt% and 6 wt% are the optimal concentrations, as they provide a good balance between mechanical properties and the retention of those properties under UV exposure for the offshore umbilical sheathing layer.
Electrical
Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor.
Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
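A direct encoding of this rule (ignoring the small-diameter curvature exceptions discussed below) might look like:

def electronic_character(n: int, m: int) -> str:
    """Classify an (n, m) single-walled tube by the band-gap rule stated above."""
    if n == m:
        return "metallic"                   # armchair tubes
    if (n - m) % 3 == 0:
        return "quasi-metallic (tiny gap)"  # e.g. (9, 0) or (8, 2)
    return "semiconducting"                 # e.g. (6, 4) or (9, 1)

for t in [(10, 10), (9, 0), (8, 2), (6, 4), (9, 1)]:
    print(t, electronic_character(*t))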
Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 109 A/cm2, which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.
Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e2/h is the conductance of a single ballistic quantum channel.
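For a sense of scale, this limit follows from the physical constants (a small worked computation, not a measured value):

# Conductance quantum G0 = 2e^2/h and the ideal single-tube limit of 2*G0 quoted above.
E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s

g0 = 2 * E**2 / H     # one ballistic channel, ~7.75e-5 S
g_max = 2 * g0        # two channels, ~1.55e-4 S
print(round(g0 * 1e6, 1), "µS per channel;", round(1 / (g_max * 1e3), 2), "kΩ minimum resistance")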
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.
Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.
In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electric current. When the structures were immersed in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," describing the work as a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.
Optical
Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties.
Carbon nanotube optical properties have been explored for use in applications such as light-emitting diodes (LEDs) and photo-detectors; devices based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
Thermal
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m−1·K−1 so far. Networks composed of nanotubes demonstrate different values of thermal conductivity, from the level of thermal insulation with the thermal conductivity of 0.1 W·m−1·K−1 to such high values. That is dependent on the amount of contribution to the thermal resistance of the system caused by the presence of impurities, misalignments and other factors. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
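Taken together, these figures imply an axial conductivity roughly nine times that of copper (3500 / 385 ≈ 9) and an axial-to-radial anisotropy of about three orders of magnitude (3500 / 1.52 ≈ 2300) for an individual SWNT.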
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Synthesis
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge and laser ablation are batch processes, CVD can be run either batchwise or continuously, and HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT-fiber factories operating around the world. One problem of CVD processes is the high variability in the nanotubes' characteristics. In the HiPCO process, advances in catalysis and continuous growth are making CNTs more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts provide a nucleation site for the nanotubes to grow, while cheaper iron-based catalysts such as ferrocene can be used for the CVD process.
Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties.
When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and drive down the diameter of the nanotubes grown. The amount of time the metal island can sit at the growth temperature is limited as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNT/mm2) while increasing the catalyst diameter.
The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.
Functionalization
CNTs are known to have weak dispersibility in many solvents such as water as a consequence of strong intermolecular π–π interactions. This hinders the processability of CNTs in industrial applications. In order to tackle the issue, various techniques have been developed to modify the surface of CNTs in order to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications.
Chemical routes such as covalent functionalization have been studied extensively; they involve the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) to introduce carboxylic groups onto the surface of the CNTs, either as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.
Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve their solubility compared with common acid treatments, which attach small molecules such as hydroxyl groups onto the CNT surface. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents, even at a low degree of functionalization. Recently, an environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is considered green because it does not use the toxic and hazardous acids typically employed in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water, producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluid).
Modeling
Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The appropriate size of the micromechanics model depends strongly on the mechanical property being studied. The concept of the representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model needed to replicate the actual behavior of a CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While ideal models are computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to include variability such as waviness, orientation and agglomeration of multiwall or single-wall carbon nanotubes.
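For orientation, the simplest closed-form micromechanics estimate is sketched below using the Halpin–Tsai equations with assumed, illustrative values for the matrix and CNT stiffness, volume fraction and aspect ratio; it is far cruder than the RVE-based simulations discussed above and ignores waviness, orientation and agglomeration.

```python
def halpin_tsai_modulus(E_matrix: float, E_cnt: float,
                        volume_fraction: float, aspect_ratio: float) -> float:
    """Halpin-Tsai estimate of the longitudinal modulus (GPa) of an
    aligned short-fiber composite, used here only as an idealized
    stand-in for a CNT-reinforced nanocomposite."""
    zeta = 2.0 * aspect_ratio                  # shape parameter for aligned fibers
    ratio = E_cnt / E_matrix
    eta = (ratio - 1.0) / (ratio + zeta)
    return E_matrix * (1.0 + zeta * eta * volume_fraction) / (1.0 - eta * volume_fraction)

# Illustrative (assumed) numbers: 3 GPa epoxy, 1000 GPa CNT,
# 1 % volume fraction, aspect ratio 1000
print(f"{halpin_tsai_modulus(3.0, 1000.0, 0.01, 1000.0):.1f} GPa")
```

Such closed-form estimates ignore imperfect load transfer, which is exactly why the RVE-based computer models described above are used in practice.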
Metrology
There are many metrology standards and reference materials available for carbon nanotubes.
For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.
NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectrometry, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material, SWCNT-1, for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectrometry. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.
Chemical modification
Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their hydrophobic nature, carbon nanotubes tend to agglomerate, hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.
The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes.
In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so called Fluocar materials) with grafted (halo)fluoroalkyl functionality.
Applications
Carbon nanotubes are currently used in several industrial and consumer applications, including battery components, polymer composites (to improve the mechanical, thermal and electrical properties of the bulk product), and highly absorptive black paints. Many other applications are under development, including field-effect transistors for electronics, high-strength fabrics, biosensors for biomedical and agricultural applications, and many others.
Current industrial applications
Easton-Bell Sports, Inc. has partnered with Zyvex Performance Materials, using CNT technology in a number of its bicycle components, including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars.
Amroy Europe Oy manufactures Hybtonite carbon nano-epoxy resins where carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards.
Surrey NanoSystems synthesizes carbon nanotubes to create Vantablack, an ultra-absorptive black paint.
"Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures.
Carbon nanotubes are also used as tips for atomic force microscope probes.
Applications under development
Applications of nanotubes in development in academia and industry include:
Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors.
Using carbon nanotubes as a scaffold for diverse microfabrication techniques.
Energy dissipation in self-organized nanostructures under influence of an electric field.
Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases.
Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012.
The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures. This technology will greatly reduce the risk of an in-flight failure caused by structural degradation of aircraft.
Zyvex Technologies has also built a 54' maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles.
IMEC is using carbon nanotubes for pellicles in semiconductor lithography.
In tissue engineering, carbon nanotubes have been used as scaffolding for bone growth.
Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or damascus steel.
IBM expected carbon nanotube transistors to be used in integrated circuits by 2020.
Potential/Future
The strength and flexibility of carbon nanotubes makes them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength measured for an individual multi-walled carbon nanotube is 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.
CNTs are potential candidates for future via and wire materials in nanoscale VLSI circuits. Isolated (single- and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm² without electromigration damage, eliminating the electromigration reliability concerns that plague today's Cu interconnects.
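To put the quoted figure in perspective, the arithmetic below converts 1000 MA/cm² into the current a single tube could carry, treating the geometric cross-section of an assumed 1 nm diameter tube as the conducting area (a simplification, since conduction is actually confined to the tube walls).

```python
import math

current_density = 1000e6 * 1e4      # 1000 MA/cm^2 expressed in A/m^2 (= 1e13 A/m^2)
diameter_m = 1e-9                   # assumed 1 nm diameter single-wall tube
cross_section_m2 = math.pi * (diameter_m / 2) ** 2

max_current_uA = current_density * cross_section_m2 * 1e6
print(f"{max_current_uA:.1f} uA per tube")  # roughly 8 uA under these assumptions
```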
Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.
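The intramolecular NOT gate described above can be summarized with an idealized complementary-switch model: the oxygen-exposed (p-type) half conducts when the input is low and the protected (n-type) half conducts when the input is high, so the output always takes the opposite logic level. The sketch below is a purely behavioural illustration with an assumed supply voltage and switching threshold, not a device simulation.

```python
def cnt_inverter(v_in: float, vdd: float = 1.0, threshold: float = 0.5) -> float:
    """Idealized behavioural model of the intramolecular SWNT NOT gate:
    the p-type segment pulls the output up when the input is low,
    the n-type segment pulls it down when the input is high."""
    p_fet_on = v_in < threshold    # p-type half conducts for a low input
    return vdd if p_fet_on else 0.0

for v in (0.0, 0.2, 0.8, 1.0):
    print(v, "->", cnt_inverter(v))  # low inputs give high outputs and vice versa
```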
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of the CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than that of metallic conductors at 300 K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.
Safety and health
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increases in CNT length and diameter are correlated with increased toxicity and pathological alterations in the lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may instead be due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects had not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and a recommended exposure limit for carbon nanotubes and fibers. NIOSH has determined a non-regulatory recommended exposure limit (REL) of 1 μg/m³ for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNTs caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.
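For illustration, the 8-hour TWA in the NIOSH recommendation is simply the duration-weighted mean concentration over the shift; the snippet below computes it for hypothetical sampling data and compares the result with the 1 μg/m³ REL.

```python
NIOSH_REL_UG_M3 = 1.0  # recommended exposure limit, 8-hour TWA (elemental carbon)

def eight_hour_twa(samples):
    """samples: list of (concentration in ug/m^3, duration in hours).
    Unsampled time up to 8 h is treated as zero exposure."""
    exposure = sum(c * h for c, h in samples)
    return exposure / 8.0

# Hypothetical shift: 2 h at 1.5 ug/m^3, 4 h at 0.4 ug/m^3, 2 h at 0.1 ug/m^3
twa = eight_hour_twa([(1.5, 2.0), (0.4, 4.0), (0.1, 2.0)])
print(f"TWA = {twa:.2f} ug/m^3 -> "
      f"{'exceeds' if twa > NIOSH_REL_UG_M3 else 'below'} the REL")
```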
As of October 2016, single wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single wall carbon nanotubes manufactured by OCSiAl, which submitted the application.
History
The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.
In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50-nanometre-diameter tubes made of carbon in the Journal of Physical Chemistry of Russia. This discovery went largely unnoticed, as the article was published in Russian and Western scientists' access to the Soviet press was limited during the Cold War. Monthioux and Kuznetsov highlighted this earlier work in their Carbon editorial.
In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled-up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs of today are strongly related to the VPGCF developed by Endo; the process is in fact called the "Endo-process", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 102 times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."
Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods, and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high-temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known for producing the famed buckminsterfullerene, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's 1991 report is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
In 2020, during archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists.
See also
Buckypaper
Carbide-derived carbon
Carbon nanocone
Carbon nanofibers
Carbon nanoscrolls
Carbon nanotube computer
Carbon nanotubes in photovoltaics
Colossal carbon tube
Diamond nanothread
Filamentous carbon
Molecular modelling
Nanoflower
Ninithi (nanotube modelling software)
Optical properties of carbon nanotubes
Organic semiconductor
References
This article incorporates public domain text from National Institute of Environmental Health Sciences (NIEHS) as quoted.
External links
Nanocarbon: From Graphene to Buckyballs. Interactive 3D models of cyclohexane, benzene, graphene, graphite, chiral & non-chiral nanotubes, and C60 Buckyballs - WeCanFigureThisOut.org.
The Nanotube site. Last updated 2013.04.12
EU Marie Curie Network CARBIO: Multifunctional carbon nanotubes for biomedical applications
C60 and Carbon Nanotubes a short video explaining how nanotubes can be made from modified graphite sheets and the three different types of nanotubes that are formed
Learning module for Bandstructure of Carbon Nanotubes and Nanoribbons
Selection of free-download articles on carbon nanotubes
WOLFRAM Demonstrations Project: Electronic Band Structure of a Single-Walled Carbon Nanotube by the Zone-Folding Method
WOLFRAM Demonstrations Project: Electronic Structure of a Single-Walled Carbon Nanotube in Tight-Binding Wannier Representation
Electrospinning
Allotropes of carbon
Emerging technologies
Transparent electrodes
Refractory materials
Space elevator
Discovery and invention controversies
Nanomaterials

Czech Republic

The Czech Republic, also known as Czechia, is a landlocked country in Central Europe. Historically known as Bohemia, it is bordered by Austria to the south, Germany to the west, Poland to the northeast, and Slovakia to the southeast. The Czech Republic has a hilly landscape and a mostly temperate continental and oceanic climate. The capital and largest city is Prague; other major cities and urban areas include Brno, Ostrava, Plzeň and Liberec.
The Duchy of Bohemia was founded in the late 9th century under Great Moravia. It was formally recognized as an Imperial State of the Holy Roman Empire in 1002 and became a kingdom in 1198. Following the Battle of Mohács in 1526, all of the Crown lands of Bohemia were gradually integrated into the Habsburg monarchy. Nearly a hundred years later, the Protestant Bohemian Revolt led to the Thirty Years' War. After the Battle of White Mountain, the Habsburgs consolidated their rule. With the dissolution of the Holy Roman Empire in 1806, the Crown lands became part of the Austrian Empire.
In the 19th century, the Czech lands became more industrialized, and in 1918 most of it became part of the First Czechoslovak Republic following the collapse of Austria-Hungary after World War I. Czechoslovakia was the only country in Central and Eastern Europe to remain a parliamentary democracy during the entirety of the interwar period. After the Munich Agreement in 1938, Nazi Germany systematically took control over the Czech lands. Czechoslovakia was restored in 1945 and three years later became an Eastern Bloc communist state following a coup d'état in 1948. Attempts to liberalize the government and economy were suppressed by a Soviet-led invasion of the country during the Prague Spring in 1968. In November 1989, the Velvet Revolution ended communist rule in the country and restored democracy. On 31 December 1992, Czechoslovakia was peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia.
The Czech Republic is a unitary parliamentary republic and developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care and free-tuition university education. It ranks 32nd in the Human Development Index. The Czech Republic is a member of the United Nations, NATO, the European Union, the OECD, the OSCE, the Council of Europe and the Visegrád Group.
Name
The traditional English name "Bohemia" derives from Latin: Boiohaemum, which means "home of the Boii" (a Gallic tribe). The current English name ultimately comes from the Czech word . The name comes from the Slavic tribe () and, according to legend, their leader Čech, who brought them to Bohemia, to settle on Říp Mountain. The etymology of the word can be traced back to the Proto-Slavic root , meaning "member of the people; kinsman", thus making it cognate to the Czech word (a person).
The country has been traditionally divided into three lands, namely Bohemia () in the west, Moravia () in the east, and Czech Silesia (; the smaller, south-eastern part of historical Silesia, most of which is located within modern Poland) in the northeast. Known as the lands of the Bohemian Crown since the 14th century, a number of other names for the country have been used, including Czech/Bohemian lands, Bohemian Crown, Czechia and the lands of the Crown of Saint Wenceslaus. When the country regained its independence after the dissolution of the Austro-Hungarian empire in 1918, the new name of Czechoslovakia was coined to reflect the union of the Czech and Slovak nations within one country.
After Czechoslovakia dissolved on the last day of 1992, was adopted as the Czech short name for the new state and the Ministry of Foreign Affairs of the Czech Republic recommended Czechia for the English-language equivalent. This form was not widely adopted at the time, leading to the long name Czech Republic being used in English in nearly all circumstances. The Czech government directed use of Czechia as the official English short name in 2016. The short name has been listed by the United Nations and is used by other organizations such as the European Union, NATO, the CIA, Google Maps, and the European Broadcasting Union. In 2022, the American AP Stylebook stated in its entry on the country that "Czechia, the Czech Republic. Both are acceptable. The shorter name Czechia is preferred by the Czech government. If using Czechia, clarify in the story that the country is more widely known in English as the Czech Republic."
History
Prehistory
Archaeologists have found evidence of prehistoric human settlements in the area, dating back to the Paleolithic era.
In the classical era, as a result of the 3rd century BC Celtic migrations, Bohemia became associated with the Boii. The Boii founded an oppidum near the site of modern Prague. Later in the 1st century, the Germanic tribes of the Marcomanni and Quadi settled there.
Slavs from the Black Sea–Carpathian region settled in the area (their migration was pushed by an invasion of peoples from Siberia and Eastern Europe into their area: Huns, Avars, Bulgars and Magyars). By the sixth century, the Slavs had moved westwards into Bohemia, Moravia, and some of present-day Austria and Germany.
During the 7th century, the Frankish merchant Samo, supporting the Slavs fighting against the nearby settled Avars, became the ruler of the first documented Slavic state in Central Europe, Samo's Empire. The principality of Great Moravia, controlled by the Moymir dynasty, arose in the 8th century. It reached its zenith in the 9th century (during the reign of Svatopluk I of Moravia), holding off the influence of the Franks. Great Moravia was Christianized, with a role being played by the Byzantine mission of Cyril and Methodius. They codified the Old Church Slavonic language, the first literary and liturgical language of the Slavs, and the Glagolitic script.
Bohemia
The Duchy of Bohemia emerged in the late 9th century when it was unified by the Přemyslid dynasty. Bohemia was from 1002 until 1806 an Imperial Estate of the Holy Roman Empire.
In 1212, Přemysl Ottokar I extracted the Golden Bull of Sicily from the emperor, confirming Ottokar and his descendants' royal status; the Duchy of Bohemia was raised to a kingdom. German immigrants settled in the Bohemian periphery in the 13th century. During the Mongol invasion of Europe, Mongol raids reached Moravia but were repelled at Olomouc.
After a series of dynastic wars, the House of Luxembourg gained the Bohemian throne.
Efforts to reform the church in Bohemia had already begun in the late 14th century. Jan Hus' followers broke with some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by Sigismund. During the next two centuries, 90% of the population in Bohemia and Moravia were considered Hussites. The pacifist thinker Petr Chelčický inspired the movement of the Moravian Brethren (by the middle of the 15th century), which completely separated from the Roman Catholic Church.
On 21 December 1421, Jan Žižka, a successful military commander and mercenary, led his group of forces in the Battle of Kutná Hora, resulting in a victory for the Hussites. He is honoured to this day as a national hero.
After 1526 Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then in 1627 the hereditary rulers of Bohemia. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court.
The Defenestration of Prague and subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. The nobility and the middle class Protestants had to either convert to Catholicism or leave the country.
The following era of 1620 to the late 18th century became known as the "Dark Age". During the Thirty Years' War, the population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as due to the war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism. The flowering of Baroque culture shows the ambiguity of this historical period.
Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the Great Plague of Vienna and an uprising of serfs.
There were peasant uprisings influenced by famine. Serfdom was abolished between 1781 and 1848. Several battles of the Napoleonic Wars took place on the current territory of the Czech Republic.
The end of the Holy Roman Empire in 1806 led to the degradation of the political status of Bohemia, which lost its position as an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. The Bohemian lands became part of the Austrian Empire. During the 18th and 19th centuries the Czech National Revival began its rise, with the purpose of reviving the Czech language, culture, and national identity. The Revolution of 1848 in Prague, striving for liberal reforms and autonomy of the Bohemian Crown within the Austrian Empire, was suppressed.
It seemed that some concessions would also be made to Bohemia, but in the end the Emperor Franz Joseph I effected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never-realized coronation of Franz Joseph as King of Bohemia led to disappointment among some Czech politicians. The Bohemian Crown lands became part of the so-called Cisleithania.
The Czech Social Democratic and progressive politicians started the fight for universal suffrage. The first elections under universal male suffrage were held in 1907.
Czechoslovakia
In 1918, during the collapse of the Habsburg monarchy at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created, with Tomáš Garrigue Masaryk in the lead. This new country incorporated the Bohemian Crown.
The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of the industry, which enabled it to compete with Western industrial states. In 1929 compared to 1913, the gross domestic product increased by 52% and industrial production by 41%. In 1938 Czechoslovakia held 10th place in world industrial production. Czechoslovakia was the only country in Central and Eastern Europe to remain a liberal democracy throughout the entire interwar period. Although the First Czechoslovak Republic was a unitary state, it provided certain rights to its minorities, the largest being Germans (23.6% in 1921), Hungarians (5.6%) and Ukrainians (3.5%).
Western Czechoslovakia was occupied by Nazi Germany, which placed most of the region into the Protectorate of Bohemia and Moravia. The Protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's Reichsprotektor. One Nazi concentration camp was located within the Czech territory at Terezín, north of Prague. The vast majority of the Protectorate's Jews were murdered in Nazi-run concentration camps. The Nazis called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people. There was Czechoslovak resistance to the Nazi occupation, as well as reprisals against the Czechoslovaks for their anti-Nazi resistance. The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. Most of Czechoslovakia's German-speakers were forcibly expelled from the country, first as a result of local acts of violence and then under the aegis of an "organized transfer" confirmed by the Soviet Union, the United States, and Great Britain at the Potsdam Conference.
In the 1946 elections, the Communist Party gained 38% of the votes and became the largest party in the Czechoslovak parliament, formed a coalition with other parties, and consolidated power. A coup d'état came in 1948 and a single-party government was formed. For the next 41 years, the Czechoslovak Communist state conformed to Eastern Bloc economic and political features. The Prague Spring political liberalization was stopped by the 1968 Warsaw Pact invasion of Czechoslovakia. Analysts believe that the invasion caused the communist movement to fracture, ultimately leading to the Revolutions of 1989.
Czech Republic
In November 1989, Czechoslovakia again became a liberal democracy through the Velvet Revolution. However, Slovak national aspirations strengthened (the Hyphen War) and on 31 December 1992, the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatizations with the intention of creating a market economy, a process they had begun in 1990, while Czechs and Slovaks still shared a common state. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development".
From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and from 1995, the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area.
Until 2017, either the centre-left Czech Social Democratic Party or the centre-right Civic Democratic Party led the governments of the Czech Republic. In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the Civic Democrats. In December 2017, Czech president Miloš Zeman appointed Andrej Babiš as the new prime minister.
In the 2021 elections, ANO 2011 was narrowly defeated and Petr Fiala became the new prime minister. He formed a government coalition of the alliance SPOLU (Civic Democratic Party, KDU-ČSL and TOP 09) and the alliance of Pirates and Mayors. In January 2023, retired general Petr Pavel won the presidential election, succeeding Miloš Zeman as Czech president. Following the 2022 Russian invasion of Ukraine, the country took in half a million Ukrainian refugees, the largest number per capita in the world.
Geography
The Czech Republic lies mostly between latitudes 48° and 51° N and longitudes 12° and 19° E.
Bohemia, to the west, consists of a basin drained by the Elbe () and the Vltava rivers, surrounded by mostly low mountains, such as the Krkonoše range of the Sudetes. The highest point in the country, Sněžka at , is located here. Moravia, the eastern part of the country, is also hilly. It is drained mainly by the Morava River, but it also contains the source of the Oder River ().
Water from the Czech Republic flows to three different seas: the North Sea, Baltic Sea, and Black Sea. The Czech Republic also leases the Moldauhafen, a lot in the middle of the Hamburg Docks, which was awarded to Czechoslovakia by Article 363 of the Treaty of Versailles, to allow the landlocked country a place where goods transported down river could be transferred to seagoing ships. The territory reverts to Germany in 2028.
Phytogeographically, the Czech Republic belongs to the Central European province of the Circumboreal Region, within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of the Czech Republic can be subdivided into four ecoregions: the Western European broadleaf forests, Central European mixed forests, Pannonian mixed forests, and Carpathian montane conifer forests.
There are four national parks in the Czech Republic. The oldest is Krkonoše National Park (Biosphere Reserve), and the others are Šumava National Park (Biosphere Reserve), Podyjí National Park, and Bohemian Switzerland.
The three historical lands of the Czech Republic (formerly some countries of the Bohemian Crown) correspond with the river basins of the Elbe and the Vltava basin for Bohemia, the Morava one for Moravia, and the Oder river basin for Czech Silesia (in terms of the Czech territory).
Climate
The Czech Republic has a temperate climate, situated in the transition zone between the oceanic and continental climate types, with warm summers and cold, cloudy and snowy winters. The temperature difference between summer and winter is due to the landlocked geographical position.
Temperatures vary depending on the elevation. In general, at higher altitudes, the temperatures decrease and precipitation increases. The wettest area in the Czech Republic is found around Bílý Potok in Jizera Mountains and the driest region is the Louny District to the northwest of Prague. Another factor is the distribution of the mountains.
At the highest peak of Sněžka (), the average temperature is , whereas in the lowlands of the South Moravian Region, the average temperature is as high as . The country's capital, Prague, has a similar average temperature, although this is influenced by urban factors.
The coldest month is usually January, followed by February and December. During these months, there is snow in the mountains and sometimes in the cities and lowlands. During March, April, and May, the temperature usually increases, especially during April, when the temperature and weather tends to vary during the day. Spring is also characterized by higher water levels in the rivers, due to melting snow with occasional flooding.
The warmest month of the year is July, followed by August and June. On average, summer temperatures are about higher than during winter. Summer is also characterized by rain and storms.
Autumn generally begins in September, which is still warm and dry. During October, temperatures usually fall below or and deciduous trees begin to shed their leaves. By the end of November, temperatures usually range around the freezing point.
The coldest temperature ever recorded was measured at Litvínovice near České Budějovice in 1929, and the hottest was measured at Dobřichovice in 2012.
Most rain falls during the summer. Sporadic rainfall occurs throughout the year (in Prague, the average number of days per month experiencing at least of rain varies from 12 in September and October to 16 in November), but concentrated rainfall (days with more than per day) is more frequent in the months of May to August (on average around two such days per month). Severe thunderstorms, producing damaging straight-line winds, hail, and occasional tornadoes, occur especially during the summer period.
Environment
As of 2020, the Czech Republic ranks as the 21st most environmentally conscious country in the world in Environmental Performance Index. It had a 2018 Forest Landscape Integrity Index mean score of 1.71/10, ranking it 160th globally out of 172 countries. The Czech Republic has four National Parks (Šumava National Park, Krkonoše National Park, České Švýcarsko National Park, Podyjí National Park) and 25 Protected Landscape Areas.
Government
The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (, 200 members) and the Senate (, 81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff.
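To illustrate proportional allocation with a 5% threshold, the sketch below applies the D'Hondt highest-averages method to hypothetical vote totals. D'Hondt is one common divisor method for such allocations; the formula actually used in Czech elections has additional tiers and details that are not modelled here.

```python
def dhondt(votes: dict, seats: int, threshold: float = 0.05) -> dict:
    """Allocate seats with the D'Hondt highest-averages method,
    excluding parties below the nationwide vote-share threshold."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    allocation = {p: 0 for p in eligible}
    for _ in range(seats):
        # the next seat goes to the party with the largest quotient v / (s + 1)
        winner = max(eligible, key=lambda p: eligible[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical vote counts for a 10-seat district; party D falls below 5%
print(dhondt({"A": 52_000, "B": 31_000, "C": 14_000, "D": 3_000}, seats=10))
```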
The president is a formal head of state with limited and specific powers, who appoints the prime minister, as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (2x Václav Havel, 2x Václav Klaus). Since 2013, the president has been elected directly. Some commentators have argued that, with the introduction of direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one. The Government's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and choose government ministers.
Office | Officeholder | Party | In office since
President | Petr Pavel | Independent | 9 March 2023
President of the Senate | Miloš Vystrčil | ODS | 19 February 2020
President of the Chamber of Deputies | Markéta Pekarová Adamová | TOP 09 | 10 November 2021
Prime Minister | Petr Fiala | ODS | 28 November 2021
Law
The Czech Republic is a unitary state, with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code is effective from 2010. A new Civil code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office.
Foreign relations
The Czech Republic has ranked as one of the safest or most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country.
The Czech passport is subject to relatively few visa restrictions. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals.
The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad.
The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrád Group, as well as with Germany, Israel, the United States and the European Union and its members. Since 2020, relations with Asian democratic states such as Taiwan have been strengthened. By contrast, the Czech Republic has long had poor relations with Russia, and since 2021 it has appeared on Russia's official list of enemy countries. The Czech Republic also has problematic relations with China.
Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba.
Famous Czech diplomats of the past included Jaroslav Lev of Rožmitál, Humprecht Jan Czernin, Count Philip Kinsky of Wchinitz and Tettau, Wenzel Anton, Prince of Kaunitz-Rietberg, Prince Karl Philipp Schwarzenberg, Alois Lexa von Aehrenthal, Ottokar Czernin, Edvard Beneš, Jan Masaryk, Jiří Hájek, Jiří Dienstbier, Michael Žantovský, Petr Kolář, Alexandr Vondra, Prince Karel Schwarzenberg and Petr Pavel.
Military
The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of the GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO.
Currently, as a member of NATO, the Czech military are participating in the Resolute Support and KFOR operations and have soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ).
The most famous Czech, and therefore Czechoslovak, soldiers and military leaders of the past were Ottokar II of Bohemia, John of Bohemia, Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel.
Human rights
Human rights in the Czech Republic are guaranteed by the Charter of Fundamental Rights and Freedoms and international treaties on human rights. Nevertheless, there were cases of human rights violations such as discrimination against Roma children, for which the European Commission asked the Czech Republic to provide an explanation, or the illegal sterilization of Roma women, for which the government apologized.
Prague is the seat of Radio Free Europe/Radio Liberty. Today, the station is based in Hagibor. At the beginning of the 1990s, Václav Havel personally invited the station to move to Czechoslovakia.
People of the same sex can enter into a "registered partnership" in the Czech Republic. Same-sex marriage is not legal under current Czech law.
The best-known Czech activists and supporters of human rights include Berta von Suttner, born in Prague, who won the Nobel Peace Prize for her pacifist struggle, philosopher and the first Czechoslovak president Tomáš Garrigue Masaryk, student Jan Palach, who set himself on fire in 1969 in protest against the Soviet occupation, Karel Schwarzenberg, who was chairman of the International Helsinki Committee for Human Rights between 1984 and 1990, Václav Havel, long-time dissident and later president, sociologist and dissident Jiřina Šiklová and Šimon Pánek, founder and director of the People in Need organization.
Administrative divisions
Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, the assembly and presidential powers are executed by the city council and the mayor.
The older seventy-six districts (okresy, singular okres) including three "statutory cities" (without Prague, which had special status) lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration.
The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and municipality at the same time.
Economy
The Czech Republic has a developed, high-income export-oriented social market economy based in services, manufacturing and innovation, that maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP rate that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted human development and 24th in World Bank Human Capital Index. It was described by The Guardian as "one of Europe's most flourishing economies".
The country's GDP per capita at purchasing power parity is $51,329 and $29,856 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced 4.5% GDP growth in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest of OECD members. The Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 31st in the 2023 Global Innovation Index (down from 24th in 2016), 29th in the Global Competitiveness Report, and 25th in the Global Enabling Trade Report.
The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both export and import is Germany and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. The country has been a member of the Schengen Area since 1 May 2004, having abolished border controls, completely opening its borders with all of its neighbors on 21 December 2007.
Industry
The largest companies by revenue in the Czech Republic were: automobile manufacturer Škoda Auto, utility company ČEZ Group, conglomerate Agrofert, energy trading company EPH, oil processing company Unipetrol, electronics manufacturer Foxconn CZ and steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters).
Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl records manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and five who produce automatic weapons.
In the food industry, Czech companies include Agrofert, Kofola and Hamé.
Energy
Production of Czech electricity exceeds consumption by about 10 TWh per year, with the excess being exported. Nuclear power presently provides about 30 percent of total power needs, and its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal); 30 percent by nuclear plants; and 4.6 percent came from renewable sources, including hydropower. The largest Czech power resource is the Temelín Nuclear Power Station, with another nuclear power plant at Dukovany.
The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is purchased from Norwegian companies and as liquefied gas LNG from the Netherlands and Belgium. In the past, three-quarters of gas supplies came from Russia, but after the outbreak of the war in Ukraine, the government gradually stopped these supplies. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits.
Transportation infrastructure
The road network in the Czech Republic is long, out of which are motorways. The speed limit is within towns, outside of towns and on motorways.
The Czech Republic has one of the densest rail networks in the world. The country has of lines. Of that number, is electrified, are single-line tracks and are double and multiple-line tracks. The length of tracks is , out of which is electrified.
České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed is limited to .
Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody.
Russia (via pipelines through Ukraine) and, to a lesser extent, Norway (via pipelines through Germany) supply the Czech Republic with liquid and natural gas.
Communications and IT
The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This and later privatization of Český Telecom helped drive down prices.
On 1 July 2006, Český Telecom was acquired by the globalized, Spain-owned Telefónica group and adopted the new name Telefónica O2 Czech Republic. VDSL and ADSL2+ are offered in many variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining popularity, with higher download speeds ranging from 50 Mbit/s to 1 Gbit/s.
Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast led by Pavel Baudiš bought rival AVG for US$1.3 billion, together at the time, these companies had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share.
Tourism
Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population.
Guidebooks and tourists have reported overcharging by taxi drivers and pickpocketing problems, mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a "safe" city. The Czech Republic's crime rate is described by the United States State Department as "low".
One of the tourist attractions in the Czech Republic is the Lower Vítkovice (Dolní Vítkovice) district, a former industrial area in Ostrava.
The Czech Republic has 16 UNESCO World Heritage Sites, three of which are transnational, and a further 14 sites are on the tentative list.
Architectural heritage is an object of interest to visitors – it includes castles and châteaux from different historical epochs, such as Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, as well as many monasteries.
Away from the towns, areas such as Bohemian Paradise, the Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. The country also hosts a number of beer festivals.
The country is also known for its various museums. Puppetry and marionette exhibitions are popular, with a number of puppet festivals held throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country.
Science
The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a developed, innovation-oriented scientific community supported by the government, industry, and leading universities. Czech scientists are embedded members of the global scientific community. They contribute annually to multiple international academic journals and collaborate with their colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020 and 2021, up from 26th in 2019.
Historically, the Czech lands, especially Prague, have been a seat of scientific discovery going back to early modern times, hosting figures such as Tycho Brahe and Johannes Kepler. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences; this organization is known today as the Czech Academy of Sciences. The Czech lands also have a well-established history of scientists, including Nobel laureates such as the biochemists Gerty and Carl Ferdinand Cori, the chemist Jaroslav Heyrovský and the physicist Peter Grünberg, as well as the chemists Otto Wichterle and Antonín Holý, the physicist Ernst Mach and the physiologist Jan Evangelista Purkyně. Sigmund Freud, the founder of psychoanalysis, was born in Příbor; Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno; and the logician and mathematician Kurt Gödel was born in Brno.
Historically, most scientific research was recorded in Latin, and from the 18th century onwards increasingly in German and later in Czech. It was archived in libraries supported and managed by religious orders and other institutions, as evidenced by historic locations of international renown such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish their work, and studies of its history, in English.
Important current scientific institutions include the aforementioned Czech Academy of Sciences, the CEITEC institute in Brno, and the HiLASE and ELI Beamlines centers in Dolní Břežany, the latter housing one of the most powerful lasers in the world. Prague is the seat of the European Union Agency for the Space Programme (formerly the GSA), which operates the European satellite navigation system Galileo.
Demographics
The total fertility rate (TFR) in 2020 was estimated at 1.71 children per woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. The life expectancy in 2021 was estimated to be 79.5 years (76.55 years male, 82.61 years female). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently.
According to the results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Vietnamese (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared a combination of two nationalities (3.6% a combination of Czech and another nationality). As nationality was an optional item, a number of people left this field blank (31.6%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Trans-Olza region.
There were 658,564 foreigners residing in the country in 2021, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%).
The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was nearly annihilated by Nazi Germany during the Holocaust. There were approximately 3,900 Jews in the Czech Republic in 2021. The former Czech prime minister Jan Fischer is of Jewish faith.
Largest cities
Religion
About 75% to 79% of residents of the Czech Republic do not declare having any religion or faith in surveys, and the proportion of convinced atheists (30%) is the third highest in the world behind those of China (47%) and Japan (31%). The Czech people have been historically characterized as "tolerant and even indifferent towards religion". The religious identity of the country has changed drastically since the first half of the 20th century, when more than 90% of Czechs were Christians.
Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant Reformers. Taborites and Utraquists were Hussite groups. Towards the end of the Hussite Wars, the Utraquists changed sides and allied with the Catholic Church. Following the joint Utraquist–Catholic victory, Utraquism was accepted by the Catholic Church as a distinct form of Christianity to be practiced in Bohemia, while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians followed the teachings of Martin Luther, especially Sudeten Germans. In the wake of the Reformation, Utraquist Hussites took a renewed, increasingly anti-Catholic stance, while some of the defeated Hussite factions were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism, even the Utraquist Hussites. Czechs subsequently became more wary of and pessimistic about religion as such. A history of resistance to the Catholic Church followed. It suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era and continues to lose them amid the ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620. Prior to the Holocaust, the Czech Republic had a sizable Jewish community of around 100,000. There are many historically important and culturally relevant synagogues in the Czech Republic, such as Europe's oldest active synagogue, the Old New Synagogue, and the second largest synagogue in Europe, the Great Synagogue in Plzeň. The Holocaust decimated Czech Jewry, and the Jewish population as of 2021 was about 3,900.
According to the 2011 census, 34% of the population stated they had no religion, 10.3% were Catholic, 0.8% were Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion, both denominational and non-denominational (of which 863 people identified as Pagan). 45% of the population did not answer the question about religion. From 1991 to 2001 and further to 2011, adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated at 20,000, representing 0.2% of the population.
The proportion of religious believers varies significantly across the country, from 55% in Zlín Region to 16% in Ústí nad Labem Region.
Education and health care
Education in the Czech Republic is compulsory for nine years and citizens have access to a free-tuition university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a "relatively equal" educational system in comparison with other countries in Europe. Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, Czech Technical University, Palacký University, Academy of Performing Arts and University of Economics.
The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th (positioned behind Denmark and ahead of South Korea).
Health care in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro health consumer index, a comparison of healthcare in Europe, Czech healthcare ranked 13th, behind Sweden and two positions ahead of the United Kingdom.
Culture
Art
The Venus of Dolní Věstonice is a treasure of prehistoric art. Theodoric of Prague was a painter in the Gothic era who decorated Karlštejn Castle. The Baroque era produced Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs and Petr Brandl, as well as the sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the Romantic movement. In the second half of the 19th century, the leading figures were the so-called "National Theatre generation": the sculptor Josef Václav Myslbek and the painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais and Julius Mařák. At the end of the century came a wave of Art Nouveau, with Alfons Mucha as its main representative. He is known for his Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of Czechs and other Slavs.
The Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another Art Nouveau painter. The 20th century brought an avant-garde revolution, in the Czech lands mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta and Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. Internationally, however, it was mainly František Kupka, a pioneer of abstract painting, who made his mark. The illustrators and cartoonists Josef Lada, Zdeněk Burian and Emil Orlík gained fame in the first half of the 20th century. Art photography became a new field, represented by František Drtikol and Josef Sudek, and later Jan Saudek and Josef Koudelka.
The Czech Republic is known for its individually made, mouth-blown, and decorated Bohemian glass.
Architecture
The earliest preserved stone buildings in Bohemia and Moravia date back to the time of the Christianization in the 9th and 10th centuries. Since the Middle Ages, the Czech lands have been using the same architectural styles as most of Western and Central Europe. The oldest still standing churches were built in the Romanesque style. During the 13th century, it was replaced by the Gothic style. In the 14th century, Emperor Charles IV invited architects from France and Germany, Matthias of Arras and Peter Parler, to his court in Prague. During the Middle Ages, some fortified castles were built by the king and aristocracy, as well as some monasteries.
The Renaissance style penetrated the Bohemian Crown in the late 15th century when the older Gothic style started to be mixed with Renaissance elements. An example of pure Renaissance architecture in Bohemia is the Queen Anne's Summer Palace, which was situated in the garden of Prague Castle. Evidence of the general reception of the Renaissance in Bohemia, involving an influx of Italian architects, can be found in spacious chateaus with arcade courtyards and geometrically arranged gardens. Emphasis was placed on comfort, and buildings that were built for entertainment purposes also appeared.
In the 17th century, the Baroque style spread throughout the Crown of Bohemia.
In the 18th century, Bohemia produced an architectural peculiarity – the Baroque Gothic style, a synthesis of the Gothic and Baroque styles.
The 19th century was marked by revival architectural styles. Some churches were restored to their presumed medieval appearance, and new buildings were constructed in the Neo-Romanesque, Neo-Gothic and Neo-Renaissance styles. At the turn of the 19th and 20th centuries, a new art style appeared in the Czech lands – Art Nouveau.
Bohemia contributed an unusual style to the world's architectural heritage when Czech architects attempted to transpose the Cubism of painting and sculpture into architecture.
Between World Wars I and II, Functionalism, with its sober, progressive forms, took over as the main architectural style.
After World War II and the Communist coup in 1948, art in Czechoslovakia became Soviet-influenced. The Czechoslovak avant-garde artistic movement known as the Brussels style arose during the political liberalization of Czechoslovakia in the 1960s. Brutalism dominated in the 1970s and 1980s.
The Czech Republic has not shied away from more modern trends in international architecture; examples include the Dancing House (Tančící dům) and the Golden Angel building in Prague, and the Congress Centre in Zlín.
Influential Czech architects include Peter Parler, Benedikt Rejt, Jan Santini Aichel, Kilian Ignaz Dientzenhofer, Josef Fanta, Josef Hlávka, Josef Gočár, Pavel Janák, Jan Kotěra, Věra Machoninová, Karel Prager, Karel Hubáček, Jan Kaplický, Eva Jiřičná or Josef Pleskot.
Literature
The literature from the area of today's Czech Republic was mostly written in Czech, but also in Latin, German and even Old Church Slavonic. Franz Kafka, although a competent user of Czech, wrote in his mother tongue, German. His works include The Trial and The Castle.
In the second half of the 13th century, the royal court in Prague became one of the centers of German Minnesang and courtly literature. German-language literature from the Czech lands was also prominent in the first half of the 20th century.
Bible translations played a role in the development of Czech literature. The oldest Czech translation of the Psalms originated in the late 13th century and the first complete Czech translation of the Bible was finished around 1360. The first complete printed Czech Bible was published in 1488. The first complete Czech Bible translation from the original languages was published between 1579 and 1593. The Codex Gigas from the 12th century is the largest extant medieval manuscript in the world.
Czech-language literature can be divided into several periods: the Middle Ages; the Hussite period; the Renaissance humanism; the Baroque period; the Enlightenment and Czech reawakening in the first half of the 19th century, modern literature in the second half of the 19th century; the avant-garde of the interwar period; the years under Communism; and the Czech Republic.
The antiwar comedy novel The Good Soldier Švejk is the most translated Czech book in history.
The international literary award the Franz Kafka Prize is awarded in the Czech Republic.
The Czech Republic has the densest network of libraries in Europe.
Czech literature and culture played a role on at least two occasions when Czechs lived under oppression and political activity was suppressed. On both of these occasions, in the early 19th century and then again in the 1960s, the Czechs used their cultural and literary effort to strive for political freedom, establishing a confident, politically aware nation.
Music
The musical tradition of the Czech lands arose from the first church hymns, the earliest evidence of which dates to the turn of the 10th and 11th centuries. Some early pieces of Czech music include two chorales which in their time served as anthems: "Lord, Have Mercy on Us" and the hymn "Saint Wenceslaus", or "Saint Wenceslaus Chorale". The authorship of the anthem "Lord, Have Mercy on Us" is ascribed by some historians to Saint Adalbert of Prague (sv. Vojtěch), bishop of Prague, who lived between 956 and 997.
The wealth of musical culture lies in the classical music tradition of all historical periods, especially the Baroque, Classical, Romantic and modern classical eras, and in the traditional folk music of Bohemia, Moravia and Silesia. Since the early era of art music, Czech musicians and composers have been influenced by the folk music and dance of the region.
Czech music has been influential in both the European and worldwide context, several times helping to shape, or even define, a newly arriving era in musical art, above all the Classical era, as well as contributing original approaches in Baroque, Romantic and modern classical music. Notable Czech musical works include The Bartered Bride, the New World Symphony, Sinfonietta and Jenůfa.
Among the country's music festivals is the Prague Spring International Music Festival of classical music, a permanent showcase for performing artists, symphony orchestras and chamber music ensembles from around the world.
Theatre
The roots of Czech theatre can be found in the Middle Ages, especially in the cultural life of the Gothic period. In the 19th century, the theatre played a role in the national awakening movement, and later, in the 20th century, it became a part of modern European theatre art. An original Czech cultural phenomenon came into being at the end of the 1950s: the project Laterna magika, which produced shows combining theatre, dance and film in a poetic manner and is considered the first multimedia art project in an international context.
A notable drama is Karel Čapek's play R.U.R., which introduced the word "robot".
The country has a tradition of puppet theater. In 2016, Czech and Slovak Puppetry was included on the UNESCO Intangible Cultural Heritage Lists.
Film
The tradition of Czech cinematography started in the second half of the 1890s. Peaks of production in the era of silent film include the historical drama The Builder of the Temple and the social and erotic drama Erotikon, directed by Gustav Machatý. The early Czech sound film era was productive, above all in mainstream genres, with the comedies of Martin Frič and Karel Lamač. Czech dramatic films were also sought after internationally.
Hermína Týrlová was a prominent Czech animator, screenwriter, and film director. She was often called the mother of Czech animation. Over the course of her career, she produced over 60 animated children's short films using puppets and the technique of stop motion animation.
Before the German occupation, in 1933, the filmmaker and animator Irena Dodalová established the first Czech animation studio, "IRE Film", with her husband Karel Dodal.
After the period of Nazi occupation and the early communist official dramaturgy of socialist realism in film at the turn of the 1940s and 1950s, with few exceptions such as Krakatit or Men Without Wings (which received an award in 1946), a new era of Czech film began with the animated work of Karel Zeman, whose 1958 feature, shown in anglophone countries as The Fabulous World of Jules Verne, combined acted drama with animation, and of Jiří Trnka, the founder of the modern puppet film. This began a tradition of animated films (the Mole series and others).
In the 1960s, the hallmarks of the Czechoslovak New Wave films were improvised dialogue, black and absurd humor and the casting of non-actors. Directors tried to preserve a natural atmosphere without refinement or artificial arrangement of scenes. A distinctive personality of the 1960s and early 1970s, with an original style and psychological depth, was František Vláčil. Another internationally known author is Jan Švankmajer, a filmmaker and artist whose work spans several media. He is a self-labeled surrealist known for his animations and features.
The Barrandov Studios in Prague are the largest film studios with film locations in the country. Filmmakers have come to Prague to shoot scenery no longer found in Berlin, Paris and Vienna. The city of Karlovy Vary was used as a location for the 2006 James Bond film Casino Royale.
The Czech Lion is the highest Czech award for film achievement. Karlovy Vary International Film Festival is one of the film festivals that have been given competitive status by the FIAPF. Other film festivals held in the country include Febiofest, Jihlava International Documentary Film Festival, One World Film Festival, Zlín Film Festival and Fresh Film Festival.
Media
Czech journalists and media enjoy a degree of freedom. There are restrictions against writing in support of Nazism, racism or violating Czech law. The Czech press was ranked as the 40th most free press in the World Freedom Index by Reporters Without Borders in 2021. Radio Free Europe/Radio Liberty has its headquarters in Prague.
The national public television service is Czech Television, which operates the 24-hour news channel ČT24 and the news website ct24.cz. As of 2020, Czech Television is the most watched broadcaster, followed by the private channels TV Nova and Prima TV. However, TV Nova has the most watched main news program and prime-time program. Other public services include Czech Radio and the Czech News Agency.
The best-selling daily national newspapers in 2020/21 are Blesk (average 703,000 daily readers), Mladá fronta DNES (average 461,000 daily readers), Právo (average 182,000 daily readers), Lidové noviny (average 163,000 daily readers) and Hospodářské noviny (average 162,000 daily readers).
Most Czechs (87%) read their news online, with Seznam.cz, iDNES.cz, Novinky.cz, iPrima.cz and Seznam Zprávy.cz being the most visited as of 2021.
Cuisine
Czech cuisine is marked by an emphasis on meat dishes with pork, beef, and chicken. Goose, duck, rabbit, and venison are served. Fish is less common, with the occasional exception of fresh trout and carp, which is served at Christmas.
There is a variety of local sausages, wurst, pâtés, and smoked and cured meats. Czech desserts include a variety of whipped cream, chocolate, and fruit pastries and tarts, crêpes, creme desserts and cheese, poppy-seed-filled and other types of traditional cakes such as buchty, koláče and štrúdl.
Czech beer has a history extending more than a millennium; the earliest known brewery existed in 993. Today the Czech Republic has the highest beer consumption per capita in the world. The pilsner style beer (pils) originated in Plzeň, where the world's first blond lager Pilsner Urquell is still produced. It has served as the inspiration for more than two-thirds of the beer produced in the world today. The city of České Budějovice has similarly lent its name to its beer, known as Budweiser Budvar.
The South Moravian region has been producing wine since the Middle Ages; about 94% of vineyards in the Czech Republic are Moravian. Aside from beer, slivovitz and wine, the Czech Republic also produces two liquors, Fernet Stock and Becherovka. Kofola is a non-alcoholic domestic cola soft drink which competes with Coca-Cola and Pepsi.
Sport
The two leading sports in the Czech Republic are football and ice hockey. The most watched sporting events are the Olympic and World Championship tournaments in ice hockey. Other popular sports include tennis, volleyball, floorball, golf, ball hockey, athletics, basketball and skiing.
The country has won 15 gold medals in the Summer Olympics and nine in the Winter Games. (See Olympic history.) The Czech ice hockey team won the gold medal at the 1998 Winter Olympics and has won twelve gold medals at the World Championships, including three straight from 1999 to 2001.
Škoda Motorsport has been engaged in competition racing since 1901 and has gained a number of titles with various vehicles around the world. The MTX automobile company was formerly engaged in the manufacture of racing and formula cars, beginning in 1969.
Hiking is a popular sport. The word for 'tourist' in Czech, turista, also means 'trekker' or 'hiker'. Thanks to a more than 120-year-old tradition, hikers benefit from the Czech Hiking Markers System of trail blazing, which has been adopted by countries worldwide. There is a network of around 40,000 km of marked short- and long-distance trails crossing the whole country and all the Czech mountains.
See also
List of Czech Republic-related topics
Outline of the Czech Republic
Notes
References
Citations
General sources
Further reading
Hochman, Jiří (1998). Historical dictionary of the Czech State. Scarecrow Press.
Bryant, Chad. Prague: Belonging and the Modern City. Cambridge MA: Harvard University Press, 2021.
External links
Governmental website
Presidential website
Senate
Portal of the Public Administration
#VisitCzechia – official tourist portal of the Czech Republic
Czechia – Central Intelligence Agency: The World Factbook
Central Europe
Countries in Europe
Landlocked countries
Member states of NATO
Member states of the European Union
Member states of the United Nations
Member states of the Three Seas Initiative
Republics
Member states of the Council of Europe
States and territories established in 1993
OECD members
Czechoslovakia
Czechoslovakia (Czech and Slovak: Československo, Česko-Slovensko) was a landlocked state in Central Europe, created in 1918, when it declared its independence from Austria-Hungary. In 1938, after the Munich Agreement, the Sudetenland became part of Nazi Germany, while the country lost further territories to Hungary and Poland (Carpathian Ruthenia to Hungary and Zaolzie to Poland). Between 1939 and 1945, the state ceased to exist, as Slovakia proclaimed its independence and the remaining territories in the east became part of Hungary, while in the remainder of the Czech Lands, the German Protectorate of Bohemia and Moravia was proclaimed. In 1939, after the outbreak of World War II, former Czechoslovak President Edvard Beneš formed a government-in-exile and sought recognition from the Allies.
After World War II, Czechoslovakia was reestablished under its pre-1938 borders, with the exception of Carpathian Ruthenia, which became part of the Ukrainian SSR (a republic of the Soviet Union). The Communist Party seized power in a coup in 1948. From 1948 to 1989, Czechoslovakia was part of the Eastern Bloc with a planned economy. Its economic status was formalized in membership of Comecon from 1949 and its defense status in the Warsaw Pact of 1955. A period of political liberalization in 1968, the Prague Spring, ended violently when the Soviet Union, assisted by other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Central and Eastern Europe, Czechoslovaks peacefully deposed their communist government during the Velvet Revolution, which began on 17 November 1989 and ended 11 days later on 28 November when all of the top Communist leaders and Communist party itself resigned. On 31 December 1992, Czechoslovakia peacefully split into the two sovereign states of the Czech Republic and Slovakia.
Characteristics
Form of state
1918–1937: A democratic republic championed by Tomáš Masaryk.
1938–1939: After the annexation of the Sudetenland by Nazi Germany in 1938, the region gradually turned into a state with loosened connections among the Czech, Slovak, and Ruthenian parts. A strip of southern Slovakia and Carpathian Ruthenia was annexed by Hungary, and the Trans-Olza region was annexed by Poland.
1939–1945: The remainder of the state was dismembered and became split into the Protectorate of Bohemia and Moravia and the Slovak Republic, while the rest of Carpathian Ruthenia was occupied and annexed by Hungary. A government-in-exile continued to exist in London, supported by the United Kingdom, United States and their Allies; after the German invasion of Soviet Union, it was also recognized by the Soviet Union. Czechoslovakia adhered to the Declaration by United Nations and was a founding member of the United Nations.
1946–1948: The country was governed by a coalition government with communist ministers, including the prime minister and the minister of interior. Carpathian Ruthenia was ceded to the Soviet Union.
1948–1989: The country became a Marxist-Leninist state under Soviet domination with a command economy. In 1960, the country officially became a socialist republic, the Czechoslovak Socialist Republic. It was a satellite state of the Soviet Union.
1989–1990: Czechoslovakia formally became a federal republic comprising the Czech Socialist Republic and the Slovak Socialist Republic. In late 1989, the communist rule came to an end during the Velvet Revolution followed by the re-establishment of a democratic parliamentary republic.
1990–1992: Shortly after the Velvet Revolution, the state was renamed the Czech and Slovak Federative Republic, consisting of the Czech Republic and the Slovak Republic (Slovakia) until the peaceful dissolution on 31 December 1992.
Neighbors
Austria 1918–1938, 1945–1992
Germany (both predecessors, West Germany and East Germany, were neighbors between 1949 and 1990)
Hungary
Poland
Romania 1918–1938
Soviet Union 1945–1991
Ukraine 1991–1992 (Soviet Union member until 1991)
Topography
The country had generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and the lands of the Danube River basin.
Climate
The climate featured mild winters and mild summers, influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and the Mediterranean Sea from the south; the weather was not purely continental.
Names
1918–1938: Czechoslovak Republic (abbreviated ČSR), or Czechoslovakia, before the formalization of the name in 1920, also known as Czecho-Slovakia or the Czecho-Slovak state
1938–1939: Czecho-Slovak Republic, or Czecho-Slovakia
1945–1960: Czechoslovak Republic (ČSR), or Czechoslovakia
1960–1990: Czechoslovak Socialist Republic (ČSSR), or Czechoslovakia
1990: Czechoslovak Federative Republic (ČSFR)
1990–1992: Czech and Slovak Federative Republic (ČSFR), or Czechoslovakia
History
Origins
The area was part of the Austro-Hungarian Empire until it collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk, who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally Edvard Beneš (1884–1948).
The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded various patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life before independence. Palacký supported Austro-Slavism and worked for a reorganized federal Austrian Empire, which would protect the Slavic speaking peoples of Central Europe against Russian and German threats.
An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the Reichsrat (Austrian Parliament), from 1891 to 1893 for the Young Czech Party, and from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl.
During World War I a number of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists. The Czechoslovak National Council was the main organization that advanced the claims for a Czechoslovak state.
First Czechoslovak Republic
Formation
The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. Czechoslovakia was founded in October 1918 as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present-day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia. Its territory included some of the most industrialized regions of the former Austria-Hungary. The land consisted of modern-day Czechia, Slovakia, and a region of Ukraine called Carpathian Ruthenia.
Ethnicity
The new country was a multi-ethnic state, with Czechs and Slovaks as constituent peoples. The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle.
The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other Central European nations deported ethnic Germans, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis.
Some Jews identified themselves as Germans or Hungarians (being Jewish only by religion, not ethnicity); census figures by ethnicity could therefore sum to more than 100%.
Interwar period
During the period between the two world wars, Czechoslovakia was a democratic state. The population was generally literate and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomáš Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent.
Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the "Little Entente", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany.
Czech-Slovak relations came to be a central issue in Czechoslovak politics during the 1930s. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Slovakia became autonomous in the fall of 1938, and by mid-1939 it had become independent, with the First Slovak Republic set up as a satellite state of Nazi Germany and the far-right Slovak People's Party in power.
After 1933, Czechoslovakia remained the only democracy in central and eastern Europe.
Munich Agreement and Two-Step German Occupation
In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded control of the region in the Munich Agreement, following a policy of appeasement; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied the Sudetenland border region, effectively crippling Czechoslovak defences.
The First Vienna Award assigned a strip of southern Slovakia and Carpathian Ruthenia to Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938.
On 14 March 1939, the remainder ("rump") of Czechoslovakia was dismembered by the proclamation of the Slovak State, the next day the rest of Carpathian Ruthenia was occupied and annexed by Hungary, while the following day the German Protectorate of Bohemia and Moravia was proclaimed.
The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. Just like Jews, Poles, Serbs, and several other nations, Czechs were considered to be Untermenschen by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia, it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized.
The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement.
For the Czechs of the Protectorate Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish populations of Bohemia and Moravia (118,000 according to the 1930 census) were virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation.
Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population of the Protectorate saw a net increase of approximately 250,000 during the war years, in line with an increased birth rate.
On 6 May 1945, the US Third Army of General Patton entered Pilsen from the southwest. On 9 May 1945, Soviet Red Army troops entered Prague.
Communist Czechoslovakia
After World War II, prewar Czechoslovakia was reestablished, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary.
Following the expulsion of the ethnic German population from Czechoslovakia, parts of the former Sudetenland, especially around Krnov and the surrounding villages of the Jesenik mountain region in northeastern Czechoslovakia, were settled in 1949 by Communist refugees from Northern Greece who had left their homeland as a result of the Greek Civil War. These Greeks made up a large proportion of the town and region's population until the late 1980s/early 1990s. Although defined as "Greeks", the Greek Communist community of Krnov and the Jeseniky region actually consisted of an ethnically diverse population, including Greek Macedonians, Macedonians, Vlachs, Pontic Greeks and Turkish speaking Urums or Caucasus Greeks.
Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they would maintain the fiction of political pluralism through the existence of the National Front, except for a short period in the late 1960s (the Prague Spring) the country had no liberal democracy. Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe.
The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value. The banks also confiscated savings and bank deposits to control the amount of money in circulation. In the 1950s, Czechoslovakia experienced high economic growth (averaging 7% per year), which allowed for a substantial increase in wages and living standards, thus promoting the stability of the regime.
In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet, socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism.
In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance: this was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies.
Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization.
The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison.
During the 1980s, Czechoslovakia became one of the most tightly controlled communist regimes in the Warsaw Pact, resisting the loosening of controls introduced by Soviet leader Mikhail Gorbachev.
After 1989
In 1989, the Velvet Revolution restored democracy. This occurred around the same time as the fall of communism in Romania, Bulgaria, Hungary, East Germany and Poland.
The word "socialist" was removed from the country's full name on 29 March 1990 and replaced by "federal".
Pope John Paul II made a papal visit to Czechoslovakia on 21 April 1990, hailing it as a symbolic step of reviving Christianity in the newly-formed post-communist state.
Czechoslovakia participated in the Gulf War with a small force of 200 troops under the command of the U.S.-led coalition.
In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 31 December 1992 it formally separated into two independent countries, the Czech Republic and the Slovak Republic.
Government and politics
After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). The leader of the KSČ was de facto the most powerful person in the country during this period. Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under the umbrella of the National Front. Human rights activists and religious activists were severely repressed.
Constitutional development
Czechoslovakia had the following constitutions during its history (1918–1992):
Temporary constitution of 14 November 1918 (democratic): see History of Czechoslovakia (1918–1938)
The 1920 constitution (The Constitutional Document of the Czechoslovak Republic), democratic, in force until 1948, several amendments
The Communist 1948 Ninth-of-May Constitution
The Communist 1960 Constitution of the Czechoslovak Socialist Republic with major amendments in 1968 (Constitutional Law of Federation), 1971, 1975, 1978, and 1989 (at which point the leading role of the Communist Party was abolished). It was amended several more times during 1990–1992 (for example, 1990, name change to Czecho-Slovakia, 1991 incorporation of the human rights charter)
Heads of state and government
List of presidents of Czechoslovakia
List of prime ministers of Czechoslovakia
Foreign policy
International agreements and membership
In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, Czechoslovakia was an active participant in the Council for Mutual Economic Assistance (Comecon), the Warsaw Pact, and the United Nations and its specialized agencies, and was a signatory of the Conference on Security and Cooperation in Europe.
Administrative divisions
1918–1923: Different systems in former Austrian territory (Bohemia, Moravia, a small part of Silesia) compared to former Hungarian territory (Slovakia and Ruthenia): three lands (země) (also called district units (kraje)): Bohemia, Moravia, Silesia, plus 21 counties (župy) in today's Slovakia and three counties in today's Ruthenia; both lands and counties were divided into districts (okresy).
1923–1927: As above, except that the Slovak and Ruthenian counties were replaced by six (grand) counties ((veľ)župy) in Slovakia and one (grand) county in Ruthenia, and the numbers and boundaries of the okresy were changed in those two territories.
1928–1938: Four lands (Czech: země, Slovak: krajiny): Bohemia, Moravia-Silesia, Slovakia and Sub-Carpathian Ruthenia, divided into districts (okresy).
Late 1938 – March 1939: As above, but Slovakia and Ruthenia gained the status of "autonomous lands". Slovakia was called Slovenský štát, with its own currency and government.
1945–1948: As in 1928–1938, except that Ruthenia became part of the Soviet Union.
1949–1960: 19 regions (kraje) divided into 270 okresy.
1960–1992: 10 kraje, Prague, and (from 1970) Bratislava (capital of Slovakia); these were divided into 109–114 okresy; the kraje were abolished temporarily in Slovakia in 1969–1970 and for many purposes from 1991 in Czechoslovakia; in addition, the Czech Socialist Republic and the Slovak Socialist Republic were established in 1969 (without the word Socialist from 1990).
Population and ethnic groups
Economy
Before World War II, the economy was about the fourth largest among the industrial countries of Europe. The state was built on a strong manufacturing economy, producing cars (Škoda, Tatra), trams, aircraft (Aero, Avia), ships, ship engines (Škoda), cannons, shoes (Baťa), turbines and guns (Zbrojovka Brno). It had been the industrial workshop of the Austro-Hungarian empire. The Slovak lands relied more heavily on agriculture than the Czech lands.
After World War II, the economy was centrally planned, with command links controlled by the communist party, similarly to the Soviet Union. The large metallurgical industry was dependent on imports of iron and non-ferrous ores.
Industry: Extractive industry and manufacturing dominated the sector, including machinery, chemicals, food processing, metallurgy, and textiles. The sector was wasteful in its use of energy, materials, and labor and was slow to upgrade technology, but the country was a major supplier of high-quality machinery, instruments, electronics, aircraft, airplane engines and arms to other socialist countries.
Agriculture: Agriculture was a minor sector, but collectivized farms of large acreage and relatively efficient mode of production enabled the country to be relatively self-sufficient in the food supply. The country depended on imports of grains (mainly for livestock feed) in years of adverse weather. Meat production was constrained by a shortage of feed, but the country still recorded high per capita consumption of meat.
Foreign Trade: Exports were estimated at US$17.8 billion in 1985. Exports were machinery (55%), fuel and materials (14%), and manufactured consumer goods (16%). Imports stood at an estimated US$17.9 billion in 1985, including fuel and materials (41%), machinery (33%), and agricultural and forestry products (12%). In 1986, about 80% of foreign trade was with other socialist countries.
Exchange rate: The official, or commercial, rate was 5.4 crowns (Kčs) per US$1 in 1987. The tourist, or non-commercial, rate was Kčs 10.5 per US$1. Neither rate reflected purchasing power. The exchange rate on the black market was around Kčs 30 per US$1, which became the official rate once the currency became convertible in the early 1990s.
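As a simple worked illustration (the US$100 amount is an arbitrary example, not a figure from the source): converting US$100 would have yielded roughly Kčs 540 at the official rate, Kčs 1,050 at the tourist rate, and about Kčs 3,000 on the black market.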
Fiscal year: Calendar year.
Fiscal policy: The state was the exclusive owner of means of production in most cases. Revenue from state enterprises was the primary source of revenues followed by turnover tax. The government spent heavily on social programs, subsidies, and investment. The budget was usually balanced or left a small surplus.
Resource base
After World War II, the country was short of energy, relying on imported crude oil and natural gas from the Soviet Union, domestic brown coal, and nuclear and hydroelectric energy. Energy constraints were a major factor in the 1980s.
Transport and communications
Slightly after the foundation of Czechoslovakia in 1918, there was a lack of essential infrastructure in many areas – paved roads, railways, bridges, etc. Massive improvement in the following years enabled Czechoslovakia to develop its industry. Prague's civil airport in Ruzyně became one of the most modern terminals in the world when it was finished in 1937. Tomáš Baťa, a Czech entrepreneur and visionary, outlined his ideas in the publication "Budujme stát pro 40 milionů lidí" ("Let's Build a State for 40 Million People"), where he described the future motorway system. Construction of the first motorways in Czechoslovakia began in 1939; nevertheless, it was stopped after the German occupation during World War II.
Society
Education
Education was free at all levels and compulsory from ages 6 to 15. The vast majority of the population was literate. A highly developed system of apprenticeship training and vocational schools supplemented general secondary schools and institutions of higher education.
Religion
In 1991, 46% of the population were Roman Catholics, 5.3% were Evangelical Lutheran, 30% were Atheist, and other religions made up 17% of the country, but there were huge differences in religious practices between the two constituent republics; see Czech Republic and Slovakia.
Health, social welfare and housing
After World War II, free health care was available to all citizens. National health planning emphasized preventive medicine; factory and local health care centres supplemented hospitals and other inpatient institutions. There was a substantial improvement in rural health care during the 1960s and 1970s.
Mass media
During the era between the World Wars, Czechoslovak democracy and liberalism facilitated conditions for free publication. The most significant daily newspapers in these times were Lidové noviny, Národní listy, Český deník and Československá Republika.
During Communist rule, the mass media in Czechoslovakia were controlled by the Communist Party. Private ownership of any publication or agency of the mass media was generally forbidden, although churches and other organizations published small periodicals and newspapers. Even with this information monopoly in the hands of organizations under KSČ control, all publications were reviewed by the government's Office for Press and Information.
Sports
The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980.
Well-known football players such as Pavel Nedvěd, Antonín Panenka, Milan Baroš, Tomáš Rosický, Vladimír Šmicer or Petr Čech were all born in Czechoslovakia.
The International Olympic Committee code for Czechoslovakia is TCH, which is still used in historical listings of results.
The Czechoslovak national ice hockey team won many medals from the world championships and Olympic Games. Peter Šťastný, Jaromír Jágr, Dominik Hašek, Peter Bondra, Petr Klíma, Marián Gáborík, Marián Hossa, Miroslav Šatan and Pavol Demitra all come from Czechoslovakia.
Emil Zátopek, winner of four Olympic gold medals in athletics, is considered one of the top athletes in Czechoslovak history.
Věra Čáslavská was an Olympic gold medallist in gymnastics, winning seven gold medals and four silver medals. She represented Czechoslovakia in three consecutive Olympics.
Several accomplished professional tennis players including Jaroslav Drobný, Ivan Lendl, Jan Kodeš, Miloslav Mečíř, Hana Mandlíková, Martina Hingis, Martina Navratilova, Jana Novotna, Petra Kvitová and Daniela Hantuchová were born in Czechoslovakia.
Culture
Czech Republic
Slovakia
List of Czechs
List of Slovaks
MDŽ (International Women's Day)
Jazz in dissident Czechoslovakia
Postage stamps
Postage stamps and postal history of Czechoslovakia
Czechoslovakia stamp reused by Slovak Republic after 18 January 1939 by overprinting country and value
See also
Effects on the environment in Czechoslovakia from Soviet influence during the Cold War
Former countries in Europe after 1815
List of former sovereign states
Notes
References
Sources
Further reading
Heimann, Mary. Czechoslovakia: The State That Failed (2009).
Hermann, A. H. A History of the Czechs (1975).
Kalvoda, Josef. The Genesis of Czechoslovakia (1986).
Leff, Carol Skalnick. National Conflict in Czechoslovakia: The Making and Remaking of a State, 1918–87 (1988).
Mantey, Victor. A History of the Czechoslovak Republic (1973).
Myant, Martin. The Czechoslovak Economy, 1948–88 (1989).
Naimark, Norman, and Leonid Gibianskii, eds. The Establishment of Communist Regimes in Eastern Europe, 1944–1949 (1997) online edition
Orzoff, Andrea. Battle for the Castle: The Myth of Czechoslovakia in Europe 1914–1948 (Oxford University Press, 2009); online review
Paul, David. Czechoslovakia: Profile of a Socialist Republic at the Crossroads of Europe (1990).
Renner, Hans. A History of Czechoslovakia since 1945 (1989).
Seton-Watson, R. W. A History of the Czechs and Slovaks (1943).
Stone, Norman, and E. Strouhal, eds. Czechoslovakia: Crossroads and Crises, 1918–88 (1989).
Wheaton, Bernard, and Zdeněk Kavan. The Velvet Revolution: Czechoslovakia, 1988–1991 (1992).
Williams, Kieran, "Civil Resistance in Czechoslovakia: From Soviet Invasion to Velvet Revolution, 1968–89", in Adam Roberts and Timothy Garton Ash (eds.), Civil Resistance and Power Politics: The Experience of Non-violent Action from Gandhi to the Present (Oxford University Press, 2009).
Windsor, Philip, and Adam Roberts, Czechoslovakia 1968: Reform, Repression and Resistance (1969).
Wolchik, Sharon L. Czechoslovakia: Politics, Society, and Economics (1990).
External links
Online books and articles
U.S. Library of Congress Country Studies, "Czechoslovakia"
English/Czech: Orders and Medals of Czechoslovakia including Order of the White Lion
Czechoslovakia by Encyclopædia Britannica
Katrin Boeckh: Crumbling of Empires and Emerging States: Czechoslovakia and Yugoslavia as (Multi)national Countries, in: 1914-1918-online. International Encyclopedia of the First World War.
Maps with Hungarian-language rubrics:
Border changes after the creation of Czechoslovakia
Interwar Czechoslovakia
Czechoslovakia after Munich Agreement
Computer science

Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) and applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming.
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
History
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Etymology
Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Fein justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Philosophy
Epistemology of computer science
Despite the word "science" in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975 that computer science is an empirical discipline, likening it to an experimental science. It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.
Paradigms of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
Fields
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
Theoretical computer science
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies.
Theory of computation
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
Information and coding theory
Information theory, closely related to probability and statistics, concerns the quantification of information. It was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.
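For example, a central quantity in this framework is the entropy of a discrete source, which bounds how compactly its output can be encoded on average; the standard formulation, shown here purely for illustration, is

```latex
H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)
```

where the p(x_i) are the probabilities of the source's symbols. A fair coin toss carries an entropy of one bit, while a heavily biased coin carries less than one bit per toss, which is why its outcomes can be compressed.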
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
Data structures and algorithms
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
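As a minimal illustration of why these studies matter (an illustrative sketch, not drawn from any particular source), the Python fragment below contrasts linear search, which may inspect every element of a list, with binary search, which exploits a sorted array to halve the search range at each step and therefore needs only logarithmically many comparisons:

```python
def linear_search(items, target):
    """Check every element in turn: O(n) comparisons in the worst case."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """Repeatedly halve the search interval of a sorted list: O(log n) comparisons."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = list(range(0, 1_000_000, 2))   # 500,000 sorted even numbers
print(linear_search(data, 999_998))   # roughly 500,000 comparisons
print(binary_search(data, 999_998))   # roughly 20 comparisons
```

On a half-million-element list the difference is roughly half a million comparisons against about twenty, which is why the pairing of data structure (a sorted array) and algorithm (binary search) is studied together.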
Programming language theory and formal methods
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
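As a small, textbook-style illustration of the kind of object such methods manipulate (not tied to any particular tool), a Hoare triple states that if a precondition holds before a command executes, then a postcondition holds afterwards:

```latex
\{\, x = N \,\} \quad x := x + 1 \quad \{\, x = N + 1 \,\}
```

Proving such triples for the components of a program, and composing them, is one way formal verification establishes that software meets its specification.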
Applied computer science
Computer graphics and visualization
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Image and sound processing
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. The processing of such information is the central notion of informatics, the European view on computing, which studies information-processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. The question of the lower bound on the complexity of fast Fourier transform algorithms remains one of the unsolved problems in theoretical computer science.
Computational science, finance and engineering
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
Social computing and human–computer interaction
Social computing is an area that is concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers.
Software engineering
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it does not just deal with the creation or manufacture of new software, but also with its internal arrangement and maintenance. Examples of topics include software testing, systems engineering, technical debt and software development processes.
Artificial intelligence
Artificial intelligence (AI) aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer systems
Computer architecture and organization
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959.
Concurrent, parallel and distributed computing
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
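A minimal sketch of concurrent computations interacting, assuming Python's standard threading module purely for illustration: four threads increment a shared counter, and a lock serializes access to it so that no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Each thread runs this function concurrently with the others."""
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: the lock prevents lost updates
```

Reasoning about such interactions in general, rather than about one particular program, is what the formal models mentioned above are for.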
Computer networks
This branch of computer science studies how computers are interconnected and how data is exchanged and managed across networks of computers worldwide.
Computer security and cryptography
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
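As a small illustration of one of these primitives, the sketch below (illustrative only) uses Python's standard hashlib module to compute SHA-256 cryptographic hashes; changing a single word of the input yields a completely different digest, which is what makes such functions useful for integrity checks and digital signatures.

```python
import hashlib

message_1 = b"attack at dawn"
message_2 = b"attack at dusk"

digest_1 = hashlib.sha256(message_1).hexdigest()
digest_2 = hashlib.sha256(message_2).hexdigest()

print(digest_1)              # 64 hexadecimal characters
print(digest_2)              # differs almost entirely from digest_1
print(digest_1 == digest_2)  # False
```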
Databases and data mining
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
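A minimal sketch, using Python's built-in sqlite3 module purely for illustration (the table and its contents are made up for the example), of how a database management system lets data be stored and then retrieved declaratively through a query language rather than by hand-written search code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("Algorithms", 2009), ("Automata Theory", 1979)])

# A query language (here SQL) states what to retrieve, not how to search for it.
for title, year in conn.execute("SELECT title, year FROM books WHERE year > 2000"):
    print(title, year)   # Algorithms 2009
```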
Discoveries
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:
Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything".
All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything".
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:
move left one location;
move right one location;
read symbol at current location;
print 0 at current location;
print 1 at current location.
Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones:
sequence: first do this, then do that;
selection: IF such-and-such is the case, THEN do this, ELSE do that;
repetition: WHILE such-and-such is the case, DO this.
The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means that goto is more elementary than structured programming).
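As a minimal illustration (not taken from Böhm and Jacopini's paper), the fragment below computes a greatest common divisor using nothing beyond the three rules: statements in sequence, an if/else selection, and a while-loop repetition.

```python
def gcd(a, b):
    """Greatest common divisor via subtraction-based Euclid's algorithm,
    written with only sequence, selection and repetition."""
    while b != 0:        # repetition
        if a > b:        # selection
            a = a - b
        else:
            b = b - a
    return a             # sequence: statements executed one after another

print(gcd(48, 36))  # 12
```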
Programming paradigms
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another.
Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission-critical software programs.
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
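For instance, the sketch below (an illustrative example) computes the sum of the squares of the even numbers in a list twice in Python, a multi-paradigm language: first imperatively, with statements that mutate an accumulator, and then functionally, as a composition of expressions with no mutable state.

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: explicit statements that change program state.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n
print(total)  # 56

# Functional style: the same result expressed as composed expressions.
print(reduce(lambda acc, n: acc + n * n,
             filter(lambda n: n % 2 == 0, numbers),
             0))  # 56
```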
Research
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
Education
Computer Science, known by its near synonyms Computing and Computer Studies, has been taught in UK schools since the days of batch processing, mark-sensitive cards and paper tape, but usually to a select few students. In 1981, the BBC produced a micro-computer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds), and Computer Science for A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stages 3 and 4. In September 2014 it became an entitlement for all pupils over the age of 4.
In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science. According to a 2021 report, only 51% of high schools in the US offer computer science.
Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following.
See also
Glossary of computer science
List of computer scientists
List of computer science awards
List of pioneers in computer science
Outline of computer science
Notes
References
Further reading
Peter J. Denning. Is computer science science?, Communications of the ACM, April 2005.
Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.
External links
DBLP Computer Science Bibliography
Association for Computing Machinery
Institute of Electrical and Electronics Engineers
Creationism

Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena.
The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.
Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation."
Biblical basis
The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology.
Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design.
Types
To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with "creationists" set against "evolutionists", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. Other versions of a taxonomy of creationists were produced, and comparisons made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to show a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution vs. Creationism: An Introduction, and the NCSE website was rewritten on the basis of the book version.
The main general types are listed below.
Young Earth creationism
Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe has a similar age as the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines.
The Christian organizations Answers in Genesis (AiG), Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas and AiG's Creation Museum and Ark Encounter in Kentucky, all in the United States, were opened to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom.
Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.
Old Earth creationism
Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable.
Old Earth creationism itself comes in at least three types:
Gap creationism
Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was "without form and void." This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative, (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.
Some gap creationists expand the basic version of creationism by proposing a "primordial creation" of biological life within the "gap" of time. This is thought to be "the world that then was" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this "world that then was," which may also be associated with Lucifer's rebellion.
Day-age creationism
Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day.
The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the "days" each lasted an age). According to this view, the sequence and duration of the creation "days" may be paralleled to the scientific consensus for the age of the earth and the universe.
Progressive creationism
Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and "fully formed."
The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism.
Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views.
Philosophic and scientific creationism
Creation science
Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in "created kinds" or "baramins" due to mutations.
Neo-creationism
Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.
One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory.
Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible.
Intelligent design
Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism."
ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes.
In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions.
Geocentrism
In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system) is a description of the cosmos in which Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations, such as ancient Greece, whose astronomers assumed that the Sun, Moon, stars, and naked-eye planets circled Earth; the noteworthy systems of Aristotle (see Aristotelian physics) and Ptolemy took this form.
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, where the Sun and Moon are said to stop in the sky, and where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview.
Most contemporary creationist organizations reject such perspectives.
Omphalos hypothesis
The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older.
The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, etc..., and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable.
Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator.
Theistic evolution
Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes." According to the American Scientific Affiliation, theistic evolution proposes that God's method of creation was to design a universe in which everything would naturally evolve.
Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection.
Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; it is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution.
It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory.
From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws.
In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne (the Vatican's chief astronomer between 1978 and 2006): "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God."
While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural.
Religious views
There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism.
Bahá'í Faith
In the creation myth taught by Bahá'u'lláh, the Bahá'í Faith founder, the universe has "neither beginning nor ending," and the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences at the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but held that the capacity to form human intelligence had always existed.
Buddhism
Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning.
Major Indian Buddhist philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa consistently critiqued the Creator God views put forth by Hindu thinkers.
Christianity
Most Christians around the world accept evolution as the most likely explanation for the origins of species and do not take a literal view of the Genesis creation narrative. The United States is an exception, where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor, because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe.
Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley, who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin, who saw evolution as confirmation of his Christian beliefs despite condemnation from Church authorities for his more speculative theories. Another example is Liberal theology, which offers no creation models of its own but instead focuses on the symbolism of the beliefs and cultural environment of the time in which Genesis was written.
Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former Neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require). It appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God."
In the US, Evangelical Christians have continued to believe in a literal Genesis. Members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life.
Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length.
Many religious groups teach that God created the Cosmos, and from the days of the early Christian Church Fathers there have been both allegorical and literal interpretations of the Book of Genesis. The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. Proponents of a literal reading sometimes seek to ensure that their belief is taught in science classes, mainly in American schools; opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific.
Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.
Hinduism
Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism; according to Hindu creationists, the universe may be even older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth.
In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many successive "first men", each known as a Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72 million year period known as a manvantara, each ending with the destruction of mankind followed by a period of non-activity before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length. 1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exist for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa.
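The durations quoted above follow from the conventional Hindu units of time. The short sketch below is purely illustrative and is not part of the source text; it assumes the commonly cited unit values (a maha-yuga of 4.32 million years, a manvantara of 71 maha-yugas, a kalpa of 1,000 maha-yugas, and a maha-kalpa of 100 Brahma-years, each of 360 day-night pairs) and simply checks that they reproduce the stated figures.

```python
# Illustrative only: reproduces the durations cited above from the commonly
# quoted Hindu units of time (assumed values, not taken from this article).

MAHA_YUGA = 4_320_000                 # one maha-yuga (chatur-yuga), in years

manvantara = 71 * MAHA_YUGA           # reign of one Manu: 71 maha-yugas
kalpa = 1_000 * MAHA_YUGA             # one day of Brahma: 1,000 maha-yugas
maha_kalpa = 100 * 360 * 2 * kalpa    # 100 Brahma-years of 360 days and 360 nights

print(f"manvantara: {manvantara / 1e6:.2f} million years")    # 306.72
print(f"kalpa:      {kalpa / 1e9:.2f} billion years")         # 4.32
print(f"maha-kalpa: {maha_kalpa / 1e12:.2f} trillion years")  # 311.04
```

Running the sketch prints 306.72 million, 4.32 billion and 311.04 trillion years, matching the durations given in the paragraph above.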
Islam
Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions.
Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam.
Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for[...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another.
Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents.
There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe and with the Big Bang and Big Crunch theories.
Ahmadiyya
The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God, as opposed to the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a "guided evolution," viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community, stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur, but only with God as the One who brings it about; according to the Ahmadiyya Muslim Community, it does not occur by itself.
Judaism
For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah, and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for "world" in the Hebrew language, olam, means "hidden" (ne'elam). Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created, over the six days of Creation, in a way that reflects progression to its currently observed state, with the understanding that physical ways to verify this may eventually be identified. This approach has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Relatively old Kabbalistic sources, from well before the scientifically apparent age of the universe was first determined, are also in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan, based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he did this long before Neanderthals had even been discovered scientifically). Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work.
Some contemporary writers, such as Rabbi Gedalyah Nadel, have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago but was deliberately made to look as if it were five billion years old, e.g. by being created with ready-made fossils; the best-known exponent of this approach was Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.
Prevalence
Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.
Australia
A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God".
A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.
Brazil
A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. 89% of respondents said that creationism should be taught in schools, and 75% said that the teaching of creationism should replace the teaching of evolution in schools.
Canada
A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked: "Where did human beings come from: did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?"
In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% of Canadians said that it should not be part of the school curriculum, and 23% of Canadians were undecided.
In 2023, a Research Co. poll found that 21% of Canadians "believe God created human beings in their present form within the last 10,000 years". The poll also found that "More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province."
Europe
In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept evolution, which is taught as the prevailing scientific theory in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people.
In the UK, a 2006 poll on the "origin and development of life" asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory and 13% did not know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism."
In Italy, Education Minister Letizia Moratti wanted to remove evolution from the secondary school curriculum; after one week of massive protests, she reversed her decision.
There continue to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe released a draft report, The dangers of creationism in education, on June 8, 2007, reinforced by a further proposal to ban it in schools dated October 4, 2007.
Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. "After a deluge of protest from scientists, teachers and opposition parties" says the BBC report, Čolić's deputy made the statement, "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government."
Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed.
A June 2015 - July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic.
South Africa
A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes".
South Korea
In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously.
United States
A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States were inclined to the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings.
According to a 2014 Gallup poll, about 42% of Americans believe that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Another 31% believe that "human beings have developed over millions of years from less advanced forms of life, but God guided this process," and 19% believe that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process."
Belief in creationism is inversely correlated to education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: "By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'"
A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God.
According to a study published in Science, between 1985 and 2005 the number of adult North Americans who accept evolution declined from 45% to 40%, the number of adults who reject evolution declined from 48% to 39% and the number of people who were unsure increased from 7% to 21%. Besides the US the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%).
According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999).
In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era.
Education controversies
In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to "Teach the Controversy" in science classes have conflated science with religion.
People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory.
In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases arguing that the alternatives to scientific reasoning offered by creationists are contradicted by the consensus of the scientific community.
Criticism
Christian criticism
Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an "endeavor designed to demonstrate that religion and science can be compatible."
In his 2002 article "Intelligent Design as a Theological Problem," George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. Johnson's claim that he is speaking "of a God who acted openly and left his fingerprints on all the evidence."). Murphy argues that this view of God is incompatible with the Christian understanding of God as "the one revealed in the cross and resurrection of Christ." The basis of this theology is Isaiah 45:15, "Verily thou art a God that hidest thyself, O God of Israel, the Saviour."
Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or "empty" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5-8:
Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross.
Murphy concludes that, "Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation." For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.
The Jesuit priest George Coyne has stated that it is "unfortunate that, especially here in America, creationism has come to mean...some literal interpretation of Genesis." He argues that "...Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God."
Teaching of creationism
Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was "a kind of category mistake, as if the Bible were a theory like other theories." He also said: "My worry is creationism can end up reducing the doctrine of creation rather than enhancing it." The views of the Episcopal Church, a major American-based branch of the Anglican Communion, on teaching creationism resemble those of Williams.
The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K‐12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." However, they, as well as other "worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others."
Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article "The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?", in which they write: "Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States."
Scientific criticism
Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results.
Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected.
Organizations
See also
Biblical inerrancy
Biogenesis
Evolution of complexity
Flying Spaghetti Monster
History of creationism
Religious cosmology
Notes
References
Citations
Works cited
"Presented as a Paleontological Society short course at the annual meeting of the Geological Society of America, Denver, Colorado, October 24, 1999."
Further reading
External links
"Creationism" at the Stanford Encyclopedia of Philosophy by Michael Ruse
"How Creationism Works" at HowStuffWorks by Julia Layton
"TIMELINE: Evolution, Creationism and Intelligent Design" Focuses on major historical and recent events in the scientific and political debate
by Warren D. Allmon, Director of the Museum of the Earth
"What is creationism?" at talk.origins by Mark Isaak
"The Creation/Evolution Continuum" by Eugenie Scott
"15 Answers to Creationist Nonsense" by John Rennie, editor in chief of Scientific American magazine
"Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (July 5, 2021).
Human Timeline (Interactive) Smithsonian, National Museum of Natural History (August 2016)
Christian terminology
Creation myths
Denialism
Obsolete biology theories
Origin of life
Pseudoscience
Religious cosmologies
Theism |
5329 | https://en.wikipedia.org/wiki/History%20of%20Chad | History of Chad | Chad, officially the Republic of Chad, is a landlocked country in Central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the "Dead Heart of Africa".
Prehistory
The territory now known as Chad possesses some of the richest archaeological sites in Africa. A hominid skull more than 7 million years old, the oldest discovered anywhere in the world, was found by Michel Brunet; it has been given the name Sahelanthropus tchadensis. In 1996, Brunet had unearthed a hominid jaw which he named Australopithecus bahrelghazali and unofficially dubbed "Abel"; beryllium-based radiometric dating indicates it lived circa 3.6 million years ago.
During the 7th millennium BC, the northern half of Chad was part of a broad expanse of land, stretching from the Indus River in the east to the Atlantic Ocean in the west, in which ecological conditions favored early human settlement. Rock art of the "Round Head" style, found in the Ennedi region, has been dated to before the 7th millennium BC and, because of the tools with which the rocks were carved and the scenes they depict, may represent the oldest evidence in the Sahara of Neolithic industries. Many of the pottery-making and Neolithic activities in Ennedi date back further than any of those of the Nile Valley to the east.
In the prehistoric period, Chad was much wetter than it is today, as evidenced by large game animals depicted in rock paintings in the Tibesti and Borkou regions.
Recent linguistic research suggests that all of Africa's major language groupings south of the Sahara Desert (except Khoisan, which is not considered a valid genetic grouping anyway), i.e. the Afro-Asiatic, Nilo-Saharan and Niger–Congo phyla, originated in prehistoric times in a narrow band between Lake Chad and the Nile Valley. The origins of Chad's peoples, however, remain unclear. Several of the proven archaeological sites have been only partially studied, and other sites of great potential have yet to be mapped.
Era of Empires (AD 900–1900)
At the end of the 1st millennium AD, the formation of states began across central Chad in the sahelian zone between the desert and the savanna. For almost the next 1,000 years, these states, their relations with each other, and their effects on the peoples who lived in stateless societies along their peripheries dominated Chad's political history. Recent research suggests that indigenous Africans founded most of these states, not migrating Arabic-speaking groups, as was believed previously. Nonetheless, immigrants, Arabic-speaking or otherwise, played a significant role, along with Islam, in the formation and early evolution of these states.
Most states began as kingdoms, in which the king was considered divine and endowed with temporal and spiritual powers. All states were militaristic (or they did not survive long), but none was able to expand far into southern Chad, where forests and the tsetse fly complicated the use of cavalry. Control over the trans-Saharan trade routes that passed through the region formed the economic basis of these kingdoms. Although many states rose and fell, the most important and durable of the empires were Kanem–Bornu, Baguirmi, and Ouaddai, according to most written sources (mainly court chronicles and writings of Arab traders and travelers).
Kanem–Bornu
The Kanem Empire originated in the 9th century AD to the northeast of Lake Chad. Historians agree that the leaders of the new state were ancestors of the Kanembu people. Toward the end of the 11th century, the Sayfawa king (or mai, the title of the Sayfawa rulers) Hummay converted to Islam. In the following century the Sayfawa rulers expanded southward into Kanem, where their first capital, Njimi, was to rise. Kanem's expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (c. 1221–1259).
By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Finally, around 1396 the Bulala invaders forced Mai Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri, and a new capital, Ngazargamu, was founded.
Kanem–Bornu peaked during the reign of the outstanding statesman Mai Idris Aluma (c. 1571–1603). Aluma is remembered for his military skills, administrative reforms, and Islamic piety. The administrative reforms and military brilliance of Aluma sustained the empire until the mid-17th century, when its power began to fade. By the early 19th century, Kanem–Bornu was clearly an empire in decline, and in 1808 Fulani warriors conquered Ngazargamu. Bornu survived, but the Sayfawa dynasty ended in 1846 and the Empire itself fell in 1893.
Baguirmi and Ouaddai
The Kingdom of Baguirmi, located southeast of Kanem-Bornu, was founded in the late 15th or early 16th century, and adopted Islam in the reign of Abdullah IV (1568-98). Baguirmi was in a tributary relationship with Kanem–Bornu at various points in the 17th and 18th centuries, then to Ouaddai in the 19th century. In 1893, Baguirmi sultan Abd ar Rahman Gwaranga surrendered the territory to France, and it became a French protectorate.
The Ouaddai Kingdom, east of Kanem–Bornu, was established in the early 16th century by Tunjur rulers. In the 1630s, Abd al Karim invaded and established an Islamic sultanate. Among its most impactful rulers over the next three centuries were Muhammad Sabun, who controlled a new trade route to the north and established a currency during the early 19th century, and Muhammad Sharif, whose military campaigns in the mid 19th century fended off an assimilation attempt from Darfur, conquered Baguirmi, and successfully resisted French colonization. However, Ouaddai lost its independence to France after a war from 1909 to 1912.
Colonialism (1900–1940)
The French first invaded Chad in 1891, establishing their authority through military expeditions primarily against the Muslim kingdoms. The decisive colonial battle for Chad was fought on April 22, 1900 at the Battle of Kousséri between forces of French Major Amédée-François Lamy and forces of the Sudanese warlord Rabih az-Zubayr. Both leaders were killed in the battle.
In 1905, administrative responsibility for Chad was placed under a governor-general stationed at Brazzaville, capital of French Equatorial Africa (FEA). Chad did not have a separate colonial status until 1920, when it was placed under a lieutenant-governor stationed in Fort-Lamy (today N'Djamena).
Two fundamental themes dominated Chad's colonial experience with the French: an absence of policies designed to unify the territory and an exceptionally slow pace of modernization. In the French scale of priorities, the colony of Chad ranked near the bottom, and the French came to perceive Chad primarily as a source of raw cotton and untrained labour to be used in the more productive colonies to the south.
Throughout the colonial period, large areas of Chad were never governed effectively: in the huge BET Prefecture, the handful of French military administrators usually left the people alone, and in central Chad, French rule was only slightly more substantive. In reality, France managed to govern effectively only the south.
Decolonization (1940–1960)
During World War II, Chad was the first French colony to rejoin the Allies (August 26, 1940), after the defeat of France by Germany. Under the administration of Félix Éboué, France's first black colonial governor, a military column, commanded by Colonel Philippe Leclerc de Hauteclocque, and including two battalions of Sara troops, moved north from N'Djamena (then Fort Lamy) to engage Axis forces in Libya, where, in partnership with the British Army's Long Range Desert Group, they captured Kufra. On 21 January 1942, N'Djamena was bombed by a German aircraft.
After the war ended, local parties started to develop in Chad. The first to emerge was the radical Chadian Progressive Party (PPT) in February 1947, initially headed by Panamanian-born Gabriel Lisette, but from 1959 headed by François Tombalbaye. The more conservative Chadian Democratic Union (UDT) was founded in November 1947 and represented French commercial interests and a bloc of traditional leaders composed primarily of Muslim and Ouaddaïan nobility. The confrontation between the PPT and UDT was more than simply ideological; it represented different regional identities, with the PPT representing the Christian and animist south and the UDT the Islamic north.
The PPT won the May 1957 pre-independence elections thanks to a greatly expanded franchise, and Lisette led the government of the Territorial Assembly until he lost a confidence vote on 11 February 1959. After a referendum on territorial autonomy on 28 September 1958, French Equatorial Africa was dissolved, and its four constituent states – Gabon, Congo (Brazzaville), the Central African Republic, and Chad – became autonomous members of the French Community from 28 November 1958. Following Lisette's fall in February 1959 the opposition leaders Gontchome Sahoulba and Ahmed Koulamallah could not form a stable government, so the PPT was again asked to form an administration - which it did under the leadership of François Tombalbaye on 26 March 1959. On 12 July 1960 France agreed to Chad becoming fully independent. On 11 August 1960, Chad became an independent country and François Tombalbaye became its first president.
The Tombalbaye era (1960–1975)
Two of the most prominent aspects of Tombalbaye's rule were his authoritarianism and his distrust of democracy. Already in January 1962 he banned all political parties except his own PPT, and started immediately concentrating all power in his own hands. His treatment of opponents, real or imagined, was extremely harsh, filling the prisons with thousands of political prisoners.
Even worse was his constant discrimination against the central and northern regions of Chad, where the southern Chadian administrators came to be perceived as arrogant and incompetent. This resentment at last exploded in a tax revolt on September 2, 1965 in the Guéra Prefecture, causing 500 deaths. The following year saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to militarily oust Tombalbaye and end Southern dominance. It was the start of a bloody civil war.
Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. More fortunate was his choice to break with the French and seek friendly ties with the Libyan leader Gaddafi, taking away the rebels' principal source of supplies.
But while he had achieved some success against the rebels, Tombalbaye started behaving more and more irrationally and brutally, continuously eroding his support among the southern elites, who dominated all key positions in the army, the civil service and the ruling party. As a consequence, on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup.
Military rule (1975–1978)
The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new junta.
The new military leaders were unable to retain for long the popularity that they had gained through their overthrow of Tombalbaye. Malloum proved himself unable to cope with the FROLINAT and in the end decided his only chance was to co-opt some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister.
Civil war (1979-1982)
Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene.
A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure, which led to Goukouni's subsequent call for the complete withdrawal of external forces.
The Habré era (1982–1990)
Libya's partial withdrawal to the Aouzou Strip in northern Chad cleared the way for Habré's forces to enter N'Djamena in June 1982. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict.
Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad.
Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou strip, effectively ending Libyan occupation.
The Idriss Déby era (1990–2021)
Rise to power
However, rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N’Djamena. After 3 months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president.
During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development, MDD, National Revival Committee for Peace and Democracy (CSNPD), Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime.
However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise, and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF), led by former Kette ally Laokein Barde, and the Democratic Front for Renewal (FDR), as well as a reformulated MDD, clashed with government forces from 1994 to 1995.
Multiparty elections
Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougue (leader of the 1975 coup against Tombalbaye). Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. International observers noted numerous serious irregularities in presidential and legislative election proceedings.
By mid-1997 the government signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most civilians.
Since October 1998, Chadian Movement for Justice and Democracy (MDJT) rebels, led by Youssuf Togoimi until his death in September 2002, have skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition has emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a small-scale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000.
Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transport of Chadian crude through a 1000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties keep their commitments. These "unique" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001.
Déby won a flawed 63% first-round victory in May 2001 presidential elections after legislative elections were postponed until spring 2002. Six opposition leaders, having accused the government of fraud, were arrested (twice), and one opposition party activist was killed following the announcement of election results. However, despite claims of government corruption, favoritism towards Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy.
In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed. A number of border incidents led to the Chadian-Sudanese War.
Oil production and military improvement
Chad became an oil producer in 2003. In order to avoid the resource curse and corruption, elaborate plans sponsored by the World Bank were made: they ensured transparency in payments and required that 80% of the money from oil exports be spent on five priority development sectors, the two most important of these being education and healthcare. However, money started being diverted towards the military even before the civil war broke out. In 2006, when the civil war escalated, Chad abandoned the previous World Bank-sponsored economic plans and added "national security" as a priority development sector; money from this sector was used to improve the military. During the civil war, more than 600 million dollars were used to buy fighter jets, attack helicopters, and armored personnel carriers.
Chad earned between 10 and 11 billion dollars from oil production, and an estimated 4 billion dollars were invested in the army.
War in the East
The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the "common enemy," which the Chadian government sees as the Rally for Democracy and Liberty (RDL) militants (Chadian rebels backed by the Sudanese government) and Sudanese militiamen. Militants have attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of western Sudan currently claim asylum in eastern Chad. Chadian president Idriss Déby accuses Sudanese President Omar Hasan Ahmad al-Bashir of trying to "destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad."
An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels, as reported by every news source other than CNN, or three hundred rebels. The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denied any Sudanese involvement: "We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs." This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian air force into Sudanese airspace, which the Chadian government denies.
An attack on N'Djamena was defeated on April 13, 2006 in the Battle of N'Djamena. The President on national radio stated that the situation was under control, but residents, diplomats and journalists reportedly heard shots of weapons fire.
On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group Rally of Democratic Forces had captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government.
International orphanage scandal
Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home on March 14, 2008, after nearly five months. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region.
Rebel attack on N'Djamena
On Friday, February 1, 2008, rebels of an opposition alliance led by Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who had been his chief of staff, attacked the Chadian capital of N'Djamena, even surrounding the Presidential Palace. Idriss Déby and government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. The UN has said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accuse Déby of corruption and embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appears to be a power struggle within the elite that has long controlled Chad. The French government believes that the opposition has regrouped east of the capital. Déby has blamed Sudan for the unrest in Chad.
Regional interventionism
During the Déby era, Chad intervened in conflicts in Mali, Central African Republic, Niger and Nigeria.
In 2013, Chad sent 2000 men from its military to help France in Operation Serval during the Mali War. Later in the same year Chad sent 850 troops to Central African Republic to help peacekeeping operation MISCA, those troops withdrew in April 2014 after allegations of human rights violations.
During the Boko Haram insurgency, Chad repeatedly sent troops to assist the fight against Boko Haram in Niger and Nigeria.
In August 2018, rebel fighters of the Military Command Council for the Salvation of the Republic (CCMSR) attacked government forces in northern Chad. Chad experienced threats from jihadists fleeing the Libyan conflict. Chad had been an ally of the West in the fight against Islamist militants in West Africa.
In January 2019, after 47 years, Chad restored diplomatic relations with Israel. It was announced during a visit to N’Djamena by Israeli Prime Minister Benjamin Netanyahu.
After Idriss Déby (2021–present)
In April 2021, Chad's army announced that President Idriss Déby had died of his injuries following clashes with rebels in the north of the country. Déby had ruled the country for more than 30 years, since 1990. It was also announced that a military council led by Déby's son, Mahamat Idriss Déby, a 37-year-old four-star general, would govern for the next 18 months.
See also
2010 Sahel famine
History of Africa
List of heads of government of Chad
List of heads of state of Chad
List of human evolution fossils
Politics of Chad
Neolithic Subpluvial
Further reading
Gibbons, Ann. The First Human: The Race to Discover our Earliest Ancestor. Anchor Books (2007).
References
External links
The Library of Congress - A Country Study: Chad
Chad
5330 | https://en.wikipedia.org/wiki/Geography%20of%20Chad | Geography of Chad | Chad is one of the 47 landlocked countries in the world and is located in North Central Africa, measuring , nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people.
Chad has four climatic zones. The northernmost Saharan zone averages less than of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between , with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between .
The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to .
Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1,350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad.
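As a small arithmetic check of the shrinkage figures quoted above (about 25,000 square kilometers in 1963 versus roughly 1,350 square kilometers today), the snippet below computes the fractional loss of surface area; it uses only the two numbers given in the paragraph.

```python
# Fractional loss of Lake Chad's surface area, from the figures above.
AREA_1963_KM2 = 25_000
AREA_RECENT_KM2 = 1_350

shrinkage = 1 - AREA_RECENT_KM2 / AREA_1963_KM2
print(f"Lake Chad has lost about {shrinkage:.0%} of its 1963 surface area.")
```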
Geographical placement
Located in north-central Africa, Chad stretches for about 1,800 kilometers from its northernmost point to its southern boundary. Except in the far northwest and south, where its borders converge, Chad's average width is about 800 kilometers. Its area of 1,284,000 square kilometers is roughly equal to the combined areas of Idaho, Wyoming, Utah, Nevada, and Arizona. Chad's neighbors include Libya to the north, Niger and Nigeria to the west, Sudan to the east, Central African Republic to the south, and Cameroon to the southwest.
Chad exhibits two striking geographical characteristics. First, the country is landlocked. N'Djamena, the capital, is located more than 1,100 kilometers northeast of the Atlantic Ocean; Abéché, a major city in the east, lies 2,650 kilometers from the Red Sea; and Faya-Largeau, a much smaller but strategically important center in the north, is in the middle of the Sahara Desert, 1,550 kilometers from the Mediterranean Sea. These vast distances from the sea have had a profound impact on Chad's historical and contemporary development.
The second noteworthy characteristic is that the country borders on very different parts of the African continent: North Africa, with its Islamic culture and economic orientation toward the Mediterranean Basin; and West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies.
Chad also borders Northeast Africa, oriented toward the Nile Valley and the Red Sea region, and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity.
Land
Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger. The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains.
Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible.
North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level.
The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion.
Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system.
Water systems
Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri.
Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year.
The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta.
The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95% of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season.
Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast.
Climate
The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass.
During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the "intertropical convergence zone". At the height of the rainy season, the front may reach as far as Kanem Prefecture. By the middle of the dry season, the intertropical convergence zone moves south of Chad, taking the rain with it. This weather system contributes to the formation of three major regions of climate and vegetation.
Saharan region
The Saharan region covers roughly the northern half of the country, including Borkou-Ennedi-Tibesti Prefecture along with the northern parts of Kanem, Batha, and Biltine prefectures. Much of this area receives only traces of rain during the entire year; at Faya-Largeau, for example, annual rainfall averages less than , and there are nearly 3800 hours of sunshine. Scattered small oases and occasional wells provide water for a few date palms or small plots of millet and garden crops.
In much of the north, the average daily maximum temperature is about during January, the coolest month of the year, and about during May, the hottest month. On occasion, strong winds from the northeast produce violent sandstorms. In northern Biltine Prefecture, a region called the Mortcha plays a major role in animal husbandry. Dry for eight months of the year, it receives or more of rain, mostly during July and August.
A carpet of green springs from the desert during this brief wet season, attracting herders from throughout the region who come to pasture their cattle and camels. Because very few wells and springs have water throughout the year, the herders leave with the end of the rains, turning over the land to the antelopes, gazelles, and ostriches that can survive with little groundwater. Northern Chad averages over 3500 hours of sunlight per year, the south somewhat less.
Sahelian region
The semiarid sahelian zone, or Sahel, forms a belt about wide that runs from Lac and Chari-Baguirmi prefectures eastward through Guéra, Ouaddaï, and northern Salamat prefectures to the Sudanese frontier. The climate in this transition zone between the desert and the southern sudanian zone is divided into a rainy season (from June to September) and a dry period (from October to May).
In the northern Sahel, thorny shrubs and acacia trees grow wild, while date palms, cereals, and garden crops are raised in scattered oases. Outside these settlements, nomads tend their flocks during the rainy season, moving southward as forage and surface water disappear with the onset of the dry part of the year. The central Sahel is characterized by drought-resistant grasses and small woods. Rainfall is more abundant there than in the Saharan region. For example, N'Djamena records a maximum annual average rainfall of , while Ouaddaï Prefecture receives just a bit less.
During the hot season, in April and May, maximum temperatures frequently rise above . In the southern part of the Sahel, rainfall is sufficient to permit crop production on unirrigated land, and millet and sorghum are grown. Agriculture is also common in the marshlands east of Lake Chad and near swamps or wells. Many farmers in the region combine subsistence agriculture with the raising of cattle, sheep, goats, and poultry.
Sudanian region
The humid sudanian zone includes the Sahel, the southern prefectures of Mayo-Kebbi, Tandjilé, Logone Occidental, Logone Oriental, Moyen-Chari, and southern Salamat. Between April and October, the rainy season brings between of precipitation. Temperatures are high throughout the year. Daytime readings in Moundou, the major city in the southwest, range from in the middle of the cool season in January to about in the hot months of March, April, and May.
The sudanian region is predominantly East Sudanian savanna, or plains covered with a mixture of tropical or subtropical grasses and woodlands. The growth is lush during the rainy season but turns brown and dormant during the five-month dry season between November and March. Over a large part of the region, however, natural vegetation has yielded to agriculture.
2010 drought
On 22 June, the temperature reached in Faya, breaking a record set in 1961 at the same location. Similar temperature rises were also reported in Niger, which began to enter a famine situation.
On 26 July the heat reached near-record levels over Chad and Niger.
Area
Area:
total:
1.284 million km2
land:
1,259,200 km2
water:
24,800 km2
Area - comparative:
Canada: smaller than the Northwest Territories
US: slightly more than three times the size of California
Boundaries
Land boundaries:
total:
6,406 km
border countries:
Cameroon 1,116 km, Central African Republic 1,556 km, Libya 1,050 km, Niger 1,196 km, Nigeria 85 km, Sudan 1,403 km
Coastline:
0 km (landlocked)
Maritime claims:
none (landlocked)
Elevation extremes:
lowest point:
Bodélé Depression 160 m
highest point:
Emi Koussi 3,415 m
Land use and resources
Natural resources:
petroleum, uranium, natron, kaolin, fish (Chari River, Logone River), gold, limestone, sand and gravel, salt
Land use:
arable land:
3.89%
permanent crops:
0.03%
other:
96.08% (2012)
Irrigated land:
302.7 km2 (2003)
Total renewable water resources:
43 km3 (2011)
Freshwater withdrawal (domestic/industrial/agricultural):
total:
0.88 km3/yr (12%/12%/76%)
per capita:
84.81 m3/yr (2005)
Environmental issues
Natural hazards:
hot, dry, dusty harmattan winds occur in the north; periodic droughts; locust plagues
Environment - current issues:
inadequate supplies of potable water; improper waste disposal in rural areas contributes to soil and water pollution; desertification
See also
2010 Sahel famine
Extreme points
This is a list of the extreme points of Chad, the points that are farther north, south, east or west than any other location.
Northernmost point - an unnamed location on the border with Libya, Borkou-Ennedi-Tibesti region
Easternmost point - the northern section of the Chad-Sudan border, Borkou-Ennedi-Tibesti region *
Southernmost point - unnamed location on the border with Central African Republic at a confluence in the Lébé river, Logone Oriental region
Westernmost point - unnamed location west of the town of Kanirom and immediately north of Lake Chad, Lac Region
*Note: technically Chad does not have a single easternmost point, as the easternmost section of the border follows the meridian of 24° east longitude
References
Sources
External links
Detailed map of Chad from www.izf.net |
5346 | https://en.wikipedia.org/wiki/Colloid | Colloid | A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre.
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.
Classification of colloids
Colloids can be classified by the state of the dispersed phase and of the continuous medium. A liquid or a solid dispersed in a gas forms an aerosol (for example fog or smoke); a gas dispersed in a liquid forms a foam; a liquid dispersed in another liquid forms an emulsion (for example milk); a solid dispersed in a liquid forms a sol (for example paint); and a gas, liquid, or solid dispersed in a solid forms a solid foam, a gel, or a solid sol, respectively. Two gases mix completely and so do not form a colloid.
Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols.
Hydrocolloids
Hydrocolloids describe certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Becoming effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may also interact with other chemicals, in some cases synergistically, in others antagonistically. These attributes make hydrocolloids very useful: in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if, after solubilization, the water is removed, as in the formation of films for breath strips or sausage casings, or wound-dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids, each with differences in structure, function and utility, that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber.
The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness.
Components
Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, i.e. polyurethane to 'stick' to the skin.
Colloid compared with solution
A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid consists of multiple phases, it has very different properties compared to a fully mixed, continuous solution.
Interaction between particles
The following forces play an important role in the interaction of colloid particles:
Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles.
Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases are factors affecting this interaction.
van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, and it is always present (unless the refractive indexes of the dispersed and continuous phases are matched), short-range, and attractive.
Steric forces between polymer-covered surfaces or in solutions containing non-adsorbing polymer can modulate interparticle forces, producing an additional steric repulsive force (which is predominantly entropic in origin) or an attractive depletion force between them.
Sedimentation velocity
The Earth’s gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement.
The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force:
$m_{\mathrm{A}} g = 6 \pi \eta r v$
where
$m_{\mathrm{A}}$ is the Archimedean weight of the colloidal particles,
$\eta$ is the viscosity of the suspension medium,
$r$ is the radius of the colloidal particle,
and $v$ is the sedimentation or creaming velocity.
The mass of the colloidal particle is found using:
$m_{\mathrm{A}} = V \Delta\rho$
where
$V$ is the volume of the colloidal particle, calculated using the volume of a sphere $V = \tfrac{4}{3}\pi r^{3}$,
and $\Delta\rho$ is the difference in mass density between the colloidal particle and the suspension medium.
By rearranging, the sedimentation or creaming velocity is:
$v = \dfrac{m_{\mathrm{A}} g}{6 \pi \eta r} = \dfrac{2 r^{2} \Delta\rho\, g}{9 \eta}$
There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension.
The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion.
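As a rough illustration, the rearranged Stokes expression above can be evaluated numerically. The sketch below assumes illustrative values (roughly a 1 µm diameter particle in water with a small density mismatch); these numbers are not taken from the article.

```python
# A minimal sketch evaluating the sedimentation (or creaming) velocity
# v = 2 r^2 Δρ g / (9 η) derived above. The radius, density difference and
# viscosity are illustrative assumptions, roughly a 1 µm particle in water.

def sedimentation_velocity(radius_m, delta_rho_kg_m3, viscosity_pa_s, g=9.81):
    """Terminal Stokes velocity in m/s; a negative delta_rho means creaming."""
    return 2.0 * radius_m**2 * delta_rho_kg_m3 * g / (9.0 * viscosity_pa_s)

if __name__ == "__main__":
    v = sedimentation_velocity(radius_m=0.5e-6,        # 1 µm diameter particle
                               delta_rho_kg_m3=50.0,   # assumed density mismatch
                               viscosity_pa_s=1.0e-3)  # water at ~20 °C
    print(f"v ≈ {v:.2e} m/s (≈ {v * 86_400 * 1_000:.1f} mm per day)")
```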
Preparation
There are two principal ways to prepare colloids:
Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing).
Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold.
Stabilization
The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system.
A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension.
If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming; the colloid is therefore unstable: if either of these processes occurs, the colloid will no longer be a suspension.
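To put the kT criterion above in numbers, the short sketch below computes the thermal energy at room temperature and compares it against an arbitrary, purely illustrative attraction energy; neither number comes from the article.

```python
# Numeric check of the kT stability criterion: a colloid tends to remain
# dispersed when the attractive interaction energy between two particles is
# below the thermal energy kT. The example attraction energy is hypothetical.

BOLTZMANN_K = 1.380649e-23  # J/K

def is_stable(attraction_energy_j, temperature_k=298.15):
    """Return (stable, kT): whether the attraction is weaker than kT, and kT in joules."""
    kT = BOLTZMANN_K * temperature_k
    return attraction_energy_j < kT, kT

if __name__ == "__main__":
    example_attraction = 2.0e-21  # J, hypothetical attraction well depth
    stable, kT = is_stable(example_attraction)
    print(f"kT at 298 K ≈ {kT:.2e} J")
    print(f"attraction {example_attraction:.1e} J -> "
          f"{'remains dispersed' if stable else 'tends to aggregate'}")
```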
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte.
Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from coming within the range of attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents.
A combination of the two mechanisms is also possible (electrosteric stabilization).
A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.
Destabilization
Destabilization can be accomplished by different methods:
Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can manifest in significant alteration to the zeta potential. When the magnitude of the zeta potential lies below a certain threshold, typically around ± 5mV, rapid coagulation or aggregation tends to occur.
Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer.
Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects.
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
Monitoring stability
The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.
Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases due to them clumping together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles.
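The article notes that dynamic light scattering infers particle size from how fast the particles diffuse. A common way to turn a measured diffusion coefficient into a hydrodynamic radius is the Stokes–Einstein relation; the relation and the example diffusion coefficient below are standard assumptions used for illustration rather than details given in the text.

```python
# A minimal sketch converting a measured diffusion coefficient to a
# hydrodynamic radius via the Stokes–Einstein relation r = kT / (6 π η D).
# The diffusion coefficient, temperature and viscosity are assumed values.

import math

BOLTZMANN_K = 1.380649e-23  # J/K

def hydrodynamic_radius(diffusion_m2_s, temperature_k=298.15, viscosity_pa_s=1.0e-3):
    """Hydrodynamic radius (m) from a translational diffusion coefficient."""
    return BOLTZMANN_K * temperature_k / (6.0 * math.pi * viscosity_pa_s * diffusion_m2_s)

if __name__ == "__main__":
    D = 4.0e-12  # m^2/s, hypothetical measured diffusion coefficient
    r = hydrodynamic_radius(D)
    print(f"hydrodynamic radius ≈ {r * 1e9:.1f} nm")
```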
Accelerating methods for shelf life prediction
The kinetic process of destabilisation can be rather long (up to several months or even years for some products). It is therefore often necessary for the formulator to use accelerating methods to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist of increasing the temperature to accelerate destabilisation (while remaining below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity, but also the interfacial tension in the case of non-ionic surfactants, and more generally the interaction forces inside the system. Storing a dispersion at high temperature makes it possible both to simulate real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer) and to accelerate destabilisation processes by up to 200 times.
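One common, though by no means universal, way to estimate how much a higher storage temperature speeds up a thermally activated destabilisation process is an Arrhenius-type acceleration factor. The choice of model, the activation energy and the temperatures in the sketch below are illustrative assumptions, not values from the article, and real formulations may deviate from Arrhenius behaviour.

```python
# Hedged sketch of an Arrhenius-type acceleration factor for accelerated
# shelf-life testing: the ratio of destabilisation rates at an elevated test
# temperature versus the normal storage temperature. All inputs are assumed.

import math

GAS_CONSTANT_R = 8.314  # J/(mol·K)

def acceleration_factor(ea_j_mol, t_storage_k, t_test_k):
    """Rate ratio k(test)/k(storage) under an Arrhenius model."""
    return math.exp(ea_j_mol / GAS_CONSTANT_R * (1.0 / t_storage_k - 1.0 / t_test_k))

if __name__ == "__main__":
    af = acceleration_factor(ea_j_mol=80_000.0,   # assumed activation energy
                             t_storage_k=298.15,  # ~25 °C shelf storage
                             t_test_k=328.15)     # ~55 °C accelerated test
    print(f"estimated acceleration factor ≈ {af:.0f}x")
```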
Mechanical acceleration, including vibration, centrifugation and agitation, is also sometimes used. These methods subject the product to different forces that push the particles/droplets against one another, hence helping film drainage. Some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Segregation of different populations of particles has been highlighted when using centrifugation and vibration.
As a model system for atoms
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
Crystals
A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids.
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
In biology
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates—similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
In the environment
Colloidal particles can also serve as a transport vector for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, i.e., pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected for the long-range transport of plutonium at the Nevada Nuclear Test Site and have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in the dense clay membrane. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.
Intravenous therapy
Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders, called crystalloids, also increase the interstitial volume and intracellular volume. However, there is still controversy about the actual difference in efficacy arising from this distinction, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids.
References
Chemical mixtures
Colloidal chemistry
Condensed matter physics
Soft matter
Dosage forms |
5355 | https://en.wikipedia.org/wiki/Cooking | Cooking | Cooking, also known as cookery or professionally as the culinary arts, is the art, science and craft of using heat to make food more palatable, digestible, nutritious, or safe. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions.
Types of cooking also depend on the skill levels and training of the cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments.
Preparing food with heat or fire is an activity unique to humans. Archeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago.
The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling of water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served.
History
Phylogenetic analysis suggests that early hominids may have adopted cooking 1 million to 2 million years ago. Re-analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave in South Africa has provided evidence supporting control of fire by early humans by 1 million years ago. In his seminal work Catching Fire: How Cooking Made Us Human, Richard Wrangham suggested that the evolution of bipedalism and a large cranial capacity meant that early Homo habilis regularly cooked food. However, unequivocal evidence in the archaeological record for the controlled use of fire begins at 400,000 BCE, long after Homo erectus. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, is found across Europe and the Middle East. The oldest evidence (via heated fish teeth from a deep cave) of controlled use of fire to cook food by archaic humans was dated to ~780,000 years ago. Anthropologists think that widespread cooking fires began about 250,000 years ago, when hearths first appeared.
Recently, the earliest hearths have been reported to be at least 790,000 years old.
Communication between the Old World and the New World in the Columbian Exchange influenced the history of cooking. The movement of foods across the Atlantic from the New World, such as potatoes, tomatoes, maize, beans, bell pepper, chili pepper, vanilla, pumpkin, cassava, avocado, peanut, pecan, cashew, pineapple, blueberry, sunflower, chocolate, gourds, and squash, had a profound effect on Old World cooking. The movement of foods across the Atlantic from the Old World, such as cattle, sheep, pigs, wheat, oats, barley, rice, apples, pears, peas, chickpeas, green beans, mustard, and carrots, similarly changed New World cooking.
In the 17th and 18th centuries, food was a classic marker of identity in Europe. In the 19th-century "Age of Nationalism", cuisine became a defining symbol of national identity.
The Industrial Revolution brought mass-production, mass-marketing, and standardization of food. Factories processed, preserved, canned, and packaged a wide variety of foods, and processed cereals quickly became a defining feature of the American breakfast. In the 1920s, freezing methods, cafeterias, and fast food restaurants emerged.
Ingredients
Most ingredients in cooking are derived from living organisms. Vegetables, fruits, grains and nuts as well as herbs and spices come from plants, while meat, eggs, and dairy products come from animals. Mushrooms and the yeast used in baking are kinds of fungi. Cooks also use water and minerals such as salt. Cooks can also use wine or spirits.
Naturally occurring ingredients contain various amounts of molecules called proteins, carbohydrates and fats. They also contain water and minerals. Cooking involves a manipulation of the chemical properties of these molecules.
Carbohydrates
Carbohydrates include the common sugar, sucrose (table sugar), a disaccharide, and such simple sugars as glucose (made by enzymatic splitting of sucrose) and fructose (from fruit), and starches from sources such as cereal flour, rice, arrowroot and potato.
The interaction of heat and carbohydrate is complex. Long-chain sugars such as starch tend to break down into more digestible simpler sugars. If the sugars are heated so that all water of crystallisation is driven off, caramelization starts, with the sugar undergoing thermal decomposition with the formation of carbon, and other breakdown products producing caramel. Similarly, the heating of sugars and proteins causes the Maillard reaction, a basic flavor-enhancing technique.
An emulsion of starch with fat or water can, when gently heated, provide thickening to the dish being cooked. In European cooking, a mixture of butter and flour called a roux is used to thicken liquids to make stews or sauces. In Asian cooking, a similar effect is obtained from a mixture of rice or corn starch and water. These techniques rely on the properties of starches to create simpler mucilaginous saccharides during cooking, which causes the familiar thickening of sauces. This thickening will break down, however, under additional heat.
Fats
Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture.
Proteins
Edible animal material, including muscle, offal, milk, eggs and egg whites, contains substantial amounts of protein. Almost all vegetable matter (in particular legumes and seeds) also includes proteins, although generally in smaller amounts. Mushrooms have high protein content. Any of these may be sources of essential amino acids. When proteins are heated they become denatured (unfolded) and change texture. In many cases, this causes the structure of the material to become softer or more friable – meat becomes cooked and is more friable and less flexible. In some cases, proteins can form more rigid structures, such as the coagulation of albumen in egg whites. The formation of a relatively rigid but flexible matrix from egg white provides an important component in baking cakes, and also underpins many desserts based on meringue.
Water
Cooking often involves water, and water-based liquids. These can be added in order to immerse the substances being cooked (this is typically done with water, stock or wine). Alternatively, the foods themselves can release water. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients; this is a critical component of both stewing and sauce making.
Vitamins and minerals
Vitamins and minerals are required for normal metabolism; and what the body cannot manufacture itself must come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids are increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.
Methods
There are many methods of cooking, most of which have been known since antiquity. These include baking, roasting, frying, grilling, barbecuing, smoking, boiling, steaming and braising. A more recent innovation is microwaving. Various methods use differing levels of heat and moisture and vary in cooking time. The method chosen greatly affects the result because some foods are more appropriate to some methods than others. Some major hot cooking techniques include:
Roasting
Roasting – Barbecuing – Grilling/Broiling – Rotisserie – Searing
Baking
Baking – Baking Blind – Flashbaking
Boiling
Boiling – Blanching – Braising – Coddling – Double steaming – Infusion – Poaching – Pressure cooking – Simmering – Smothering – Steaming – Steeping – Stewing – Stone boiling – Vacuum flask cooking
Frying
Fry – Air frying — Deep frying – Gentle frying – Hot salt frying – Hot sand frying – Pan frying – Pressure frying – Sautéing – Shallow frying – Stir frying – Vacuum frying
Steaming
Steaming works by boiling water continuously, causing it to vaporise into steam; the steam then carries heat to the nearby food, thus cooking the food. Many consider it a healthy form of cooking because it retains nutrients within the vegetable or meat being cooked.
En papillote – The food is put into a pouch and then baked, allowing its own moisture to steam the food.
Smoking
Smoking is the process of flavoring, cooking, or preserving food by exposing it to smoke from burning or smoldering material, most often wood.
Health and safety
Indoor air pollution
As of 2021, over 2.6 billion people cook using open fires or inefficient stoves using kerosene, biomass, and coal as fuel. These cooking practices use fuels and technologies that produce high levels of household air pollution, causing 3.8 million premature deaths annually. Of these deaths, 27% are from pneumonia, 27% from ischaemic heart disease, 20% from chronic obstructive pulmonary disease, 18% from stroke, and 8% from lung cancer. Women and young children are disproportionately affected, since they spend the most time near the hearth.
Security while cooking
Hazards while cooking can include
Unseen slippery surfaces (such as from oil stains, water droplets, or items that have fallen on the floor)
Cuts; about a third of the US's estimated annual 400,000 knife injuries are kitchen-related.
Burns or fires
To prevent these injuries, protective measures such as appropriate cooking clothing, anti-slip shoes, and fire extinguishers are used.
Food safety
Cooking can prevent many foodborne illnesses that would otherwise occur if raw food is consumed. When heat is used in the preparation of food, it can kill or inactivate harmful organisms, such as bacteria and viruses, as well as various parasites such as tapeworms and Toxoplasma gondii. Food poisoning and other illness from uncooked or poorly prepared food may be caused by bacteria such as pathogenic strains of Escherichia coli, Salmonella typhimurium and Campylobacter, viruses such as noroviruses, and protozoa such as Entamoeba histolytica. Bacteria, viruses and parasites may be introduced through salad, meat that is uncooked or done rare, and unboiled water.
The sterilizing effect of cooking depends on temperature, cooking time, and technique used. Some food spoilage bacteria such as Clostridium botulinum or Bacillus cereus can form spores that survive boiling, which then germinate and regrow after the food has cooled. This makes it unsafe to reheat cooked food more than once.
Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by cooking for at least ten minutes at .
Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate in the "Danger zone" temperature range from , and food should therefore not be stored in this temperature range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination.
Effects on nutritional content of food
Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables.
Sulforaphane, a glucosinolate breakdown product, is present in vegetables such as broccoli, and is mostly destroyed when the vegetable is boiled. Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases.
The United States Department of Agriculture has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods across various cooking methods.
Carcinogens
In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk.
Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation.
Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
Scientific aspects
The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking.
Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author), and Robert Wolke (chemist, author). This is distinct from the application of scientific knowledge to cooking, that is, "molecular cooking" (for the technique) or "molecular cuisine" (for a culinary style), associated with chefs such as Raymond Blanc, Philippe and Christian Conticini, Ferran Adria, Heston Blumenthal, and Pierre Gagnaire.
Chemical processes central to cooking include hydrolysis (in particular beta elimination of pectins, during the thermal treatment of plant tissues), pyrolysis, and glycation reactions wrongly named Maillard reactions.
Cooking foods with heat depends on many factors: the specific heat of an object, thermal conductivity, and (perhaps most significantly) the difference in temperature between the two objects. Thermal diffusivity is the combination of specific heat, conductivity and density that determines how long it will take for the food to reach a certain temperature.
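As a rough sketch of that last point, thermal diffusivity is commonly written as α = k / (ρ·c_p), and the time for heat to penetrate a piece of food scales roughly with the square of its thickness divided by α. The material properties below are assumed, generic values for a water-rich food, not figures from the text.

```python
# Minimal sketch of thermal diffusivity, α = k / (ρ · c_p), the combination of
# conductivity, density and specific heat mentioned above, plus the rough
# diffusion-time scaling t ≈ L² / α. All material values are assumptions.

def thermal_diffusivity(conductivity_w_mk, density_kg_m3, specific_heat_j_kgk):
    """Thermal diffusivity in m^2/s."""
    return conductivity_w_mk / (density_kg_m3 * specific_heat_j_kgk)

def characteristic_heating_time(thickness_m, alpha_m2_s):
    """Order-of-magnitude time for heat to penetrate a slab of given thickness."""
    return thickness_m**2 / alpha_m2_s

if __name__ == "__main__":
    alpha = thermal_diffusivity(0.5, 1050.0, 3500.0)  # assumed food-like values
    t = characteristic_heating_time(0.03, alpha)      # heat reaching ~3 cm deep
    print(f"α ≈ {alpha:.2e} m^2/s, heat penetration time ≈ {t / 60:.0f} minutes")
```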
Home-cooking and commercial cooking
Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants or schools. Bakeries were one of the earliest forms of cooking outside the home, and in the past they often offered to cook pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many "ready-to-eat" and "ready-to-cook" foods being prepared and cooked in factories, and home cooks using a mixture of scratch-made and factory-made foods to assemble a meal. The nutritional value of commercially prepared foods has been found to be inferior to that of home-made foods. Home-cooked meals tend to be healthier, with fewer calories and less saturated fat, cholesterol and sodium on a per-calorie basis, while providing more fiber, calcium, and iron. The ingredients are also directly sourced, giving control over authenticity, taste, and nutritional value. The superior nutritional quality of home-cooking could therefore play a role in preventing chronic disease. Cohort studies following the elderly over 10 years show that adults who cook their own meals have significantly lower mortality, even when controlling for confounding variables.
"Home-cooking" may be associated with comfort food, and some commercially produced foods and restaurant meals are presented through advertising or packaging as having been "home-cooked", regardless of their actual origin. This trend began in the 1920s and is attributed to people in urban areas of the U.S. wanting homestyle food even though their schedules and smaller kitchens made cooking harder.
See also
Carryover cooking
Cookbook
Cooker
Cooking weights and measures
Culinary arts
Culinary profession
Cooking school
Dishwashing
Food and cooking hygiene
Food industry
Food preservation
Food writing
Foodpairing
Gourmet Museum and Library
High altitude cooking
International food terms
List of cooking appliances
List of cuisines
List of films about cooking
List of food preparation utensils
List of ovens
List of stoves
Scented water
Staple (cooking)
References
External links
Articles containing video clips
Home economics
Survival skills |
5360 | https://en.wikipedia.org/wiki/Card%20game | Card game | A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules with international tournaments being held, but most are folk games whose rules may vary by region, culture, location or from circle to circle.
Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast number of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby.
Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information"—as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their gameplay.
Some games that are placed in the card game genre involve a board. The distinction is that the gameplay of a card game chiefly depends on the use of the cards by players (the board is a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose.
Types
Trick-taking games
There are two main types of trick-taking game which have different objectives. Both are based on the play of multiple tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or "takes" the trick.
Plain-trick games. Many common Anglo-American games fall into this category. The usual objective is to take the most tricks, but variations include taking all tricks, taking as few tricks (or penalty cards) as possible, or taking an exact number of tricks. Bridge, Whist and Spades are popular examples. Hearts, Black Lady and Black Maria are examples of reverse games in which the aim is to avoid certain cards.
Point-trick games. These are all European or of European origin. Individual cards have specific point values and the objective is usually to amass the majority of points by taking tricks, especially those with higher value cards. The main group is the Ace-Ten family which includes many national games such as German Skat, French Belote, Dutch Klaberjass, Austrian Schnapsen, Spanish Tute, Swiss Jass, Portuguese Sueca, Italian Briscola and Czech Mariáš. Pinochle is an American example of French or Swiss origin. All Tarot card games are of the point-trick variety including German Cego, Austrian Tarock, French Tarot and Italian Minchiate.
Matching games
The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid.
Shedding games
In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories.
Catch and collect games
The object of an accumulating game is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features.
Fishing games
In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a "simpler and jollier version of Cassino", played in Germany. Tablanet (tablić) is a fishing-style game popular in the Balkans.
Comparing games
Comparing card games are those where hand values are compared to determine the winner, also known as "vying" or "showdown" games. Poker, blackjack, mus, and baccarat are examples of comparing card games. Nearly all of these games are designed as gambling games.
Patience and solitaire games
Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more "discard" or "foundation" piles.
Drinking card games
Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games.
Compendium games
Compendium games consist of a sequence of different contracts played in succession. A common pattern is for a number of reverse deals to be played, in which the aim is to avoid certain cards, followed by a final contract which is a domino-type game. Examples include: Barbu, Herzeln, Lorum and Rosbiratschka. In other games, such as Quodlibet and Rumpel, there is a range of widely varying contracts.
Collectible card games (CCGs)
Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games.
Living card games (LCGs)
Living card games (LCGs) are similar to collectible card games (CCGs), with their most distinguishing feature being a fixed distribution method, which breaks away from the traditional collectible card game format. While new cards for CCGs are usually sold in the form of starter decks or booster packs (the latter often being randomized), LCGs thrive on a model that requires players to acquire one core set in order to play the game, which players can further customize by acquiring extra sets or expansions featuring new content in the form of cards or scenarios. No randomization is involved in the process, so players who get the same sets or expansions get exactly the same content. The term was popularized by Fantasy Flight Games (FFG) and mainly applies to its products, although some other tabletop gaming companies use a very similar model.
Casino or gambling card games
These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house.
Poker games
Poker is a family of gambling games in which players bet into a pool, called the pot, whose value changes as the game progresses, wagering that the value of the hand they hold will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.
Other card games
Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer.
Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such.
Simulation card games
A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football.
Fictional card games
Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible.
Typical structure of card games
Number and association of players
Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice.
In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation.
One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order.
Another way of extending a two-player game to more players is as a cut-throat or individual game, in which all players play for themselves, and win or lose alone. Most such card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all.
For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others.
Direction of play
The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as:
Counterclockwise in most of Asia and in Latin America.
Clockwise in North America and Australia.
Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise).
Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules.
Determining who deals
Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation.
As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice.
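As an illustration only (not any game's official procedure), the cutting method described above can be sketched in a few lines of Python; the rank-only comparison and the tie handling here are simplifying assumptions made for the example.

```python
import random

def cut_for_dealer(players):
    """Return the player who cuts the highest-ranking card; tied players cut again.
    Only ranks matter here (suits are ignored), a common informal convention."""
    while True:
        pack = [rank for rank in range(13) for _ in range(4)]  # 4 copies of ranks 0..12
        random.shuffle(pack)
        cuts = {p: pack.pop() for p in players}                # one revealed card per player
        highest = max(cuts.values())
        tied = [p for p, rank in cuts.items() if rank == highest]
        if len(tied) == 1:
            return tied[0]
        players = tied                                         # only the tied players re-cut

print(cut_for_dealer(["North", "East", "South", "West"]))
```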
Hands, rounds and games
A hand, also called a deal, is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand.
The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score.
Shuffling
Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes.
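For illustration, the riffle shuffle described above can be modelled in a short Python sketch; this is a simplified approximation (loosely in the spirit of the Gilbert–Shannon–Reeds model), not a description of how any real player shuffles.

```python
import random

def riffle(pack):
    """One riffle: cut the pack roughly in half, then interleave the halves,
    taking the next card from each half with probability proportional to its size."""
    cut = len(pack) // 2 + random.randint(-3, 3)   # an imperfect, human-like cut
    left, right = pack[:cut], pack[cut:]
    shuffled = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

deck = list(range(52))
for _ in range(7):        # around seven riffles are commonly cited as sufficient
    deck = riffle(deck)
print(deck)
```

A single pass leaves much of the original order intact; as the text notes, it is the repetition of the riffle that actually randomizes the pack.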
For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. French belote rules go so far as to prescribe that the deck never be shuffled between hands.
Deal
The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut; that player then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly.
The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region).
Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card.
When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations.
Rules
A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the "house rules" under which they play the game. A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club.
When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the "Portland Club" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures.
There is nothing static or "official" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent "house rules" to supplement or even largely replace the "standard" rules.
If there is a sense in which a card game can have an official set of rules, it is when that card game has an "official" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the Deutscher Skatverband which publishes the Skatordnung. The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Schafkopf are laid down by the Schafkopfschule in Munich. Even in these cases, the rules must only be followed at games sanctioned by these governing bodies or where the tournament organisers specify them. Players in informal settings are free to implement agreed supplemental or substitute rules. For example, in Schafkopf there are numerous local variants sometimes known as "impure" Schafkopf and specified by assuming the official rules and describing the additions e.g. "with Geier and Bettel, tariff 5/10 cents".
Rule infractions
An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as "bleeding."
In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament, there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt.
If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc.
As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, "Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc." Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a "proper" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions.
In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit from it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being a possible benefit to the person responsible.
Playing cards
The oldest surviving reference to the card game in world history is from 9th-century China, when the Collection of Miscellanea at Duyang, written by Tang-dynasty writer Su E, described Princess Tongchang (daughter of Emperor Yizong of Tang) playing the "leaf game" with members of the Wei clan (the family of the princess's husband) in 868. The Song dynasty statesman and historian Ouyang Xiu noted that paper playing cards arose in connection with an earlier development in the book format from scrolls to pages.
Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called naib, and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits swords, polo sticks, cups and coins and the ranks king, governor, second governor, and ten to one.
The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. A full tarot deck contains 14 cards in each suit: low cards labeled 1–10 and four court cards (jack, cavalier/knight, queen, and king), plus the fool or excuse card and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into "esoteric" decks used primarily for the purpose; today most tarot decks sold in North America are the occult type, and are closely associated with fortune telling. In Europe, "playing tarot" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the English-pattern pack such as corner card indices and "stamped" card symbols for non-court cards. Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the "full" 78 cards, while Germanic, Spanish and Italian Tarot variants remove certain values (usually low suited cards) from the deck, creating a deck with as few as 32 cards.
The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of swords, clubs, cups and coins (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy). The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplifies the production of a deck of cards versus the traditional Italian deck, which used unique full-color art for each card in the deck. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art.
In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards.
The best-known deck internationally is the English pattern of the 52-card French deck, also called the International or Anglo-American pattern, used for such games as poker and contract bridge. It contains one card for each unique combination of thirteen ranks and the four French suits spades, hearts, diamonds, and clubs. The ranks (from highest to lowest in bridge and poker) are ace, king, queen, jack (or knave), and the numbers from ten down to two (or deuce). The trump cards and knight cards from the French playing tarot are not included.
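The composition just described (every combination of thirteen ranks and four suits, with no trumps or knights) can be written out directly as a Cartesian product; the short Python sketch below is purely illustrative, with ranks listed in the low-to-high order used in bridge and poker.

```python
SUITS = ["spades", "hearts", "diamonds", "clubs"]
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10",
         "jack", "queen", "king", "ace"]          # low to high in bridge and poker

deck = [(rank, suit) for suit in SUITS for rank in RANKS]
assert len(deck) == 52                            # one card per rank/suit combination
```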
Originally the term knave was more common than "jack"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel Great Expectations: "He calls the knaves, Jacks, this boy!") However, because the card abbreviation for knave ("Kn") was so close to that of the king, it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values. (The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864 when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the "knave", and the term with this definition is still recognized in the United Kingdom.
In the 17th century, a French, five-trick, gambling game called Bête became popular and spread to Germany, where it was called La Bete, and to England, where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding.
Chinese handmade mother-of-pearl gaming counters were used in scoring and bidding of card games in the West during the approximate period of 1700–1840. The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West.
Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750.
See also
Game of chance
Game of skill
R. F. Foster (games)
Henry Jones (writer) who wrote under the pseudonym "Cavendish"
John Scarne
Dice game
List of card games by number of cards
References
External links
International Playing Card Society
Rules for historic card games
Collection of rules to many card games
Tabletop games |
5363 | https://en.wikipedia.org/wiki/Video%20game | Video game | A video game or computer game is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations), and some video games also allow microphone and webcam inputs for in-game chatting and livestreaming.
Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer (PC) games; the latter also encompasses LAN games, online games, and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.
The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or "indie games") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service.
Today, video game development requires numerous interdisciplinary skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. The global video game market had estimated annual revenues across hardware, software, and services that were three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.
Origins
Early video games use interactive electronic devices with various display formats. The earliest example is from 1947: a "cathode-ray tube amusement device" was filed for a patent on 25 January 1947, by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948, as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961. Each game has different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other.
These preliminary inventions paved the way for the origins of video games today. Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype "Brown Box". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized it as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled "Father of Video Games" for their contributions.
Terminology
The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer, audio speaker or similar device. This also distinguished from many handheld electronic games like Merlin which commonly used LED lights for indicators but did not use these in combination for imaging purposes.
"Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases, it is used interchangeably with "video game". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers. Other terms used include digital game, for example by the Australian Bureau of Statistics. However, the term "computer game" can also be used to more specifically refer to games played primarily on personal computers or other type of flexible hardware systems (also known as a PC game), as a way distinguish them from console games, arcade games or mobile games. Other terms such as "television game" or "telegame" had been used in the 1970s and early 1980s, particularly for the home gaming consoles that rely on connection to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", or TV geemu or terebi geemu. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output. and the term "TV game" is still commonly used into the 21st century.
The first appearance of the term "video game" emerged around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage, including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashbox's coin-operated section until 1972 and then founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "what the hell... video game!"
For many years, the traveling Videotopia exhibit served as the closest thing to such a vital resource. In addition to collecting home video game consoles, the Electronics Conservancy organization set out to locate and restore 400 antique arcade cabinets after realizing that the majority of these games had been destroyed, fearing the loss of their historical significance. Video games have increasingly come to be seen as a way of presenting history and of understanding how its methods and terms compare. Researchers have looked at how historical representations affect how the public perceives the past, and digital humanists encourage historians to use video games as primary materials. The meaning of the term "video game" has itself shifted over time: whether played through a monitor, a television, or a handheld device, there are now many ways in which video games are displayed for users to enjoy. People have drawn comparisons between flow-state-engaged video gamers and pupils in conventional school settings. In traditional, teacher-led classrooms, students have little say in what they learn, are passive consumers of the information selected by teachers, are required to follow the pace and skill level of the group (group teaching), and receive brief, imprecise, normative feedback on their work. As video games develop better graphics and new genres, they also generate new terminology as the previously unknown becomes familiar. New consoles are released regularly to compete with rival brands offering similar features, steering consumers toward one purchase or another, and companies increasingly rely on games that only their own console can play, whereas in the medium's earliest days there was little variety to choose from. In the late 1980s, a console war pitted Nintendo against Sega, whose Master System failed to compete with the Nintendo Entertainment System, which became one of the most widely sold consoles in the world. As technology advanced and computers entered people's homes for more than office and everyday use, games were implemented on them as well and have grown steadily since, including computer-controlled opponents to play against. Early games like tic-tac-toe, solitaire, and Tennis for Two brought gaming to systems that were not designed specifically for it.
Definition
While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what are the essential factors of a video game that separate the medium from other forms of entertainment.
Interactive films, introduced in the 1980s with games like Dragon's Lair, featured full motion video played off a form of media but only limited user interaction. This required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game, which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display.
Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer brought the idea of games that do not have any such winning condition and raised the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means.
The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple, which dealt with video games offered on Apple's iOS App Store. Among concerns raised were games like Fortnite Creative and Roblox which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition for a video game, established for her ruling that "At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".
Video game terminology
The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete their avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", then the player will reach the "game over" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These also may be in the form of a passage that can be written down and reentered at the title screen.
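As a purely illustrative sketch (not the code of any particular game), the health, lives and game-over structure described above can be reduced to a small loop; every name and number below is an invented placeholder.

```python
import random

lives, score = 3, 0                      # typical starting values, chosen arbitrarily

while lives > 0:
    health = 100                         # each new life begins with full health
    while health > 0:
        damage = random.randint(5, 40)   # stand-in for whatever happens during a level
        health -= damage
        score += 10                      # points awarded for surviving each encounter
    lives -= 1                           # health reached zero, so one life is lost

print(f"GAME OVER  final score: {score}")
```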
Product flaws include software bugs which can manifest as glitches which may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game, can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games; either can make the game easier, give the player additional power-ups, or change the appearance of the game.
Components
To distinguish them from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and display the results on a video output display.
Platform
Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market. However, games may be developed for alternative platforms than intended, which are described as ports or conversions. These may also be remasters, in which most of the original game's source code is reused while art assets, models, and game levels are updated for modern systems, or remakes, in which, in addition to asset improvements, a significant reworking of the original game is performed, possibly from scratch.
The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators.
PC games
PC games involve a player interacting with a personal computer (PC) connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, the openness offers developers some advantages, such as reduced software cost, increased flexibility, increased innovation, emulation, the creation of modifications or mods, open hosting for online gaming (in which a person plays a video game with people who are in a different household) and others. A gaming computer is a PC or laptop intended specifically for gaming, typically using high-performance, high-cost components. In addition to personal computer gaming, there also exist games that work on mainframe computers and other similarly shared systems, with users logging in remotely to use the computer.
Home console
A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually consoles only run games developed for them, or games from other platforms made by the same company, but never games developed by their direct competitors, even if the same game is available on different platforms. A console often comes with a specific game controller. Major console platforms include Xbox, PlayStation and Nintendo.
Handheld console
A handheld game console is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers and buttons, joystick or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms, and share almost the same characteristics. Handheld hardware usually is less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to be used to play many different games. The handheld console has waned in the 2010s as mobile device gaming has become a more dominant factor.
Arcade video game
An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game. While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them.
Browser game
A browser game takes advantage of the standardization of web browser technologies across multiple devices, providing a cross-platform environment. These games may be identified based on the website on which they appear, such as with Miniclip games. Others are named based on the programming platform used to develop them, such as Java and Flash games.
Mobile game
With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may use unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information and camera devices, to support augmented reality gameplay.
Cloud gaming
Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, or mobile phone, or even a dedicated hardware device connected to a display, along with good Internet connectivity to reach the hardware systems run by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers.
Virtual reality
Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse the player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit.
Emulation
An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings.
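At its core, an emulator repeatedly reads instructions from the original game's program and reproduces their effect in software. The following is a minimal sketch of that fetch-decode-execute idea for a hypothetical toy machine; the opcodes and two-byte instruction format are invented for illustration and do not correspond to any real console's architecture.

```python
# A minimal, illustrative fetch-decode-execute loop for a made-up toy machine.
# The opcodes and 2-byte instruction format are assumptions for this sketch and
# do not correspond to any real console.

def run(rom: bytes) -> dict:
    """Interpret a read-only memory image, the way an emulator steps a CPU."""
    registers = {"A": 0, "PC": 0}            # accumulator and program counter
    while registers["PC"] + 1 < len(rom):
        opcode = rom[registers["PC"]]        # fetch
        operand = rom[registers["PC"] + 1]
        registers["PC"] += 2
        if opcode == 0x01:                   # LOAD immediate value into A
            registers["A"] = operand
        elif opcode == 0x02:                 # ADD immediate value to A
            registers["A"] = (registers["A"] + operand) % 256
        elif opcode == 0xFF:                 # HALT
            break
    return registers

# A tiny "cartridge": LOAD 5, ADD 3, HALT -> ends with A == 8
print(run(bytes([0x01, 0x05, 0x02, 0x03, 0xFF, 0x00])))
```

A real emulator does the same thing for every instruction of the original processor, alongside simulated graphics, sound, and input hardware.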
Backward compatibility
Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically directly through hardware and built-in software within the platform. For example, the PlayStation 2 is capable of playing original PlayStation games simply by inserting the original game media into the newer console, while Nintendo's Wii could play GameCube titles in the same manner.
Game media
Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy disks, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading and later updates.
Games can be extended with new content and software patches through either expansion packs, which are typically available as physical media, or downloadable content, nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these are often unofficial, developed by players through reverse engineering of the game, but other games provide official support for modding.
Input device
Video games can use several types of input devices to translate human actions into the game. Most common are game controllers such as gamepads and joysticks for most consoles, and as accessories for personal computer systems alongside keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers that are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors, which give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns, and dance pads. Digital cameras and motion detection can capture the player's movements as input into the game, which can in some cases effectively eliminate the controller, and on other systems, such as virtual reality, are used to enhance immersion in the game.
Display and output
By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements, text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays.
The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game.
Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game.
Classifications
Video games are frequently classified by a number of factors related to how one plays them.
Genre
A video game, like most other forms of media, may be categorized into genres. However, unlike film or television, which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror.
Genre names are normally self-describing in terms of the type of gameplay, such as action game, role-playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter game genre, first-person shooter and third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as action-adventure game.
Mode
A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time.
A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
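Conway's Game of Life illustrates the zero-player idea well: once the starting grid is chosen, the simulation unfolds entirely on its own. The sketch below is a minimal, illustrative implementation of one update step, written for this article rather than taken from any particular game.

```python
# A minimal, illustrative zero-player simulation: one update step of Conway's
# Game of Life. The player only chooses the starting cells; everything after
# that follows from the rules alone.
from collections import Counter

def step(live: set) -> set:
    """Apply the standard birth/survival rules to a set of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a column and a row with no further input.
state = {(1, 0), (1, 1), (1, 2)}
for _ in range(4):
    state = step(state)
    print(sorted(state))
```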
Types
Most video games are intended for entertainment purposes. Different game types include:
Core games
Core or hard-core games refer to the typical perception of video games, developed for entertainment purposes. These games typically require a fair amount of time to learn and master, in contrast to casual games, and thus are most appealing to gamers rather than a broader audience. Most of the AAA video game industry is based around the delivery of core games.
Casual games
In contrast to core games, casual games are designed for ease of accessibility, with simple-to-understand gameplay and quick-to-grasp rule sets, and are aimed at a mass-market audience. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often come from genres with low-intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. A more recent development, starting in the late 2010s, is hyper-casual games, which use even more simplistic rules for short but infinitely replayable games, such as Flappy Bird.
Educational games
Educational software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games, which tend to focus on entertainment value and rote learning but are unlikely to engage critical thinking, and educational video games, which are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as those that feature open worlds or virtual sandboxes, like Minecraft, or that build critical thinking skills through puzzle gameplay, like SpaceChem.
Serious games
Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. Educational games are a form of serious games, but other types include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), simulation games such as flight simulators for piloting aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra).
Art games
Although video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore through the game world and scenarios. Most art games are indie games in nature, designed based on personal experiences or stories through a single developer or small team. Examples of art games include Passage, Flower, and That Dragon, Cancer.
Content rating
Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to teenager-or-older, to mature, to the infrequent adults-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it is represented, and on sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of.
The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include:
Entertainment Software Rating Board (ESRB), which oversees games released in the United States. ESRB ratings are voluntary and are assigned along a scale of E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video game ratings in the U.S. subsequently led to the landmark Supreme Court case Brown v. Entertainment Merchants Association in 2011, which ruled video games were a protected form of art, a key victory for the video game industry.
Pan European Game Information (PEGI), covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system rates content based on minimum recommended ages, which include 3+, 7+, 12+, 16+, and 18+.
Australian Classification Board (ACB), which oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to classify a game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games.
Computer Entertainment Rating Organization (CERO), which rates games for Japan. Its ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over).
Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content rating systems between different regions, so that a publisher would only need to complete the content ratings review with one provider and use the IARC process to affirm the content rating for all other regions.
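The alignment idea can be pictured as a lookup from one age-based assessment to each regional system's labels. The sketch below is only a simplified illustration using the labels listed above; the age thresholds and the mapping itself are assumptions for this example, not the official IARC correspondence tables.

```python
# A simplified illustration of the IARC idea: one age-based assessment is
# mapped onto each regional board's labels (labels as listed above). The age
# thresholds and the mapping are assumptions for this sketch, not the official
# IARC correspondence.

REGIONAL_SCALES = {
    "ESRB": [(0, "E"), (10, "E10+"), (13, "T"), (17, "M"), (18, "AO")],
    "PEGI": [(3, "3+"), (7, "7+"), (12, "12+"), (16, "16+"), (18, "18+")],
    "CERO": [(0, "A"), (12, "B"), (15, "C"), (17, "D"), (18, "Z")],
    "ACB":  [(0, "G"), (8, "PG"), (15, "MA15+"), (18, "R18+")],
}

def assign_ratings(minimum_age: int) -> dict:
    """For each board, pick the strictest label whose age threshold is met."""
    result = {}
    for board, scale in REGIONAL_SCALES.items():
        label = scale[0][1]
        for age, lbl in scale:
            if minimum_age >= age:
                label = lbl
        result[board] = label
    return result

print(assign_ratings(16))  # {'ESRB': 'T', 'PEGI': '16+', 'CERO': 'C', 'ACB': 'MA15+'}
```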
Certain nations have even more restrictive rules related to political or ideological content. In Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus block the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional alternatives. This ruling was relaxed in 2018 to allow such imagery for "social adequacy" purposes, as already applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.
Development
Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians, as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.
In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).
Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what is possible to create in video games, coupled with a convergence of common hardware between console, computer, and arcade platforms that simplifies the development process. Today, game developers have a number of commercial and open source tools available to make games, many of which work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates.
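A common responsibility of a game engine is the main loop that advances game logic at a fixed rate while rendering frames as quickly as the hardware allows. The sketch below illustrates that pattern in a generic way; the function names, timestep, and toy "physics" are illustrative assumptions rather than any particular engine's API.

```python
# A minimal, illustrative game-engine main loop: game logic advances in fixed
# timesteps while frames render as often as possible. Names and constants are
# assumptions for this sketch, not a real engine's API.
import time

TIMESTEP = 1 / 60                       # update logic 60 times per second

def update(state: dict, dt: float) -> None:
    """Advance the game simulation (here, trivial physics) by one fixed step."""
    state["x"] += state["vx"] * dt

def render(state: dict) -> None:
    """Stand-in for the engine's rendering pass."""
    print(f"drawing object at x={state['x']:.2f}")

def game_loop(duration: float = 0.25) -> None:
    state = {"x": 0.0, "vx": 3.0}
    accumulator, previous = 0.0, time.perf_counter()
    end = previous + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        while accumulator >= TIMESTEP:  # catch the simulation up in fixed steps
            update(state, TIMESTEP)
            accumulator -= TIMESTEP
        render(state)                   # draw whatever the latest state is
        time.sleep(1 / 120)             # pretend a frame takes this long to draw

game_loop()
```

Decoupling the fixed logic rate from the rendering rate is one common design choice; it keeps physics deterministic even when frame rates vary.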
With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.
While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and they are often experiments in gameplay and art style. Indie game development is aided by the greater availability of digital distribution, including the newer mobile gaming market, and by readily available, low-cost development tools for these platforms.
Game theory and studies
Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.
Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.
While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.
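The traffic jam example can be reproduced with only a local rule per car. The sketch below is an illustrative toy simulation (the positions, braking rule, and constants are invented for this example): each car only adjusts its speed to the gap in front of it, yet the cars collectively pile up behind the obstacle, a behaviour no rule explicitly describes.

```python
# An illustrative toy version of the traffic-jam example: each car follows one
# local rule (slow down as the gap ahead shrinks). No rule describes a jam,
# yet a jam forms behind the obstacle. All values here are invented.

def simulate(steps: int = 30) -> list:
    positions = [0.0, -5.0, -10.0, -15.0]   # car 0 leads, the rest follow
    speeds = [1.0] * len(positions)
    obstacle = 12.0                          # blockage on the track
    history = []
    for _ in range(steps):
        for i in range(len(positions)):
            ahead = obstacle if i == 0 else positions[i - 1]
            gap = ahead - positions[i]
            # the only behaviour rule: speed is proportional to the free gap
            speeds[i] = min(1.0, max(0.0, (gap - 2.0) * 0.25))
            positions[i] += speeds[i]
        history.append(list(positions))
    return history

# Cars end up queued at roughly two-unit intervals behind the obstacle.
for snapshot in simulate()[9::10]:
    print([round(p, 1) for p in snapshot])
```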
Intellectual property for video games
Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well.
Though local copyright regulations vary in the degree of protection, video games qualify as copyrighted audiovisual works and enjoy cross-country protection under the Berne Convention. This protection typically applies to the underlying code as well as to the artistic aspects of the game, such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States, among other countries, video games are considered to fall under the idea–expression distinction, in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game.
Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times, and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooded arcade and dedicated home console market around 1978. Cloning is also a major issue in countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court have enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets.
Industry
History
The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, with revenues falling sharply between 1983 and 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on its platform, methods that continue to be used by console manufacturers today.
The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies and, by the 2000s, centralizing around low-risk, triple-A games and studios with large development budgets. The advent of the Internet brought digital distribution as a viable means to distribute games and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry.
Industry roles
Video games have a large network effect that draws on many different sectors tied into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include:
Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games, are involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performance. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. A range of other publisher-developer relationships exists between these points.
Distributors: Publishers often are able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space.
Retailers: Physical storefronts, which include large online retailers, department and electronics stores, and specialty video game stores, sell games, consoles, and other accessories to consumers. This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and e-commerce, retailers have been performing worse than in the past.
Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games. For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies overlap with component supplies for consoles. A range of third-party manufacturers also exists to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices.
Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release.
Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers who use streaming media to play through their games have a significant impact on game sales, and have turned to influencers alongside traditional journalism as a means to build up attention for their games before release.
Esports: Esports is a major function of several multiplayer games with numerous professional leagues established since the 2000s, with large viewership numbers, particularly out of southeast Asia since the 2010s.
Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry such as E3.
Gamers: Proactive hobbyists who are players and consumers of video games. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or in user reviews and engage with them to improve their products, in addition to feedback from other parts of the industry. Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors.
Major regional markets
The industry itself grew out of both the United States and Japan in the 1970s and 1980s before drawing larger worldwide contributions. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and the indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field.
Game sales
According to the market research firm Newzoo, mobile games accounted for the bulk of global video game industry revenue in 2020, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%.
Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase far more handheld games than console games, and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though it has declined in the West, the arcade game remains an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPGs and real-time strategy games. Computer games are also popular in China.
Effects on society
Culture
Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports has become more widely accepted. In the 2010s, video games and discussions of video game trends and topics could be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.
Since the mid-2000s there has been debate over whether video games qualify as art, primarily because the form's interactivity interferes with the artistic intent of the work and because games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay, "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016.
Video games often inspire sequels and other video games within the same franchise, but they have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, showed signs of the film industry having found an approach to adapt video games for the large screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.
More recently, since the 2000s, there has also been a growing appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together.
Further, video games can serve as a virtual environment under the full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian.
Separately, video games are also frequently used as part of the promotion and marketing for other media, such as films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or by studios directly connected to the licensed property's owner, there has been a significant improvement in their quality, with Batman: Arkham Asylum as an early trendsetting example.
Beneficial uses
Besides their entertainment value, appropriately designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking.
Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video gaming training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video games types.
Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The report of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. The report found that those that played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds.
A study of gamers' attitudes towards gaming, reported on in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that gaming "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures".
Controversies
Video games have caused controversy since the 1970s. Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can reinforce the desire to continue playing and potentially lead to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep its products in check to avoid excessive violence, particularly in games aimed at younger children. The potential for addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games.
Numerous other controversies around video games and the industry have arisen over the years. Among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB rating system; the numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007; the outrage over the "No Russian" level in Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport; and the Gamergate harassment campaign in 2014, which highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and the mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", periods of required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery.
Collecting and preservation
Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the industry's first decades. Games in retail packaging in good shape have become collector's items for the early days of the industry, with some rare publications selling for large sums. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry.
There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including an exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong.
The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
See also
Lists of video games
List of accessories to video games by system
Outline of video games
Notes
References
Sources
Further reading
The Ultimate History of Video Games: From Pong to Pokemon--The Story Behind the Craze That Touched Our Lives and Changed the World by Steven L. Kent, Crown, 2001
The Ultimate History of Video Games, Volume 2: Nintendo, Sony, Microsoft, and the Billion-Dollar Battle to Shape Modern Gaming by Steven L. Kent, Crown, 2021
External links
Video games bibliography by the French video game research association Ludoscience
The Virtual Museum of Computing (VMoC) (archived 10 October 2014)
Theory of categories

In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities, according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories is, for example, "Are universals prior to particulars?"
Early development
The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:
Primary categories: Substance, Relation, Quantity and Quality
Secondary categories: Place, Time, Situation, Condition, Action, Passion
Furthermore, following Porphyry's likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses; for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, are closely commutative with Mind and Matter; this is expressed most clearly in the dualism of René Descartes.
Vaisheshika
Stoic
Aristotle
One of Aristotle’s early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ..." Aristotle stated in his work on the Categories that there were ten kinds of predicate where ...
"... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon".
He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic, Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism and these were terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories.
Category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories:
Substance, essence (ousia) – examples of primary substance: this man, this horse; secondary substance (species, genera): man, horse
Quantity (poson, how much), discrete or continuous – examples: two cubits long, number, space, (length of) time.
Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight.
Relation (pros ti, toward something) – examples: double, half, large, master, knowledge.
Place (pou, where) – examples: in a marketplace, in the Lyceum
Time (pote, when) – examples: yesterday, last year
Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing
State, condition (echein, to have or be) – examples: shod, armed
Action (poiein, to make or do) – examples: to lance, to heat, to cool (something)
Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled
Plotinus
Plotinus in writing his Enneads around AD 250 recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms:
Unity/Plurality
Motion/Stability
Identity/Difference
Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process":
First, there existed the "One", and his view that "the origin of things is a contemplation"
The Second "is certainly an activity ... a secondary phase ... life streaming from life ... energy running through the universe"
The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection ... and self knowledge"
Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity".
Modern development
Kant and Hegel accused the Aristotelian table of categories of being 'rhapsodic', derived arbitrarily and in bulk from experience, without any systematic necessity.
The early modern dualism, which has been described above, of Mind and Matter or Subject and Relation, as reflected in the writings of Descartes underwent a substantial revision in the late 18th century. The first objections to this stance were formulated in the eighteenth century by Immanuel Kant who realised that we can say nothing about Substance except through the relation of the subject to other things.
For example: In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories, and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness which both emphasises their general nature, and avoids the confusion of having the same name for both the category itself and for a concept within that category.
In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community" was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.
Kant
In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.
Table of Judgements
Mathematical
Quantity
Universal
Particular
Singular
Quality
Affirmative
Negative
Infinite
Dynamical
Relation
Categorical
Hypothetical
Disjunctive
Modality
Problematic
Assertoric
Apodictic
Table of Categories
Mathematical
Quantity
Unity
Plurality
Totality
Quality
Reality
Negation
Limitation
Dynamical
Relation
Inherence and Subsistence (substance and accident)
Causality and Dependence (cause and effect)
Community (reciprocity)
Modality
Possibility
Existence
Necessity
Criticism of Kant's system came, firstly, from Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, from W. T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of Notion.
Hegel
G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed the first principle of the world, which he called the "absolute", is "a system of categories ... the categories must be the reason of which the world is a consequent".
Using his own logical method of sublation, later called the Hegelian dialectic, reasoning from the abstract through the negative to the concrete, he arrived at a hierarchy of some 270 categories, as explained by W. T. Stace. The three very highest categories were "logic", "nature" and "spirit". The three highest categories of "logic", however, he called "being", "essence", and "notion" which he explained as follows:
Being was differentiated from Nothing by containing with it the concept of the "other", an initial internal division that can be compared with Kant's category of disjunction. Stace called the category of Being the sphere of common sense containing concepts such as consciousness, sensation, quantity, quality and measure.
Essence. The "other" separates itself from the "one" by a kind of motion, reflected in Hegel's first synthesis of "becoming". For Stace this category represented the sphere of science containing within it firstly, the thing, its form and properties; secondly, cause, effect and reciprocity, and thirdly, the principles of classification, identity and difference.
Notion. Having passed over into the "Other" there is an almost neoplatonic return into a higher unity that in embracing the "one" and the "other" enables them to be considered together through their inherent qualities. This according to Stace is the sphere of philosophy proper where we find not only the three types of logical proposition: disjunctive, hypothetical, and categorical but also the three transcendental concepts of beauty, goodness and truth.
Schopenhauer's category that corresponded with "notion" was that of "idea", which in his Four-Fold Root of Sufficient Reason he complemented with the category of the "will". The title of his major work was The World as Will and Idea. The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in his Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.
Twentieth-century development
In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit.
Apart from these, the categorial schemes of Alfred North Whitehead in his process philosophy and of Nicolai Hartmann in his critical realism remain among the most detailed and advanced systems in categorial research in metaphysics.
Peirce
Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas.
"It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought".
Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions":
Firstness (Quality): "The first is predominant in feeling ... we must think of a quality without parts, e.g. the colour of magenta ... When I say it is a quality I do not mean that it "inheres" in a subject ... The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants".
Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling ... an action and reaction between our soul and the stimulus ... The idea of second is predominant in the ideas of causation and of statical force ... the real is active; we acknowledge it by calling it the actual".
Thirdness (Meaning): "Thirdness is essentially of a general nature ... ideas in which thirdness predominate [include] the idea of a sign or representation ... Every genuine triadic relation involves meaning ... the idea of meaning is irreducible to those of quality and reaction ... synthetical consciousness is the consciousness of a third or medium".
Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge".
Others
Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology.
For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category.
Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (2003), and Jonathan Lowe (2006).
See also
Categories (Aristotle)
Categories (Peirce)
Categories (Stoic)
Category (Kant)
Metaphysics
Modal logic
Ontology
Schema (Kant)
Similarity (philosophy)
References
Selected bibliography
Aristotle, 1953. Metaphysics. Ross, W. D., trans. Oxford University Press.
--------, 2004. Categories, Edghill, E. M., trans. Uni. of Adelaide library.
John G. Bennett, 1956–1965. The Dramatic Universe. London, Hodder & Stoughton.
Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press.
Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni.
Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press.
Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press.
Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, Publishers, New York).
Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press.
Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge.
Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays on Metaphysics and Logic. New York: Oxford University Press.
Hoffman, J., and Rosenkrantz, G. S., 1994. Substance among other Categories. Cambridge Uni. Press.
Edmund Husserl, 1962. Ideas: General Introduction to Pure Phenomenology. Boyce Gibson, W. R., trans. Collier.
------, 2000. Logical Investigations, 2nd ed. Findlay, J. N., trans. Routledge.
Johansson, Ingvar, 1989. Ontological Investigations. Routledge, 2nd ed. Ontos Verlag 2004.
Kahn, Charles H., 2009. Essays on Being, Oxford University Press.
Immanuel Kant, 1998. Critique of Pure Reason. Guyer, Paul, and Wood, A. W., trans. Cambridge Uni. Press.
Charles Sanders Peirce, 1992, 1998. The Essential Peirce, vols. 1,2. Houser, Nathan et al., eds. Indiana Uni. Press.
Gilbert Ryle, 1949. The Concept of Mind. Uni. of Chicago Press.
Wilfrid Sellars, 1974, "Toward a Theory of the Categories" in Essays in Philosophy and Its History. Reidel.
Barry Smith, 2003. "Ontology" in Blackwell Guide to the Philosophy of Computing and Information. Blackwell.
External links
Aristotle's Categories at MIT.
"Ontological Categories and How to Use Them" – Amie Thomasson.
"Recent Advances in Metaphysics" – E. J. Lowe.
Theory and History of Ontology – Raul Corazzon.
Concepts in metaphysics |
5371 | https://en.wikipedia.org/wiki/Concrete | Concrete | Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, sometimes together with other hydraulic (water-resistant) cements such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
Etymology
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
History
Ancient times
Mayan concrete at the ruins of Uxmal (850-925 A.D.) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
Classical era
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater; they had discovered the pozzolanic reaction.
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. ). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853.
The first reinforced concrete bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allows cement kilns to efficiently and completely burn even difficult-to-use fuels.
Water
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure water used to make concrete can cause problems when setting or in causing premature failure of the structure.
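As an illustration of this relationship, the following Python sketch evaluates Abrams' law in its usual form S = A / B^(w/c), in which strength falls as the water-to-cement ratio rises. The constants A and B below are assumed placeholder values chosen for illustration only; in practice they must be calibrated to the actual cement and aggregates, so the printed strengths are indicative rather than real mix-design figures.

# Illustrative evaluation of Abrams' law: strength falls as the water/cement ratio rises.
# A_MPA and B are assumed calibration constants (placeholders), not values from this article.
A_MPA = 96.0   # hypothetical ceiling-strength parameter, MPa
B = 7.0        # hypothetical base; a larger B penalises extra water more strongly

def abrams_strength(w_c: float, a: float = A_MPA, b: float = B) -> float:
    """Estimated compressive strength (MPa) for a given water/cement ratio by mass."""
    return a / (b ** w_c)

if __name__ == "__main__":
    for w_c in (0.35, 0.45, 0.55, 0.65):
        print(f"w/c = {w_c:.2f}  ->  ~{abrams_strength(w_c):.1f} MPa")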
Portland cement consists of five major compounds of calcium silicates and aluminates ranging from 5 to 50% in weight, which all undergo hydration to contribute to final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.
Hydration of tricalcium silicate
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO·SiO2·H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO·2 SiO2·4 H2O (gel) + 3 Ca(OH)2 + heat
(approximately, as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary)
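A mass balance on the balanced equation above, using standard molar masses, indicates roughly how much water the alite reaction binds and how much calcium hydroxide it releases per kilogram of C3S. The Python sketch below takes the approximate 3 CaO·2 SiO2·4 H2O composition quoted above at face value, so the results illustrate the stoichiometry rather than a real cement, whose C-S-H composition varies.

# Mass balance for: 2 Ca3SiO5 + 7 H2O -> 3 CaO·2 SiO2·4 H2O (gel) + 3 Ca(OH)2
# Standard molar masses (g/mol); the C-S-H stoichiometry is the approximate one quoted above.
M = {"Ca": 40.08, "Si": 28.09, "O": 16.00, "H": 1.008}

M_C3S = 3 * M["Ca"] + M["Si"] + 5 * M["O"]                                # Ca3SiO5
M_H2O = 2 * M["H"] + M["O"]
M_CH = M["Ca"] + 2 * (M["O"] + M["H"])                                    # Ca(OH)2
M_CSH = 3 * (M["Ca"] + M["O"]) + 2 * (M["Si"] + 2 * M["O"]) + 4 * M_H2O   # 3CaO·2SiO2·4H2O

reaction_units_per_kg = 1000.0 / (2 * M_C3S)      # mol of the "2 C3S" reaction unit in 1 kg of C3S
water_bound = reaction_units_per_kg * 7 * M_H2O   # g of chemically bound water
csh_formed = reaction_units_per_kg * 1 * M_CSH    # g of C-S-H gel
ch_released = reaction_units_per_kg * 3 * M_CH    # g of portlandite, Ca(OH)2

print(f"per kg of C3S: ~{water_bound:.0f} g water bound, "
      f"~{csh_formed:.0f} g C-S-H gel, ~{ch_released:.0f} g Ca(OH)2")

Under this assumed stoichiometry, about 0.28 kg of water is chemically bound per kilogram of C3S; the remainder of the mixing water mainly provides workability.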
Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate with a very even size distribution has the biggest gaps whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
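One classical way of expressing such a gap-filling size distribution is the Fuller–Thompson grading curve, in which the fraction passing a sieve of size d is proportional to (d/D)^n for a maximum aggregate size D. The curve, the exponent n and the sieve sizes used in the Python sketch below are illustrative assumptions rather than requirements drawn from this article; real gradations are specified by the applicable aggregate standards.

# Illustrative Fuller-Thompson grading: percent passing = 100 * (d / d_max) ** n
# d_max, n and the sieve sizes are assumed example values, not requirements from this article.
D_MAX_MM = 20.0   # assumed maximum aggregate size, mm
N = 0.45          # commonly quoted exponent for dense particle packing

def percent_passing(d_mm: float, d_max: float = D_MAX_MM, n: float = N) -> float:
    """Idealised cumulative percentage of aggregate finer than sieve size d_mm."""
    return 100.0 * (d_mm / d_max) ** n

for sieve in (0.15, 0.6, 2.36, 4.75, 10.0, 20.0):   # sieve openings in mm
    print(f"{sieve:>5.2f} mm sieve: {percent_passing(sieve):5.1f} % passing")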
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See below.) The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5%. If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse. A short worked example of this strength trade-off, together with the superplasticizer water reduction noted below, follows this list.
Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. The reaction takes place when exposed to water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength. They increase the workability of the concrete and lower the required water content by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid.
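Two of the quantitative rules of thumb in the list above lend themselves to a quick calculation: the roughly 5% loss of compressive strength per 1% of entrained air, and the 15–30% reduction in mixing water achievable with superplasticizers at constant workability. The Python sketch below applies both to a hypothetical base mix; the base strength and water content are assumed example values, not data from this article.

# Rough effect of two admixtures on a hypothetical base mix, using the rules of thumb
# quoted in the list above (~5% strength loss per 1% entrained air; 15-30% water reduction
# from superplasticizers). The base figures below are assumed example values.
BASE_STRENGTH_MPA = 35.0   # assumed 28-day compressive strength of the base mix
BASE_WATER_KG_M3 = 185.0   # assumed free-water content per cubic metre

def strength_with_air(base_mpa: float, air_percent: float) -> float:
    """Apply an approximate 5% compressive-strength loss per 1% of entrained air."""
    return base_mpa * max(0.0, 1.0 - 0.05 * air_percent)

def water_with_superplasticizer(base_water_kg: float, reduction_fraction: float) -> float:
    """Reduce the free-water content by the given fraction (0.15-0.30 is the quoted range)."""
    return base_water_kg * (1.0 - reduction_fraction)

print(f"5% entrained air: ~{strength_with_air(BASE_STRENGTH_MPA, 5):.1f} MPa "
      f"(down from {BASE_STRENGTH_MPA:.0f} MPa)")
print(f"20% water reduction: ~{water_with_superplasticizer(BASE_WATER_KG_M3, 0.20):.0f} kg/m3 "
      f"(down from {BASE_WATER_KG_M3:.0f} kg/m3)")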
Mineral admixtures and blended cements
These are inorganic materials with pozzolanic or latent hydraulic properties. Such very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are ever more relevant for minimizing the impacts of cement use, which is notorious for being one of the largest sources (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes, the latter being relevant for circular-economy aspects of the construction industry, whose growing demand has ever greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to the high tensile strength and high electrical conductivity.
Carbon products have been added to make concrete electrically conductive, for deicing purposes.
New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can be an environmental solution to producing less landfill and using less sand in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate (see the nominal mixes described below), a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete Mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given by volume of cement : sand : aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.
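To make the idea of a nominal mix more concrete, the Python sketch below converts a 1:2:4 cement:sand:aggregate mix by loose volume into approximate batch masses for one cubic metre. The bulk densities, the water-to-cement ratio and the dry-volume allowance are assumed illustrative values rather than figures from this article, and actual proportioning would follow the relevant national standard.

# Rough batch quantities for a nominal 1:2:4 mix (by loose volume) per cubic metre of concrete.
# All bulk densities, the w/c ratio and the dry-volume factor are assumed illustrative values.
RATIO = {"cement": 1, "sand": 2, "aggregate": 4}
BULK_DENSITY_KG_M3 = {"cement": 1440, "sand": 1600, "aggregate": 1500}   # assumed
DRY_VOLUME_FACTOR = 1.54   # assumed allowance for voids lost on mixing and compaction
W_C_RATIO = 0.55           # assumed water-to-cement ratio by mass

def nominal_batch(volume_m3: float = 1.0) -> dict:
    """Return approximate masses (kg) of each material for the given concrete volume."""
    total_parts = sum(RATIO.values())
    dry_volume = volume_m3 * DRY_VOLUME_FACTOR
    batch = {}
    for material, parts in RATIO.items():
        batch[material] = dry_volume * parts / total_parts * BULK_DENSITY_KG_M3[material]
    batch["water"] = batch["cement"] * W_C_RATIO
    return batch

for material, kilograms in nominal_batch().items():
    print(f"{material:>9}: {kilograms:6.0f} kg per m3")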
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete.
Premixing the cement and water into a paste before combining these materials with aggregates has been shown to increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.
Sample analysis – Workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of . A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
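As a small illustration of how a measured slump might be interpreted, the Python sketch below buckets a result into broad workability descriptions. The band limits and the example applications are assumed, simplified values chosen for illustration; ASTM C143 and EN 12350-2 govern how slump is measured, not how these labels are assigned.

# Bucket a measured slump (in millimetres) into rough workability descriptions.
# The band limits and example uses are assumed illustrative values, not taken from a standard.
SLUMP_BANDS = [
    (25, "very low workability (e.g. road bases, dry lean mixes)"),
    (75, "low to medium workability (e.g. footings, ordinary slabs)"),
    (150, "high workability (e.g. congested reinforcement, pumped mixes)"),
]

def describe_slump(slump_mm: float) -> str:
    for upper_limit_mm, description in SLUMP_BANDS:
        if slump_mm <= upper_limit_mm:
            return description
    return "very high workability or flowing mix (check for segregation)"

for measured_mm in (20, 60, 110, 200):
    print(f"slump {measured_mm:3d} mm -> {describe_slump(measured_mm)}")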
After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
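The statement that most of the final strength develops within the first weeks can be visualised with a simple hyperbolic strength-gain relation of the form f(t) = f28 · t / (a + b·t), similar in shape to expressions used for moist-cured ordinary Portland cement concrete. The constants a and b and the 28-day strength in the Python sketch below are assumed illustrative values; actual strength development depends on the cement type and the curing regime.

# Illustrative strength-gain curve f(t) = f28 * t / (a + b * t), with t in days.
# a, b and the 28-day strength are assumed illustrative values for a moist-cured mix.
A_DAYS = 4.0
B = 0.85
F28_MPA = 30.0   # assumed 28-day compressive strength

def strength_at(t_days: float, f28: float = F28_MPA) -> float:
    """Approximate compressive strength (MPa) at an age of t_days."""
    return f28 * t_days / (A_DAYS + B * t_days)

for age_days in (3, 7, 14, 28, 90, 365):
    fraction_of_28_day = strength_at(age_days) / strength_at(28)
    print(f"day {age_days:3d}: ~{strength_at(age_days):4.1f} MPa "
          f"({fraction_of_28_day:4.0%} of the 28-day strength)")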
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-sectional dimensions of the elements and the service conditions of the structure. Addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. The adjacent picture shows one of many ways to achieve this, ponding—submerging setting concrete in water and wrapping in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation in the surface of cracks, adding compression strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete"' or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
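The quoted 15 to 25 percent void content translates directly into temporary stormwater storage within the pavement layer itself. The short Python sketch below shows the arithmetic for an assumed slab thickness of 150 mm; both the thickness and the scenario are illustrative assumptions.

# Temporary water storage of a pervious concrete layer = layer thickness * void fraction.
# The 150 mm thickness is an assumed example; void fractions span the 15-25% range above.
def storage_depth_mm(thickness_mm: float, void_fraction: float) -> float:
    """Depth of rainfall (mm) that the interconnected voids can hold before overflowing."""
    return thickness_mm * void_fraction

for void_fraction in (0.15, 0.20, 0.25):
    held_mm = storage_depth_mm(150.0, void_fraction)
    print(f"150 mm slab at {void_fraction:.0%} voids stores ~{held_mm:.0f} mm of rain")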
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for specialized applications, such as drains.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock/ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali silica reaction due to pore refinement. They are also generally cost-effective in comparison to other aggregates, suitable for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remain one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
The strength of concrete is dictated by its function. Very low-strength— or less—concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, concrete is often used. concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use concrete of or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 in cement manufacturing are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.
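The calcination term can be checked with basic stoichiometry: each kilogram of calcium oxide produced by decomposing calcium carbonate (CaCO3 → CaO + CO2) releases CO2 in the ratio of the molar masses, 44/56. The clinker CaO content assumed in the Python sketch below is an illustrative figure, so the result is only a rough order-of-magnitude check on the process emissions; it excludes the fuel-combustion term entirely.

# Rough process (calcination) CO2 per tonne of clinker, from CaCO3 -> CaO + CO2.
# Molar masses are standard values; the clinker CaO mass fraction is an assumed figure.
M_CAO, M_CO2 = 56.08, 44.01          # g/mol
CAO_FRACTION_OF_CLINKER = 0.65       # assumed mass fraction of CaO in the clinker

def calcination_co2_kg_per_tonne(cao_fraction: float = CAO_FRACTION_OF_CLINKER) -> float:
    """kg of CO2 released by decarbonation per tonne of clinker (fuel CO2 excluded)."""
    cao_kg = 1000.0 * cao_fraction
    return cao_kg * (M_CO2 / M_CAO)

print(f"~{calcination_co2_kg_per_tonne():.0f} kg of CO2 per tonne of clinker "
      "from calcination alone")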
Once in place, concrete offers a great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey).
Construction with concrete
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced concrete
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but much weaker in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it sets. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ), and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm of cover, both above and below the steel reinforcement, to resist spalling and corrosion, which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concrete, are used for specialized applications, predominantly as a means of controlling cracking.
Precast concrete
Precast concrete is concrete which is cast in one place for use elsewhere, making it a transportable product. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, the scale of the product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside is the contribution to greenhouse gas emissions from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacturing structural elements apart from the series of events that determines the overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with national standards.
Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process and avoid damaging overheating. Similar systems are still used; depending on the volume of the pour, the concrete mix used, and the ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods are also used to pre-cool the concrete mix in mass concrete structures.
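The scale of the heat problem can be illustrated with a simple adiabatic temperature-rise estimate, sketched below. All input values (cement content, heat of hydration, density, specific heat) are assumed typical figures rather than data from this article, and the calculation ignores heat losses, so it represents a rough worst case.

```python
# Rough adiabatic temperature-rise estimate for a mass concrete pour,
# illustrating why embedded cooling pipes are used. All numbers are assumed
# typical values, not data from this article, and heat losses are ignored
# (worst case).

CEMENT_CONTENT_KG_PER_M3 = 300.0      # assumed mix design
HEAT_OF_HYDRATION_KJ_PER_KG = 400.0   # assumed total heat release of the cement
CONCRETE_DENSITY_KG_PER_M3 = 2400.0   # assumed normal-weight concrete
SPECIFIC_HEAT_KJ_PER_KG_K = 1.0       # assumed specific heat of concrete

heat_per_m3 = CEMENT_CONTENT_KG_PER_M3 * HEAT_OF_HYDRATION_KJ_PER_KG  # kJ per m3
delta_t = heat_per_m3 / (CONCRETE_DENSITY_KG_PER_M3 * SPECIFIC_HEAT_KJ_PER_KG_K)

print(f"Adiabatic temperature rise: about {delta_t:.0f} K")  # ~50 K for these inputs
```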
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix with a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material and then roller-compacted into a dense, strong mass.
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed structures
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag; prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete.
A substantial share of highways in the United States is paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist) or a wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~ 4.5 °C), and
Temperature stays below 50 °F (10 °C) for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
Concrete must reach a minimum compressive strength before it is exposed to extreme cold. CSA A23.1 specifies a compressive strength of 7.0 MPa to be considered safe for exposure to freezing.
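For orientation, the two definitions above can be expressed as simple checks, as in the sketch below. This is a minimal illustration under assumed input conventions, not a substitute for ACI 306 or CSA A23.1.

```python
# Simplified illustration of the cold-weather definitions quoted above.
# This is a sketch for orientation only, not a substitute for ACI 306 or
# CSA A23.1; the input conventions are assumptions made for this example.

def aci_cold_weather(daily_avg_f, cold_hours_per_day):
    """ACI-style check: average daily air temperature below 40 deg F for more
    than three successive days, and temperature staying below the upper
    threshold in the definition for more than half of a 24-hour period."""
    three_cold_days = len(daily_avg_f) > 3 and all(t < 40.0 for t in daily_avg_f)
    more_than_half_day = any(hours > 12 for hours in cold_hours_per_day)
    return three_cold_days and more_than_half_day

def csa_cold_weather(air_temp_c, forecast_min_c_next_24h):
    """CSA A23.1-style check: air temperature at or below 5 deg C, or likely
    to fall below 5 deg C within 24 hours of placing the concrete."""
    return air_temp_c <= 5.0 or forecast_min_c_next_24h < 5.0

# Example: four successive days averaging 34-36 deg F, 13-15 cold hours per day
print(aci_cold_weather([35, 36, 34, 35], [14, 14, 13, 15]))              # True
print(csa_cold_weather(air_temp_c=7.0, forecast_min_c_next_24h=3.0))     # True
```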
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
Grouted aggregate (preplaced-aggregate concrete) is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids are then completely filled with pumped grout.
Roads
Concrete roads are more fuel efficient to drive on, more reflective, and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive in initial cost and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Eliminating the need to discard rainwater through drains also means that less electricity is needed (otherwise more pumping is required in the water-distribution system), and rainwater is not polluted by mixing with contaminated runoff; instead, it is immediately absorbed by the ground.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Concrete, cement and the environment
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
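The 100–200 kg per tonne estimate follows from the cement figures by simple proportion, as the sketch below shows. The cement mass fraction of concrete used here (10–20%) is an assumed typical range, not a value given in this article.

```python
# Rough reconstruction of the "100-200 kg of CO2 per tonne of concrete"
# estimate from the per-tonne-of-cement figures above. The cement mass
# fraction range used here (10-20% of the concrete mass) is an assumed
# typical range, not a value given in this article.

CO2_PER_TONNE_CEMENT_KG = 1000.0   # average figure quoted above
CALCINATION_SHARE = 0.60           # ~60% of cement emissions (calcination)
COMBUSTION_SHARE = 0.40            # ~40% of cement emissions (fuel combustion)

for cement_fraction in (0.10, 0.15, 0.20):
    co2_per_tonne_concrete = cement_fraction * CO2_PER_TONNE_CEMENT_KG
    print(f"cement fraction {cement_fraction:.0%}: "
          f"{co2_per_tonne_concrete:.0f} kg CO2/t concrete "
          f"(calcination ~{co2_per_tonne_concrete * CALCINATION_SHARE:.0f} kg, "
          f"combustion ~{co2_per_tonne_concrete * COMBUSTION_SHARE:.0f} kg)")
# -> 100, 150 and 200 kg CO2 per tonne of concrete, matching the quoted range
```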
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Concrete and climate change mitigation
Reducing the cement clinker content can have positive effects on the environmental life-cycle assessment of concrete. Some research on reducing the cement clinker content in concrete has already been carried out, but research strategies differ. Often, the replacement of some clinker with large amounts of slag or fly ash has been investigated based on conventional concrete technology; this can lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.
An environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when a fibre-reinforced high-performance concrete is used in place of typical reinforced concrete cladding.
Studies have been conducted about commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) of GGBS decreased by 1.1 kg CO2 eq/m3, while FA decreased by 17.3 kg CO2 eq/m3 when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.
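As a purely illustrative reading of these results, the sketch below linearly extrapolates the reported per-10% reductions to other replacement ratios; the linearity is an assumption, valid only as far as the mixes the cited study actually tested.

```python
# Minimal linear extrapolation of the per-10% GWP reductions reported above
# for GGBS and fly ash (FA) replacement. Purely illustrative: it assumes the
# reported trend is linear across the whole range, which the cited study only
# establishes for the mixes it actually tested.

REDUCTION_PER_10_PERCENT = {"GGBS": 1.1, "FA": 17.3}  # kg CO2 eq/m3, from the text

def gwp_reduction(admixture, replacement_percent):
    """Estimated GWP reduction (kg CO2 eq/m3) at a given replacement ratio."""
    return REDUCTION_PER_10_PERCENT[admixture] * (replacement_percent / 10.0)

for percent in (10, 30, 50):
    estimates = {name: round(gwp_reduction(name, percent), 1)
                 for name in REDUCTION_PER_10_PERCENT}
    print(f"{percent}% replacement: {estimates}")
# e.g. at 30% replacement: GGBS ~3.3 and FA ~51.9 kg CO2 eq/m3 (if linear)
```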
Researchers at University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.
Concrete and climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
Concrete – health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect on 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect on 23 June 2018 for general industry, hydraulic fracturing and maritime work. The deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Circular economy
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
End-of-life: concrete degradation and waste
Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The microfungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium, and silicon.
Concrete may be considered waste according to the European Commission decision 2014/955/EU on the List of Waste, under chapter 17 (construction and demolition wastes, including excavated soil from contaminated sites), sub-chapter 17 01 (concrete, bricks, tiles and ceramics), and entries 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, of which close to 4% is considered hazardous. Germany, France and the United Kingdom were the top three generators, with 86,412, 68,976 and 68,732 thousand tons of construction waste, respectively.
Currently, there are no end-of-waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste, repurposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse of concrete
Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse.
Other cases of reuse are possible with precast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use on other building sites. Studies show that back-building and remounting plans for building units (i.e., reuse of prefabricated concrete) are an alternative form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete can in particular be kept in the life cycle longer through recycling. Prefabricated construction is a prerequisite for structures designed to be taken apart. In the case of optimal application in the building carcass, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met. The viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements that may be classified as obsolete.
Recycling of concrete
Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects.
Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces waste landfill. However, it has little impact on reducing greenhouse gas emissions as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome.
The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes.
By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate.
Chlorides and sulfates are undesired contaminants originating from soil and weathering, and can provoke corrosion problems in aluminium and steel structures. The final product, recycled concrete aggregate (RCA), presents interesting properties such as an angular shape, a rougher surface, lower specific gravity (about 20% lower), higher water absorption, and a pH greater than 11 – this elevated pH increases the risk of alkali reactions.
The lower density of RCA usually increases project efficiency and improves job costs: recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make it the preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, they are generally cheaper to obtain than virgin material.
Applications of recycled concrete aggregate
The main commercial applications of the final recycled concrete aggregate are:
Aggregate base course (road base), or the untreated aggregate used as a foundation for roadway pavement, is the underlying layer (under the pavement surfacing) which forms a structural foundation for paving. To date this has been the most popular application for RCA due to its technical and economic advantages.
Aggregate for ready-mix concrete, by replacing from 10 to 45% of the natural aggregates in the concrete mix with a blend of cement, sand and water. Some concept buildings are showing the progress of this field. Because the RCA itself contains cement, the ratios of the mix have to be adjusted to achieve desired structural requirements such as workability, strength and water absorption.
Soil Stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal quality subgrade material used to enhance the load bearing capacity of that subgrade.
Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH-value impacts.
Landscape Materials: to promote green architecture. To date, recycled concrete aggregate has been used as boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls, and more.
Cradle-to-cradle challenges
The applications developed for RCA so far are not exhaustive, and many more uses are likely to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as a secondary raw material in a safe and economic way. However, considering the goal of circularity of resources in the concrete life cycle, the only application of RCA that can be considered true recycling of concrete is the replacement of natural aggregates in concrete mixes. All the other applications fall under the category of downcycling. It is estimated that even near-complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world.
The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption.
World records
The world record for the largest concrete pour in a single project is held by the Three Gorges Dam in Hubei Province, China, built by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record, 12.3 million cubic meters, was held by the Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a record vertical height.
The Polavaram dam works in Andhra Pradesh entered the Guinness World Records on 6 January 2019 by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by the contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete placed within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by a joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for the largest continuously poured concrete floor was completed on 8 November 1997, in Louisville, Kentucky, by design-build firm EXXCEL Project Management. The monolithic placement was finished in 30 hours to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed on 18 October 2010, in New Orleans, Louisiana, by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered to below sea level so that construction of the Inner Harbor Navigation Canal Sill & Monolith Project can be completed in the dry.
5374 | https://en.wikipedia.org/wiki/Condom | Condom | A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms.
The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.
With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.
Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year.
Medical uses
Birth control
The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use, or typical use effectiveness rates are of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.
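The Pearl Index mentioned above expresses failures as pregnancies per 100 woman-years of exposure. The sketch below shows the calculation in minimal form; the sample counts are hypothetical and chosen only to reproduce the 2% and 18% per-year figures.

```python
# Minimal sketch of the Pearl Index calculation mentioned above: unintended
# pregnancies per 100 woman-years of exposure. The sample counts below are
# hypothetical, chosen only to reproduce the 2% and 18% per-year figures.

def pearl_index(pregnancies, woman_months):
    """Pregnancies per 100 woman-years = pregnancies * 1200 / woman-months."""
    return pregnancies * 1200.0 / woman_months

# 1,000 women each followed for 12 months -> 12,000 woman-months of exposure
print(pearl_index(pregnancies=20, woman_months=12_000))    # 2.0  (perfect use)
print(pearl_index(pregnancies=180, woman_months=12_000))   # 18.0 (typical use)
```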
Sexually transmitted infections
Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effective birth control methods (such as IUD) in situations where STD protection is also desired.
For this reason, condoms are frequently used by those in the swinging (sexual practice) community.
According to a 2000 report by the National Institutes of Health (NIH), consistent use of latex condoms reduces the risk of HIV transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom, down from 6.7 per 100 person-years. Analysis published in 2007 from the University of Texas Medical Branch and the World Health Organization found similar risk reductions of 80–95%.
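As a rough cross-check, the relative risk reduction implied by the two quoted incidence rates can be computed directly, as sketched below; this is arithmetic on the figures above, not a re-analysis of the underlying studies.

```python
# Cross-check of the quoted HIV risk reduction using the two incidence rates
# above (0.9 vs 6.7 seroconversions per 100 person-years). This is simple
# arithmetic only, not a substitute for the pooled estimates in the reviews.

with_condom = 0.9       # seroconversions per 100 person-years
without_condom = 6.7    # seroconversions per 100 person-years

relative_risk_reduction = 1 - with_condom / without_condom
print(f"{relative_risk_reduction:.0%}")   # ~87%, in line with the ~85% figure
```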
The 2000 NIH review concluded that condom use significantly reduces the risk of gonorrhea for men. A 2006 study reports that proper condom use decreases the risk of transmission of human papillomavirus (HPV) to women by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women.
Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases like HPV and herpes may be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use.
Condoms may also be useful in treating potentially precancerous cervical changes. Exposure to human papillomavirus, even in individuals already infected with the virus, appears to increase the risk of precancerous changes. The use of condoms helps promote regression of these changes. In addition, researchers in the UK suggest that a hormone in semen can aggravate existing cervical cancer, and that condom use during sex can prevent exposure to the hormone.
Causes of failure
Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were only approved by the FDA for vaginal sex. The One Male Condom received FDA approval for anal sex on February 23, 2022.
"Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage.
Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.
Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive.
It is recommended that condom manufacturers avoid very thick and very thin condoms, because both are considered less effective. Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it".
Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.
Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or dislike the feel of condoms and decide to "take a chance". This behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).
Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use. Using a fine needle to make several pinholes at the tip of the condom is believed to significantly reduce its effectiveness. Cases of such condom sabotage have occurred.
Side effects
The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present.
Use
Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs.
Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects.
Adult film industry
In 2012 proponents gathered 372,000 voter signatures through a citizens' initiative in Los Angeles County to put Measure B on the 2012 ballot. As a result, Measure B, a law requiring the use of condoms in the production of pornographic films, was passed. This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to other places without this requirement. Producers claim that condom use depresses sales.
Sex education
Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active."
In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use.
Infertility treatment
Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse.
Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.
For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic.
Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.
Other uses
Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found.
Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies.
Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose.
Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve.
Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording.
Types
Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes and shapes.
They also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist.
Female condom
Male condoms have a tight ring to form a seal around the penis, while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile rubber. Medtech Products produces a female condom made of latex.
Materials
Natural latex
Latex has outstanding elastic properties: Its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.
While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available.
Synthetic
The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most recently, polyisoprene.
Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.
However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive.
Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, they cannot be used with an oil-based lubricant.
Lambskin
Condoms made from sheep intestines, labeled "lambskin", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses — such as HIV, herpes, and genital warts — are small enough to pass. However, there are to date no clinical data confirming or denying this theory.
As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases" and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs."
Some believe that lambskin condoms provide a more "natural" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than other types, and as slaughter by-products they are not vegetarian.
Spermicide
Some latex condoms are lubricated at the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. Nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood has discontinued the distribution of condoms so lubricated.
Ribbed and studded
Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms.
Other
A Swiss company (Lamprecht A.G) produces extra small condoms aimed at the teenage market. Designed to be used by boys as young as fourteen, Ceylor 'Hotshot' condoms are aimed at reducing teenage pregnancies.
The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, hopefully allowing the victim a chance to escape.
A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life and may be coated on the inside with a sperm-friendly lubricant.
Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STDs.
In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse.
Prevalence
The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms.
History
Before the 19th century
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.
In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis.
After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of "condons", the first documented use of that word or any similar spelling. Other early spellings include "condam" and "quondam", from which the Italian derivation guantone has been suggested, from guanto, "a glove".
In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.
In the 18th century, Casanova was one of the first people reported to use "assurance caps" to prevent impregnating his mistresses.
From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and, they are not used consistently due to inconvenience, expense, or loss of sensation.
Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place they were generally used only by the middle and upper classes, due to both expense and lack of sex education.
1800 through 1920s
The early 19th century saw contraceptives promoted to the poorer classes for the first time. Writers on contraception tended to prefer other birth control methods to the condom. By the late 19th century, many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods controlled by women, such as diaphragms and spermicidal douches. Other writers cited both the expense of condoms and their unreliability (they were often riddled with holes and often fell off or tore). Still, they discussed condoms as a good option for some and the only contraceptive that protects from disease.
Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method.
Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught abstinence was the only way to avoid sexually transmitted diseases. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis.
The German military was the first to promote condom use among its soldiers in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use.
In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control because their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays.
However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s.
Rubber and manufacturing advances
In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote.
For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid.
Around 1920 patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called "latex" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber).
Until the 1920s, all condoms were individually hand-dipped by semi-skilled workers. Throughout the 1920s, advances in the automation of the condom assembly line were made. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market.
1930 to present
In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (though limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture, which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States.
Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany, which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone. (This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.)
In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however, was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979.
After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly.
Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous.
New developments continued to occur in the condom market, with the first polyurethane condom—branded Avanti and produced by the manufacturer of Durex—introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. Condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US state of Vermont (on September 17, 2013, the Californian Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval).
The global condom market was estimated at US$9.2 billion in 2020.
Etymology and other terms
The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is however no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II acceded to the throne in 1660.
A variety of unproven Latin etymologies have been proposed, based on words meaning "receptacle", "house", and "scabbard or case". It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.
Society and culture
Some moral and scientific criticism of condoms exists despite the many benefits that scientific consensus and sexual health experts agree condoms provide.
Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STDs. Established couples, on the other hand, have few concerns about STDs and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. The debate over condom usage is also tempered by the group at which the argument is directed: age and the question of a stable partner are factors, as is the distinction between heterosexual and homosexual couples, who have different kinds of sex and face different risk factors and consequences.
Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.
Religious
The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live."
On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms).
The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view—including all statements from the Vatican—is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI.
The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.
In a November 2010 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection. He gave as an example male prostitutes. There was some confusion at first whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.
Scientific and environmental
More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Talc is known to be a potential irritant to mucous membranes (such as in the vagina), although such irritation is rare during normal use. Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well.
Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.
In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, given that they are a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem.
While biodegradable, latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.
Cultural barriers to use
In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations. Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms.
Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms.
As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used.
A telephone survey conducted by Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use. As conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected.
In the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to "wasting" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an "elixir" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are "for prostitutes" and that respectable women should not use them. A few clerics even promote the lie that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice and there are efforts to end it.
Because of the strong desire and social pressure to establish fertility as soon as possible within marriage, Middle-Eastern couples who have not yet had children rarely use condoms.
In 2017, India restricted TV advertisements for condoms to between the hours of 10 pm and 6 am. Family planning advocates were against this, saying it was liable to "undo decades of progress on sexual and reproductive health".
Major manufacturers
One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.
Economics
In the United States condoms usually cost less than US$1.00.
Research
A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. However, the spray-on condom has not gone to market because the drying time could not be reduced below two to three minutes.
The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom is in the clinical trial phase and has not yet been approved for use.
Also developed in 2005 is a condom treated with an erectogenic compound. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand; it remained in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom lubricated with "Excite Gel", which contains the amino acid L-arginine and is intended to improve the strength of the erectile response.
In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that "significantly preserves or enhances pleasure" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: "The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?" In November of the same year, 11 research teams were selected to receive the grant money.
References
External links
"Sheathing Cupid's Arrow: the Oldest Artificial Contraceptive May Be Ripe for a Makeover", The Economist, February 2014.
16th-century introductions
HIV/AIDS
Prevention of HIV/AIDS
Penis
Sexual health
World Health Organization essential medicines
Contraception for males |
5376 | https://en.wikipedia.org/wiki/Cladistics | Cladistics | Cladistics is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Outside of cladistics, many of these terms are normally used paraphyletically, e.g. as a 'grade', which is fruitless to delineate precisely, especially when extinct species are included. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) Upon finding that the group is paraphyletic this way, either the excluded groups should be admitted into the clade, or the group should be abolished.
Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side-branch, without distinguishing whether an actual ancestor of other groupings was found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
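To make the parsimony criterion concrete, the following minimal sketch (in Python) applies Fitch's small-parsimony count to a single binary character, warm- versus cold-bloodedness, on two rival rooted trees. The taxon sample, character coding, and tree shapes are simplified assumptions chosen for illustration, not data from any published analysis; real analyses score many characters at once and may use other optimality criteria.

def fitch(tree, states):
    """Return (state set, minimum number of state changes) for one character."""
    if isinstance(tree, str):                  # leaf: look up its observed state
        return {states[tree]}, 0
    left, right = tree
    left_set, left_cost = fitch(left, states)
    right_set, right_cost = fitch(right, states)
    shared = left_set & right_set
    if shared:                                 # children agree: no extra change needed
        return shared, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # disagreement: count one change

# Warm-blooded ("1") versus cold-blooded ("0"), scored on two rival topologies:
states = {"mammal": "1", "bird": "1", "crocodile": "0", "lizard": "0"}
tree_a = (("mammal", "bird"), ("crocodile", "lizard"))
tree_b = (("mammal", "lizard"), ("bird", "crocodile"))
print(fitch(tree_a, states)[1])   # 1 change needed
print(fitch(tree_b, states)[1])   # 2 changes needed

For this one character, parsimony alone would favor the first topology; on the accepted amniote tree, which groups crocodilians with birds rather than grouping mammals with birds, the same character requires two changes, which is why warm-bloodedness is treated as a homoplasy in the terminology section below.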
Until recently, for example, the generally accepted hypothesis of the ancestral relations among turtles, lizards, crocodilians, and birds placed lizards closer to crocodilians and birds than turtles are.
If that phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms in which turtles are closer to crocodilians and birds than lizards are.
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since these cladograms represent two mutually exclusive hypotheses of the evolutionary history, at most one of them is correct.
According to the current, universally accepted hypothesis, all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirrhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
Lemurs and tarsiers may have looked closely related to humans, in the sense of being close on the evolutionary tree to humans. However, from the perspective of a tarsier, humans and lemurs would have looked close, in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups.
Terminology for character states
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds.
An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours.
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established.
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states: monophyletic groups are diagnosed by synapomorphies, paraphyletic groups are united only by symplesiomorphies, and polyphyletic groups are united by homoplasies.
Criticism
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
Issues
Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
Extinction status
An otherwise extinct group with any extant descendants, is not considered (literally) extinct, and for instance does not have a date of extinction.
Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually, and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to be whittled down to just two. Only then can one theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take a much more extended time than one is usually aware of. In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity. A more detailed account will give details about fractions of introgressions between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.
Horizontal gene transfer
Horizontal gene transfer is the mobility of genetic information between different organisms that can have immediate or delayed effects for the reciprocal host. There are several processes in nature which can cause horizontal gene transfer. This typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes by determining the phylogeny of the individual genes using cladistics.
Naming stability
If mutual relationships are unclear, there are many possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged in them. Naming changes are the direct result of changes in the recognition of mutual relationships, which often is still in flux, especially for extinct species. Hanging on to older naming and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. E.g. Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus, and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which then may come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and level. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may have lived only for a short time does not affect that assessment in cladistics.
In disciplines other than biology
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale studies use cladistic methods to reconstruct the protoversion of many myths. Mythological phylogenies constructed with mythemes clearly support low rates of horizontal transmission (borrowings), historical (sometimes Palaeolithic) diffusions, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification.
See also
Bioinformatics
Biomathematics
Coalescent theory
Common descent
Glossary of scientific naming
Language family
Patrocladogram
Phylogenetic network
Scientific classification
Stratocladistics
Subclade
Systematics
Three-taxon analysis
Tree model
Tree structure
Notes and references
Bibliography
Available free online at Gallica (no direct URL). This is the paper credited with the first use of the term 'clade'.
Translated from manuscript in German eventually published in 1982 (Phylogenetische Systematik, Verlag Paul Parey, Berlin).
d'Huy, Julien (2012b), "Le motif de Pygmalion : origine afrasienne et diffusion en Afrique". Sahara, 23: 49-59 .
d'Huy, Julien (2013a), "Polyphemus (Aa. Th. 1137): A phylogenetic reconstruction of a prehistoric tale". Nouvelle Mythologie Comparée / New Comparative Mythology 1.
d'Huy, Julien (2013c) "Les mythes évolueraient par ponctuations". Mythologie française, 252, 2013c: 8-12.
d'Huy, Julien (2013d) "A Cosmic Hunt in the Berber sky : a phylogenetic reconstruction of Palaeolithic mythology". Les Cahiers de l'AARS, 15, 2013d: 93-106.
Reissued 1997 in paperback. Includes a reprint of Mayr's 1974 anti-cladistics paper, "Cladistic analysis or cladistic classification", at pp. 433–476.
Tehrani, Jamshid J., 2013, "The Phylogeny of Little Red Riding Hood", PLOS ONE, 13 November.
External links
OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design)
Willi Hennig Society
Cladistics (scholarly journal of the Willi Hennig Society)
Phylogenetics
Evolutionary biology
Zoology
Philosophy of biology |
5377 | https://en.wikipedia.org/wiki/Calendar | Calendar | A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.
Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.
Etymology
The term calendar is taken from Latin kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare, 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern).
History
The course of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year.
The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.
During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.
A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar.
A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.
Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar.
The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.
The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke-Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.
Systems
A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years.
The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction.
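As a rough sketch of such a counting system, the Python function below converts a Gregorian calendar date to a Julian Day Number using the standard integer arithmetic for that conversion; once dates are plain day numbers, differences and even weekdays follow from simple subtraction and remainders. The printed values are only illustrative.

def julian_day_number(year, month, day):
    """Gregorian calendar date -> Julian Day Number (a continuous count of days)."""
    a = (14 - month) // 12            # 1 for January and February, 0 otherwise
    y = year + 4800 - a               # shift to a year that begins in March
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

jdn = julian_day_number(2000, 1, 1)
print(jdn)                                   # 2451545
print(jdn - julian_day_number(1970, 1, 1))   # 10957 days after the Unix epoch date
print(jdn % 7)                               # 5, i.e. Saturday (0 corresponds to Monday)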
Other calendars have one (or multiple) larger units of time.
Calendars that contain one level of cycles:
week and weekday – this system (without year, the week number keeps on increasing) is not very common
year and ordinal date within the year, e.g., the ISO 8601 ordinal date system
Calendars with two levels of cycles:
year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar
year, week, and weekday – e.g., the ISO week date
Cycles can be synchronized with periodic phenomena:
Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar.
Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar.
Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar.
The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month).
Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements.
Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.
Solar
Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.
Lunar
Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar.
Alexander Marshack, in a controversial reading, believed that marks on a bone baton represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.
Lunisolar
A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu calendar and the Buddhist calendar, which are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.
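A rough numerical sketch, using commonly quoted mean lengths for the lunar month and the solar year, shows why a 19-year cycle works for lunisolar intercalation: 235 lunar months span almost exactly the same number of days as 19 solar years, so seven extra months intercalated over the cycle keep the two in step.

synodic_month = 29.5306      # mean lunar (synodic) month in days, approximate
tropical_year = 365.2422     # mean solar (tropical) year in days, approximate
print(19 * tropical_year)    # about 6939.60 days
print(235 * synodic_month)   # about 6939.69 days, nearly the same span
print(235 - 19 * 12)         # 7 months to intercalate across the 19 years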
Subdivisions
Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.
Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length.
Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.
Other types
Arithmetical and astronomical
An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult.
An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. Beyond that point, the rules would need to be modified from observations made since the invention of the calendar.
Complete and incomplete
Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar.
Usage
The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season.
Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.
Gregorian
The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its widely used solar aspect is a cycle of leap days repeating over a 400-year period, designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days).
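The leap-day cycle amounts to a short arithmetic rule. The sketch below (Python) encodes it and confirms the 365.2425-day average over one 400-year cycle; the specific starting year is arbitrary.

def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_leap(y) for y in range(2000, 2400))
print(leap_days)               # 97 leap days per 400-year cycle
print(365 + leap_days / 400)   # 365.2425, the average Gregorian year length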
The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923.
The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).
Religious
The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days.
While the Gregorian calendar was itself historically motivated by the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes.
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season.
Eastern Christians, including the Orthodox Church, use the Julian calendar.
The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622).
With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years.
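The roughly 33-year cycle follows from simple arithmetic on the mean year lengths. A rough sketch (the 354.367-day mean Hijri year, twelve mean synodic months, is an assumption of the sketch rather than a value taken from this article):

```python
# Arithmetic behind the ~33-year seasonal cycle of the lunar Hijri calendar.
solar_year = 365.2422          # mean tropical year, in days
hijri_year = 12 * 29.530589    # twelve mean synodic months, about 354.37 days

annual_drift = solar_year - hijri_year          # mean drift per Hijri year
years_per_cycle = solar_year / annual_drift     # Hijri years until the drift adds up to one solar year

print(round(annual_drift, 1))      # about 10.9 days of mean drift (11 or 12 whole calendar days)
print(round(years_per_cycle, 1))   # about 33.6, so the seasons recur after roughly 33 Islamic years
```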
Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states.
The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar.
Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).
The Hebrew calendar is used by Jews worldwide for religious and cultural affairs. It also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as the dating of cheques).
Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí calendar, also known as the Badi calendar, was first established by the Bab in the Kitab-i-Asma. It is a purely solar calendar and comprises 19 months of 19 days each.
National
The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes.
The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.
Fiscal
A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on Diwali festival and end the day before the next year's Diwali festival.
In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar.
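Python's standard library implements the ISO week rules directly, which makes both rules easy to verify. A small illustrative sketch:

```python
# ISO weeks run Monday to Sunday, and week 1 is the week containing 4 January.
from datetime import date

print(date(2024, 1, 4).isocalendar()[1])   # 1 -- 4 January always falls in ISO week 1

# 28 December always lies in the last ISO week of its year, so it reveals
# which years need a 53rd week (roughly every 5th or 6th year).
long_years = [y for y in range(2000, 2030) if date(y, 12, 28).isocalendar()[1] == 53]
print(long_years)   # [2004, 2009, 2015, 2020, 2026]
```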
Formats
The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc.
In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word.
In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US style of display is also used in Britain.
It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary.
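The Python standard library can render such monthly grids with either starting day, which also illustrates the four-row and six-row edge cases mentioned above. An illustrative sketch:

```python
import calendar

# US convention: weeks run Sunday through Saturday.
us = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
print(us.formatmonth(2015, 2))    # February 2015 began on a Sunday: 28 days fit in exactly 4 rows

# ISO/British convention: weeks run Monday through Sunday.
uk = calendar.TextCalendar(firstweekday=calendar.MONDAY)
print(uk.formatmonth(2025, 3))    # March 2025 began on a Saturday: 31 days spill over into 6 rows
```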
When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.
Software
Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list.
Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).
See also
General Roman Calendar
List of calendars
Advent calendar
Calendar reform
Calendrical calculation
Docket (court)
History of calendars
Horology
List of international common standards
List of unofficial observances by date
Real-time clock (RTC), which underlies the Calendar software on modern computers.
Unit of time
References
Citations
Sources
Further reading
External links
Calendar converter, including all major civil, religious and technical calendars.
Units of time |
5378 | https://en.wikipedia.org/wiki/Physical%20cosmology | Physical cosmology | Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
Subject history
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
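The standard-candle step in this chain of reasoning is simply the inverse-square law. A minimal sketch with illustrative numbers (the luminosity and flux below are assumptions chosen for the example, not historical measurements):

```python
import math

# Inverse-square law: a source of luminosity L observed with flux F lies at
# distance d, where F = L / (4 * pi * d**2).
L_SUN = 3.828e26                     # solar luminosity in watts
luminosity = 1.0e4 * L_SUN           # assumed intrinsic luminosity of a Cepheid-like star
flux = 1.0e-12                       # measured flux in W/m^2 (illustrative)

distance_m = math.sqrt(luminosity / (4 * math.pi * flux))
distance_pc = distance_m / 3.086e16  # metres per parsec

print(f"inferred distance: about {distance_pc:,.0f} parsecs")
```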
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
Energy of the cosmos
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later energy release, that is, one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense; this follows the law of conservation of energy.
Different forms of energy may dominate the cosmos: relativistic particles, which are referred to as radiation, or non-relativistic particles, referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have a rest mass much greater than their kinetic energy and so move much more slowly than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
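The different dilution rates are simple power laws in the scale factor a: matter dilutes as a^−3 (volume only) while radiation dilutes as a^−4 (volume plus the stretching of each photon's wavelength). A toy sketch with illustrative starting densities:

```python
# Toy illustration of matter versus radiation dilution with the scale factor a.
def energy_densities(a, rho_m0=1.0, rho_r0=1000.0):
    """Energy densities at scale factor a, normalised so that a = 1 at the start."""
    return rho_m0 * a**-3, rho_r0 * a**-4

for a in (1.0, 10.0, 1.0e3, 1.0e4):
    rho_m, rho_r = energy_densities(a)
    print(f"a = {a:>8g}   matter/radiation = {rho_m / rho_r:g}")

# The ratio grows linearly with a, so radiation dominates early and matter
# takes over once a exceeds rho_r0 / rho_m0 (1000 in this toy normalisation).
```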
History of the universe
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
Equations of motion
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool down and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
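A minimal numerical sketch of this behaviour uses the Friedmann equation for a flat universe containing only matter and a cosmological constant, H(a)^2 = H0^2 (Ω_m a^−3 + Ω_Λ). The parameter values below are rough ΛCDM numbers and the integrator is deliberately crude:

```python
import math

H0 = 1.0 / 14.4                 # Hubble constant in 1/Gyr (roughly 68 km/s/Mpc)
OMEGA_M, OMEGA_L = 0.3, 0.7     # matter and cosmological-constant fractions (flat universe)

def hubble(a):
    """Hubble rate at scale factor a for a flat matter + Lambda universe."""
    return H0 * math.sqrt(OMEGA_M * a**-3 + OMEGA_L)

# Crude forward integration of da/dt = a * H(a) from a small scale factor to today (a = 1).
a, t = 1e-3, 0.0
while a < 1.0:
    dt = 1e-3 / hubble(a)       # keep each step small compared with the Hubble time
    a += a * hubble(a) * dt
    t += dt
print(f"age at a = 1: about {t:.1f} Gyr")   # roughly 13.9 Gyr with these inputs

# Deceleration turns into acceleration once the Lambda term exceeds half the matter term,
# i.e. at a = (OMEGA_M / (2 * OMEGA_L))**(1/3), about 0.6, billions of years before today.
print((OMEGA_M / (2 * OMEGA_L)) ** (1.0 / 3.0))
```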
Particle physics in cosmology
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale is roughly equal to the age of the universe at each point in time.
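In code, the rule of thumb is a comparison of two rates: a process matters while its rate Γ is at least comparable to the Hubble rate H, and it "freezes out" once the expansion wins. The scalings below are toy assumptions chosen only to illustrate the comparison:

```python
# Toy freeze-out check: a process stays cosmologically important while its
# rate Gamma is at least comparable to the Hubble rate H (i.e. 1/Gamma <= ~1/H).
def is_cosmologically_important(gamma_rate, hubble_rate):
    return gamma_rate >= hubble_rate

for a in (1.0, 10.0, 100.0, 1000.0):
    hubble = 100.0 / a**2        # radiation-era scaling, H ~ a**-2 (toy units)
    gamma = 1.0e4 / a**3         # a rate tracking particle number density, ~ a**-3 (toy units)
    print(f"a = {a:>6g}   Gamma/H = {gamma / hubble:g}   "
          f"important: {is_cosmologically_important(gamma, hubble)}")

# Gamma/H falls as 1/a here, so the process decouples once a grows beyond about 100.
```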
Timeline of the Big Bang
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
Areas of study
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
Very early universe
The early, hot universe appears to be well explained by the Big Bang from roughly 10−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967; they require a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
Big Bang Theory
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
Standard model of Big Bang cosmology
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
Cosmic microwave background
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 100,000. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
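Wien's displacement law shows at a glance why a 2.7 K black body peaks in the microwave band. A small sketch using the standard displacement constants:

```python
# Wien's displacement law applied to a 2.725 K black body.
T_CMB = 2.725                      # kelvin

B_WAVELENGTH = 2.897771955e-3      # Wien constant, m*K (wavelength form)
B_FREQUENCY = 5.878925757e10       # Wien constant, Hz/K (frequency form)

peak_wavelength_mm = 1e3 * B_WAVELENGTH / T_CMB
peak_frequency_ghz = 1e-9 * B_FREQUENCY * T_CMB

print(f"peak wavelength: about {peak_wavelength_mm:.2f} mm")    # ~1.06 mm
print(f"peak frequency:  about {peak_frequency_ghz:.0f} GHz")   # ~160 GHz, i.e. microwaves
```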
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
Formation and evolution of large-scale structure
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.
Dark matter
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
Dark energy
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe.
Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky.
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones.
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.
Gravitational waves
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
Other areas of inquiry
Cosmologists also study:
Whether primordial black holes were formed in our universe, and what happened to them.
Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies.
The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe.
See also
Accretion
Hubble's law
Illustris project
List of cosmologists
Physical ontology
Quantum cosmology
String cosmology
Universal Rotation Curve
References
Further reading
External links
From groups
Cambridge Cosmology – from Cambridge University (public home page)
Cosmology 101 – from the NASA WMAP group
Center for Cosmological Physics. University of Chicago, Chicago.
Origins, Nova Online – Provided by PBS.
From individuals
Gale, George, "Cosmology: Methodological Debates in the 1930s and 1940s", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.)
Madore, Barry F., "Level 5 : A Knowledgebase for Extragalactic Astronomy and Cosmology". Caltech and Carnegie. Pasadena, California.
Tyler, Pat, and Phil Newman "Beyond Einstein". Laboratory for High Energy Astrophysics (LHEA) NASA Goddard Space Flight Center.
Wright, Ned. "Cosmology tutorial and FAQ". Division of Astronomy & Astrophysics, UCLA.
Philosophy of physics
Philosophy of time
Astronomical sub-disciplines
Astrophysics |
5382 | https://en.wikipedia.org/wiki/Inflation%20%28cosmology%29 | Inflation (cosmology) | In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch is believed to have lasted from 10^−36 seconds to between 10^−33 and 10^−32 seconds after the Big Bang. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago).
Inflation theory was developed in the late 1970s and early 80s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.
In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize "for development of the concept of inflation in cosmology". In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology.
Overview
Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their separation.
Inflation may provide this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or "special", initial conditions at the Big Bang.
Theory
An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone.
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.
Space expands
In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).
In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric:
ds^2 = −(1 − Λr^2) dt^2 + dr^2/(1 − Λr^2) + r^2 dΩ^2
This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ.
Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.
Few inhomogeneities remain
Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes.
The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for not testable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.
Duration
A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation.
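The requirement is usually quoted in "e-folds", N = ln(a_end / a_start), and the arithmetic linking the linear expansion factor to the e-fold count is short. A sketch taking the commonly quoted factor of 10^26 as input:

```python
import math

# Number of e-folds corresponding to a linear expansion factor of 1e26.
expansion_factor = 1e26
N = math.log(expansion_factor)
print(f"N is about {N:.0f} e-folds")   # roughly 60

# At an approximately constant Hubble rate H during inflation, N e-folds take a time of about N / H.
```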
Reheating
Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.
Motivations
Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.
Horizon problem
The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chase Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy.
Flatness problem
The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem).
It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).
Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent.
Magnetic-monopole problem
The magnetic monopole problem, sometimes called "the exotic-relics problem", says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced.
Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory.
These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field.
Monopoles are predicted, according to Grand Unified Theories, to be copiously produced at high temperature,
and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe.
Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe.
A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written,
"Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!"
History
Precursors
In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty.
It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.
In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.
False vacuum
In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.
The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum.
Starobinsky inflation
In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era.
This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action
S = (1/2) ∫ d^4x √(−g) (M_P^2 R + R^2/(6M^2)),
which corresponds to the potential
V(φ) = Λ^4 (1 − e^(−√(2/3) φ/M_P))^2
in the Einstein frame. This results in the observables:
n_s = 1 − 2/N,   r = 12/N^2,
where N is the number of e-folds of inflation.
Monopole problem
In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.
Early inflationary models
Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles;
it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase.
In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.
Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because the model did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.
Slow-roll inflation
The bubble collision problem was solved by Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.
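In conventional notation, with φ the inflaton field, V(φ) its potential, H the Hubble rate and M_P the reduced Planck mass (symbols chosen here for illustration rather than taken from any specific model), the slow-roll regime is described by the approximate equations
3H\dot{\phi} \simeq -V'(\phi), \qquad H^2 \simeq \frac{V(\phi)}{3M_P^2},
which hold while the potential energy dominates the kinetic energy of the field; when the potential steepens, these approximations fail, inflation ends, and the field's oscillations about the minimum of its potential can reheat the universe.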
Effects of asymmetries
Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and Bardeen, Steinhardt and Turner.
Observational status
Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within about half a percent, and that it is homogeneous and isotropic to one part in 100,000.
Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field, is very specific and is characterized by only a few free parameters. Two of these describe the scalar spectrum: its amplitude and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe).
A further free parameter is the tensor-to-scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1.
Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, the WMAP spacecraft and other cosmic microwave background (CMB) experiments, and by galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index, n_s, is equal to one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s lies between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that n_s = 0.968 ± 0.006, and that the tensor-to-scalar ratio r is less than 0.11. These results are considered an important confirmation of the theory of inflation.
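In the conventional parametrization (written here for illustration), the scalar power spectrum is taken to be a power law,
P_s(k) = A_s \left( \frac{k}{k_*} \right)^{n_s - 1},
with amplitude A_s, spectral index n_s and pivot scale k_*, while the tensor-to-scalar ratio is defined as r = A_t / A_s, the ratio of the tensor (gravitational-wave) amplitude to the scalar amplitude; n_s = 1 corresponds to exact scale invariance.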
Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.
Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect, remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer (COBE), is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and that the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias.
An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (~10^16 GeV) is correct. In March 2014, the BICEP2 team announced the detection of B-mode CMB polarization, which they argued demonstrated inflation. The team announced that the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation.
Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great.
Theoretical status
In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field, although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as the inflaton.
One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain.
Fine-tuning problem
One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory.
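These conditions are usually stated through the slow-roll parameters, written here in their standard potential form with M_P the reduced Planck mass:
\epsilon \equiv \frac{M_P^2}{2} \left( \frac{V'}{V} \right)^2 \ll 1, \qquad |\eta| \equiv M_P^2 \left| \frac{V''}{V} \right| \ll 1,
the first expressing the flatness of the potential and the second the smallness of the inflaton's effective mass compared with the Hubble scale.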
Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation.
This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.
Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or 10^-3 times the Planck energy. The natural scale is naïvely the Planck scale, so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by a factor of roughly 10^-12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
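As a rough order-of-magnitude illustration, using the benchmark values quoted above,
\frac{V}{M_P^4} \sim \left( \frac{10^{16}\ \mathrm{GeV}}{10^{19}\ \mathrm{GeV}} \right)^4 = 10^{-12},
so an inflationary energy scale near the grand-unification scale corresponds to an energy density about twelve orders of magnitude below the Planck density.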
Eternal inflation
In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.
All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.
Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic.
Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.
In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating do not. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end; or, applying the boundary condition that a local observer must exist to observe it, that inflation will end as late as possible.
Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason.
Initial conditions
Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever.
Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally.
Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while many regard this as solving the initial conditions problem, some have disputed it, arguing that it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to these critics, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.
Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.
Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable.
Hybrid inflation
Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow-roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes energetically favorable for the second field to decay into a much lower-energy state.
In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.
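A commonly cited illustrative potential for this mechanism, in the two-field form introduced by Linde (with the couplings λ and g and the mass parameters m and M left as free parameters), is
V(\phi, \sigma) = \frac{1}{4\lambda}\left( M^2 - \lambda \sigma^2 \right)^2 + \frac{1}{2} m^2 \phi^2 + \frac{1}{2} g^2 \phi^2 \sigma^2,
in which φ is the slowly rolling field and σ is the "waterfall" field: σ is held at zero while φ is large, and once φ falls below the critical value M/g, σ becomes unstable and rolls to its true minimum, ending inflation.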
Relation to dark energy
Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, about 10^-12 GeV, roughly 27 orders of magnitude less than the scale of inflation.
Inflation and string cosmology
The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.
Inflation and loop quantum gravity
When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density becomes larger than the quantized spacetime can hold, the universe is thought to bounce back.
Alternatives and adjuncts
Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.
Big bounce
The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang.
The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters.
This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
Ekpyrotic and cyclic models
The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.
String gas cosmology
String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. The original model did not "solve the entropy and flatness problems of standard cosmology", although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario.
Varying c
Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
Criticisms
Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding,
"we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology."
As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved:
"There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. ... For, if the thermalization is actually doing anything ... then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after."
The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that
"inflation isn't falsifiable, it's falsified. ... BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye."
A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them:
"Not only is bad inflation more likely than good inflation, but no inflation is more likely than either ... Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation ... Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol power!"
Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite.
Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura, and by Andrei Linde, saying that
"cosmic inflation is on a stronger footing than ever before".
See also
Notes
References
Sources
External links
Was Cosmic Inflation The 'Bang' Of The Big Bang?, by Alan Guth, 1997
update 2004 by Andrew Liddle
The Growth of Inflation Symmetry, December 2004
Guth's logbook showing the original idea
WMAP Bolsters Case for Cosmic Inflation, March 2006
NASA March 2006 WMAP press release
Max Tegmark's Our Mathematical Universe (2014), "Chapter 5: Inflation"
Physical cosmology
Concepts in astronomy
Astronomical events
1980 in science |
5387 | https://en.wikipedia.org/wiki/Condensed%20matter%20physics | Condensed matter physics | Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.
Etymology
According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.
References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'".
History
Classical physics
One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.
In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Later, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively.
Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.
In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas."
Advent of quantum mechanics
Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and improved its description of the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.
In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered a voltage developed across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current. This phenomenon arising due to the nature of charge carriers in the conductor came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation for the quantum Hall effect discovered half a century later.
Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.
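In its simplest form, with spins s_i = ±1 on the sites of a lattice, the Ising energy is conventionally written as
H = -J \sum_{\langle ij \rangle} s_i s_j - h \sum_i s_i,
where the first sum runs over nearest-neighbour pairs, a positive exchange constant J favours aligned spins, and h is an external magnetic field; the one-dimensional chain shows no spontaneous magnetization at any nonzero temperature, whereas the two-dimensional model, solved exactly by Lars Onsager in 1944, orders below a finite critical temperature.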
Modern many-body physics
The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually, in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.
The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.
The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e^2/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called the Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was now a rational multiple of the constant e^2/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded leading to the discovery of topological insulators.
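Written out explicitly, with e the elementary charge and h the Planck constant, the quantized Hall conductance takes the form
\sigma_{xy} = \nu \frac{e^2}{h},
where the filling factor ν is an integer in the integer quantum Hall effect and a rational fraction (such as 1/3) in the fractional quantum Hall effect.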
In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.
In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.
In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.
Theoretical
Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.
Emergence
Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity.
Electronic theory of solids
The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.
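In schematic form, with n the electron density, τ the mean time between collisions and m the electron mass, the Drude model gives the dc conductivity and the Wiedemann–Franz ratio
\sigma = \frac{n e^2 \tau}{m}, \qquad \frac{\kappa}{\sigma T} = L,
where the Lorenz number L is approximately the same for many metals, while Bloch's theorem states that eigenstates in a periodic potential can be written as ψ_k(r) = e^{i k · r} u_k(r), with u_k sharing the periodicity of the lattice.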
Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids.
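In the Kohn–Sham formulation, written here in atomic units with conventional symbols, the total energy is a functional of the electron density n(r),
E[n] = T_s[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, d^3 r + \frac{1}{2} \int \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r\, d^3 r' + E_{xc}[n],
where T_s is the kinetic energy of a non-interacting reference system, the middle terms are the external and Hartree (electrostatic) energies, and all remaining many-body effects are gathered into the exchange–correlation functional E_{xc}, which must be approximated in practice.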
Symmetry breaking
Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.
Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.
Phase transition
Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules.
In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transition is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.
Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties such as correlation length, specific heat, and magnetic susceptibility diverge. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.
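Near the critical point the divergences follow power laws in the reduced temperature t = (T − T_c)/T_c, written in the standard notation as
\xi \sim |t|^{-\nu}, \qquad C \sim |t|^{-\alpha}, \qquad \chi \sim |t|^{-\gamma},
where the critical exponents ν, α and γ take the same values for broad classes of physically different systems, a universality explained by the renormalization group methods described below.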
The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed.
Near the critical point, the fluctuations happen over broad range of size scales while the feature of the whole system is scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects into the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transition.
Experimental
Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measuring transport via thermal and heat conduction.
Scattering
Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.
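The link between probe energy and length scale follows from the photon relation λ = hc/E, which in convenient units reads
\lambda \approx \frac{1240\ \mathrm{eV \cdot nm}}{E},
so a photon of about 1 eV has a wavelength of roughly a micrometre, while a 10 keV X-ray photon has a wavelength of about 0.12 nm, comparable to interatomic spacings.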
Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.
External magnetic fields
In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. The study of quantum oscillations is another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimentally testing various theoretical predictions such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect.
Nuclear spectroscopy
The local structure, the structure of the nearest neighbour atoms, of condensed matter can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific and radioactive nuclei, the nucleus becomes the probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable for studying defects, diffusion, phase changes, and magnetism. Common methods include NMR, Mössbauer spectroscopy, and perturbed angular correlation (PAC). PAC in particular is ideal for the study of phase changes at extreme temperatures above 2000 °C, because the method itself has no temperature dependence.
Cold atomic gases
Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.
In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.
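For an ideal Bose gas, condensation sets in when the phase-space density reaches a critical value, conventionally written as
n \lambda_{dB}^3 \gtrsim 2.612, \qquad \lambda_{dB} = \frac{h}{\sqrt{2 \pi m k_B T}},
that is, when the thermal de Broglie wavelength λ_dB becomes comparable to the spacing between atoms; for dilute alkali gases this requires temperatures in the nanokelvin to microkelvin range.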
Applications
Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, and several phenomena studied in the context of nanotechnology. Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Molecular machines, for example, were developed by the Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines, such as a molecular car, a molecular windmill and many more.
In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons from fractional quantum Hall effect states.
Condensed matter physics also has important uses for biomedicine, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.
See also
Notes
References
Further reading
Anderson, Philip W. (2018). Basic Notions of Condensed Matter Physics. CRC Press.
Girvin, Steven M.; Yang, Kun (2019). Modern Condensed Matter Physics. Cambridge University Press.
Coleman, Piers (2015). Introduction to Many-Body Physics. Cambridge University Press.
P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics. Cambridge University Press, 1st edition.
Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory. Cambridge University Press.
Michael P. Marder (2010). Condensed Matter Physics, second edition. John Wiley and Sons.
Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics. Oxford University Press.
External links
Materials science |
5388 | https://en.wikipedia.org/wiki/Cultural%20anthropology | Cultural anthropology | Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The term sociocultural anthropology includes both cultural and social anthropology traditions.
Anthropologists have pointed out that through culture, people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances).
Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist spending an extended period of time at the research location), interviews, and surveys.
History
The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the minds of not only Freud but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others". The relative status of various humans, some of whom had modern advanced technologies such as engines and telegraphs while others lacked anything but face-to-face communication and still lived a Paleolithic lifestyle, was of interest to the first generation of cultural anthropologists.
Theoretical foundations
The concept of culture
One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor who writes on the first page of his 1871 book: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture.
According to Kay Milton, former director of anthropology research at Queen's University Belfast, culture can be general or specific: it can be something applied to all human beings, or it can be specific to a certain group of people, such as African American culture or Irish American culture. Specific cultures are structured systems, meaning they are organized in particular ways, and adding or removing any element may disrupt the system.
The critique of evolutionism
Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices. A central question was how to account for similar beliefs and practices appearing in widely separated societies. In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused".
Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (See also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized.
20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion were often given different meanings and functions from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to understanding people living in a globalized world, how that world was shaped by the actions of extra-European nations, and thereby highlight the role of ethics in modern anthropology.
Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which they live or lived.
Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, the number of examples of people skipping stages, such as going from hunter-gatherers to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved.
Cultural relativism
Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students. Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed. Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism.
Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies.
Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography: basically, they advocated living with people of another culture for an extended period of time, so that they could learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation."
Theoretical approaches
Actor–network theory
Cultural materialism
Culture theory
Feminist anthropology
Functionalism
Symbolic and interpretive anthropology
Political economy in anthropology
Practice theory
Structuralism
Post-structuralism
Systems theory in anthropology
Comparison with social anthropology
The rubric cultural anthropology is generally applied to ethnographic works that are holistic in approach, are oriented to the ways in which culture affects individual experience, or aim to provide a rounded view of the knowledge, customs, and institutions of a people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion, give analytical priority to the organizational bases of social life, and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry.
Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France.
Foundational thinkers
Lewis Henry Morgan
Lewis Henry Morgan (1818–1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from savagery, to barbarism, to civilization. Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale.
Franz Boas, founder of the modern discipline
Franz Boas (1858–1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature.
Influenced by the German tradition, Boas argued that the world was full of distinct cultures, rather than societies whose evolution could be measured by how much or how little "civilization" they had. He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible.
In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian anthropology, dividing the discipline into the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaeological anthropology (archaeology). Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture.
Kroeber, Mead, and Benedict
Boas used his positions at Columbia University and the American Museum of Natural History (AMNH) to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir, and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages.
The publication of Alfred Kroeber's textbook Anthropology (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the 'Culture and Personality' studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up.
Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH.
Wolf, Sahlins, Mintz, and political economy
In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche—an approach popularized by Marvin Harris.
Economic anthropology as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton challenged standard neoclassical economics to take account of cultural and social factors, and brought Marxian analysis into anthropological study. In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis.
In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance.
Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan.
Geertz, Schneider, and interpretive anthropology
Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz, for instance, took culture to be the "webs of significance" in which humans are suspended, and its analysis to be an interpretive science in search of meaning rather than an experimental science in search of law.
Geertz's interpretive method involved what he called "thick description". The cultural symbols of rituals, political and economic action, and of kinship are "read" by the anthropologist as if they were a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture into the "experience-distant" theoretical concepts of the anthropologist. These interpretations must then be reflected back to their originators, and their adequacy as translations fine-tuned iteratively, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies.
Prominent British symbolic anthropologists include Victor Turner and Mary Douglas.
The post-modern turn
In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on the ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies.
Socio-cultural anthropology subfields
Anthropology of art
Cognitive anthropology
Anthropology of development
Disability anthropology
Ecological anthropology
Economic anthropology
Feminist anthropology and anthropology of gender and sexuality
Ethnohistory and historical anthropology
Kinship and family
Legal anthropology
Multimodal anthropology
Media anthropology
Medical anthropology
Political anthropology
Political economy in anthropology
Psychological anthropology
Public anthropology
Anthropology of religion
Cyborg anthropology
Transpersonal anthropology
Urban anthropology
Visual anthropology
Methods
Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others—usually missionaries, traders, explorers, or colonial officials—earning them the moniker of "arm-chair anthropologists".
Participant observation
Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town. There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior.
Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and their subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist.
Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, they will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding.
The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions they are trying to answer. In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision.
One common criticism of participant observation is its lack of objectivity. Because each anthropologist has their own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what they will eventually write about a culture, because each researcher is influenced by their own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead.
Participant observation has also raised ethical questions, since an anthropologist is in control of what they report about a culture. In terms of representation, an anthropologist has greater power than their subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence.
Ethnography
In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group.
Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research.
Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, the students of Malinowski and A.R. Radcliffe-Brown were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements.
In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics).
American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say.
Cross-cultural comparison
One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not.
The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world.
Comparison across cultures includes the industrialized (or de-industrialized) West, although the more traditional standard cross-cultural sample consists of small-scale societies.
Multi-sited ethnography
Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus.
A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities.
Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing", such as a particular commodity, as it is transported through the networks of global capitalism.
Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft.
Sociocultural anthropologists have increasingly turned their investigative eye on to "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack-den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees.
Topics
Kinship and family
Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another and how those relationships operate within and define social organization.
Research in kinship studies often crosses over into different anthropological subfields, including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell. Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man is married to two or more women. The Nyar of India and the Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo against marriage within sibling and parent-child relationships. It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit.
There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. A child would have multiple biological mothers if it is born of one woman and then breastfed by another, and multiple biological fathers if the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen.
Late twentieth-century shifts in interest
In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context". The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and the individual experiences of those within it. This shift was progressed further by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the focus that had traditionally been paid to largely "foreign", non-Western communities.
Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations. At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation, and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often substituted for a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival. These anthropological findings, according to Third World feminism, cannot see gender, racial, or class differences as separate entities, and instead must acknowledge that they interact together to produce unique individual experiences.
Rise of reproductive anthropology
Kinship studies have also experienced a rise in interest in reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of 'kinning', namely, kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged that there can be the possibility of three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system. There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF specifically, there have been many questions of embryonic value and the status of life, particularly as it relates to the manufacturing of stem cells, testing, and research.
Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies.
Critiques of kinship studies
Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, at its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book A Critique of the Study of Kinship that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them". However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish anthropologist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts, and can only be Westernized when conflated with English concepts such as "parent" and "sibling".
A more recent critique of kinship studies is its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In Critical Kinship Studies, social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices.
Institutional anthropology
The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit businesses, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government.
The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems.
The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed.
In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did.
Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging.
See also
References
External links
Official website of Human Relations Area Files (HRAF) based at Yale University
A Basic Guide to Cross-Cultural Research from HRAF |
City

A city is a human settlement of a notable size. It can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution.
Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources.
Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens, Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj.
Meaning
A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory.
National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000, and St Davids, with a population of 1,841.) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas.
The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations.
The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison".
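The criterion above can, in principle, be evaluated directly from gridded population data. The following is a minimal illustrative sketch in Python, not the official European Commission or UN implementation; the sample grid values, the 4-neighbour notion of contiguity, and the function name urban_centres are assumptions made purely for this example.

# Minimal sketch (assumed, simplified) of the "degree of urbanisation" city
# criterion: contiguous 1 km^2 grid cells each exceeding 1,500 inhabitants,
# whose combined population reaches at least 50,000.
from collections import deque

DENSITY_THRESHOLD = 1500      # inhabitants per square kilometre per cell
POPULATION_THRESHOLD = 50000  # combined population of a contiguous cluster

def urban_centres(grid):
    """Return total populations of cell clusters that qualify as urban centres.

    `grid` is a 2D list of inhabitants per 1 km^2 cell. Cells are treated as
    contiguous if they share an edge (4-neighbourhood), one common convention.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centres = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or grid[r][c] <= DENSITY_THRESHOLD:
                continue
            # Flood-fill one contiguous cluster of dense cells.
            queue, total = deque([(r, c)]), 0
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                total += grid[y][x]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and grid[ny][nx] > DENSITY_THRESHOLD):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if total >= POPULATION_THRESHOLD:
                centres.append(total)
    return centres

# Example: a 3x3 grid whose dense cluster holds 52,000 people in total.
example = [
    [2000, 1800, 100],
    [1600, 46600, 200],
    [300, 900, 1200],
]
print(urban_centres(example))  # -> [52000]

Real implementations work on harmonised 1 km² population grids and apply additional rules that this sketch deliberately omits; it is intended only to make the stated thresholds concrete.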
Etymology
The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis.
In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name').
Geography
Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth.
Site
Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river.
Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases, such as mining towns that play a vital role in long-distance trade, are cities disconnected from the countryside that feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal, mutually reachable locations.
Center
The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district.
Public space
Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks.
Internal structure
The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning.
In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible.
A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean.
Urban areas
The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary.
Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.)
History
The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization.
The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation.
Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.
Ancient times
Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period.
In the fourth and third millennium BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations.
Among the early Old World cities, Mohenjo-Daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms.
The ancient Egyptian cities known to archaeologists from physical remains are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna, built by Akhenaten and later abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly elaborate housing available for the higher classes.
In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities governed by kings and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz.
In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities, and with them brought its principles of urban architecture, design, and society.
In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited: Mexico City stands on the site of Tenochtitlan; continuously inhabited Pueblos such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos lie close to modern urban areas in New Mexico; and Lima lies near ancient Peruvian sites such as Pachacamac.
Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class, but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh, the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao.
Middle Ages
In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453.
In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns, having won self-governance from their local lords or having been granted it by the emperor, who placed them under his immediate protection. By 1480, these cities, insofar as they remained part of the empire, had become part of the Imperial Estates, governing the empire together with the emperor through the Imperial Diet.
By the 13th and 14th centuries, some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan.
In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km2 and possibly supporting up to one million people.
Early modern
In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small.
During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were subject to various laws regarding administration, finances, and urbanism.
Industrial age
The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas.
Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.
Post-industrial age
In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer.
Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites.
Urbanization
Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions, the urban population began its unprecedented growth, both through migration and demographic expansion. In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents.
Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that, for the first time, more than half of the world's population lived in cities.
Latin America is the most urbanized region, with four-fifths of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China, and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South", but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa.
Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions.
Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground.
Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels.
Government
The local government of cities takes different forms including, most prominently, the municipality (especially in England and in former British colonies such as the United States and India; legally, the municipal corporation; municipio in Spain and Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy).
The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city.
Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas.
Municipal officials may be appointed from a higher level of government or elected locally.
Municipal services
Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968.
Finance
The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings.
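To make the tax increment financing mechanism described above concrete, the minimal sketch below works through the basic arithmetic with entirely hypothetical figures and a hypothetical helper function; real arrangements also involve interest, discounting, administrative costs, and legal constraints that are not modeled here.

```python
# Illustrative sketch of tax increment financing (TIF) arithmetic.
# All figures and names are hypothetical; interest and discounting are ignored.

def tif_years_to_repay(baseline_value, projected_value, tax_rate, bond_principal):
    """Return the number of years of tax *increment* needed to repay a bond.

    baseline_value  -- assessed property value in the district before development
    projected_value -- assessed value expected after the project is built
    tax_rate        -- annual property tax rate (e.g. 0.02 for 2%)
    bond_principal  -- amount borrowed up front to finance the project
    """
    baseline_revenue = baseline_value * tax_rate            # keeps flowing to the general budget
    projected_revenue = projected_value * tax_rate
    annual_increment = projected_revenue - baseline_revenue  # the portion pledged to repay the bond
    if annual_increment <= 0:
        raise ValueError("the project must raise assessed value for TIF to work")
    return bond_principal / annual_increment

# Example: a district assessed at $50m is expected to be worth $120m after development.
years = tif_years_to_repay(50_000_000, 120_000_000, 0.02, 20_000_000)
print(f"Roughly {years:.1f} years of increment to repay the bond (ignoring interest).")
```

The key design point of TIF is visible in the arithmetic: only the increment above the pre-development baseline services the debt, so the scheme depends entirely on the project actually raising assessed values.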
Governance
Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide have led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally governs, toward a theory of outside economic control, widely associated in academia with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, industry is deregulated, and corporations gain the status of governing actors, as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners.
The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations.
Urban planning
Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions.
Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation.
The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems.
Society
Social structure
Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods.
Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production.
Economics
Historically, cities have relied on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market.
As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism.
In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables the sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects.
Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration.
According to a scientific model of cities by Professor Geoffrey West, with the doubling of a city's size, salaries per capita will generally increase by 15%.
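The scaling claim above can be made concrete with the power-law form commonly used in the urban-scaling literature, Y = Y0 * N**beta, where N is population and Y an aggregate socioeconomic output such as total wages. The sketch below is illustrative only: the exponent is chosen so that the per-capita gain per doubling matches the 15% figure quoted above, and published estimates of the exponent vary.

```python
# Sketch of superlinear urban scaling: Y = Y0 * N**beta, so the per-capita
# quantity Y/N scales as N**(beta - 1), and doubling the population multiplies
# per-capita output by 2**(beta - 1).
import math

def per_capita_gain_per_doubling(beta: float) -> float:
    """Fractional per-capita gain when the city's population doubles."""
    return 2 ** (beta - 1) - 1

# Illustrative exponent chosen so the gain matches the ~15% figure quoted above.
beta = 1 + math.log2(1.15)          # roughly 1.20
print(f"beta = {beta:.2f}, per-capita gain per doubling = "
      f"{per_capita_gain_per_doubling(beta):.1%}")
```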
Culture and communications
Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change.
Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful.
Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; attract businesses, investors, residents, and tourists; and to create shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city.
Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, hosted the Summer Olympics in 2024.
Warfare
Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities.
Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside.
During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space.
Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "counter-value" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces.
Climate change
Infrastructure
Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private.
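A minimal sketch, using arbitrary illustrative numbers, of why high fixed costs and low marginal costs produce falling average costs and hence a natural monopoly: duplicating the network raises the cost per unit delivered.

```python
# Sketch of natural-monopoly cost structure: cost(q) = FIXED + MARGINAL * q,
# so average cost falls as output rises, and one network serving all demand
# is cheaper per unit than two parallel networks. Figures are illustrative only.

FIXED = 1_000_000_000   # e.g. up-front cost of laying a city-wide network
MARGINAL = 0.50         # cost of delivering one additional unit

def average_cost(quantity: float) -> float:
    """Cost per unit when one network serves the given quantity."""
    return (FIXED + MARGINAL * quantity) / quantity

demand = 2_000_000_000                     # total units demanded city-wide
one_network = average_cost(demand)
two_networks = average_cost(demand / 2)    # each duplicate network serves half

print(f"one network:  {one_network:.2f} per unit")
print(f"two networks: {two_networks:.2f} per unit (duplication raises unit cost)")
```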
Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already.
Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance.
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives.
Utilities
Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace.
Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide.
Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications.
Transportation
Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas.
City streets historically were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, especially streetcars, enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown.
Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks.
The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. The economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia.
Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic.
Housing
The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity.
Homeownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home.
Ecology
Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions.
Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. In North America, however, larger animals such as coyotes and white-tailed deer do roam urban and suburban areas.
Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in the comparable wilderness.
Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C and at times 5–10 °C differences have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside.
Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it).
One of the main methods of improving urban ecology is to include more urban green spaces in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city.
A study published in the journal Scientific Reports in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and 59 percent more likely to be in good health than those who had no exposure. The study used data from almost 20,000 people in the UK. Benefits increased with up to 300 minutes of exposure per week and applied to men and women of all ages, across different ethnicities and socioeconomic statuses, and even to those with long-term illnesses and disabilities. People who did not get at least two hours, even if they surpassed an hour per week, did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature, and many doctors already give nature prescriptions to their patients. The study did not count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit."
World city system
As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media.
Global city
A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity.
Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems.
Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities.
Large cities show a deep divide between populations at the two ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including excessive overtime, low wages, and unsafe workplaces.
Transnational activity
Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels.
New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes.
Global governance
Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, the Asian Network of Major Cities 21, the Federation of Canadian Municipalities, the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance.
Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network.
Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. South Africa has one of the highest rates of protest in the world; in Pretoria, for example, some 5,000 people took part in a rally to advocate for wages that keep pace with living costs.
United Nations System
The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization.
The Habitat I conference in 1976 adopted the "Vancouver Declaration on Human Settlements" which identifies urban management as a fundamental aspect of development and establishes various principles for maintaining urban habitats.
Citing the Vancouver Declaration, the UN General Assembly in December 1977 authorized the United Nations Commission on Human Settlements and the HABITAT Centre for Human Settlements, intended to coordinate UN activities related to housing and settlements.
The 1992 Earth Summit in Rio de Janeiro resulted in a set of international agreements including Agenda 21 which establishes principles and plans for sustainable development.
The Habitat II conference in 1996 called for cities to play a leading role in this program, which subsequently advanced the Millennium Development Goals and Sustainable Development Goals.
In January 2002 the UN Commission on Human Settlements became an umbrella agency called the United Nations Human Settlements Programme or UN-Habitat, a member of the United Nations Development Group.
The Habitat III conference of 2016 focused on implementing these goals under the banner of a "New Urban Agenda". The four mechanisms envisioned for effecting the New Urban Agenda are (1) national policies promoting integrated sustainable development, (2) stronger urban governance, (3) long-term integrated urban and territorial planning, and (4) effective financing frameworks. Just before this conference, the European Union concurrently approved an "Urban Agenda for the European Union" known as the Pact of Amsterdam.
UN-Habitat coordinates the U.N. urban agenda, working with the UN Environmental Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank.
The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding.
Representation in culture
Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk.
Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies.
Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967).
Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.
See also
Lists of cities
List of adjectivals and demonyms for cities
Lost city
Metropolis
Compact city
Megacity
Settlement hierarchy
Urbanization
Notes
References
Further reading
Berger, Alan S., The City: Urban Communities and Their Problems, Dubuque, Iowa : William C. Brown, 1978.
Chandler, T. Four Thousand Years of Urban Growth: An Historical Census. Lewiston, NY: Edwin Mellen Press, 1987.
Geddes, Patrick, City Development (1904)
Kemp, Roger L. Managing America's Cities: A Handbook for Local Government Productivity, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London, 2007.
Kemp, Roger L. How American Governments Work: A Handbook of City, County, Regional, State, and Federal Operations, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London.
Kemp, Roger L. "City and Gown Relations: A Handbook of Best Practices", McFarland and Company, Inc., Publisher, Jefferson, North Carolina, US, and London, 2013.
Monti, Daniel J. Jr., The American City: A Social and Cultural History. Oxford, England and Malden, Massachusetts: Blackwell Publishers, 1999. 391 pp.
Reader, John (2005) Cities. Vintage, New York.
Robson, W.A., and Regan, D.E., ed., Great Cities of the World, (3d ed., 2 vol., 1972)
Smethurst, Paul (2015). The Bicycle – Towards a Global History. Palgrave Macmillan.
Smith, L. Monica (2020) Cities: The First 6,000 Years. Penguin Books.
Thernstrom, S., and Sennett, R., ed., Nineteenth-Century Cities (1969)
Toynbee, Arnold J. (ed), Cities of Destiny, New York: McGraw-Hill, 1967. Pan historical/geographical essays, many images. Starts with "Athens", ends with "The Coming World City-Ecumenopolis".
Weber, Max, The City, 1921. (tr. 1958)
External links
World Urbanization Prospects, Website of the United Nations Population Division (archived 10 July 2017)
Urban population (% of total) – World Bank website based on UN data.
Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data.
Cities
Populated places by type
Types of populated places
Urban geography |
5399 | https://en.wikipedia.org/wiki/Colorado | Colorado | Colorado is a state in the Mountain West sub-region of the Western United States. It encompasses most of the Southern Rocky Mountains, as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. Colorado is the eighth most extensive and 21st most populous U.S. state. The United States Census Bureau estimated the population of Colorado at 5,839,926 as of July 1, 2022, a 1.15% increase since the 2020 United States census.
The region has been inhabited by Native Americans and their ancestors for at least 13,500 years and possibly much longer. The eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. In 1848, much of the region was annexed to the United States with the Treaty of Guadalupe Hidalgo. The Pike's Peak Gold Rush of 1858–1862 created an influx of settlers. On February 28, 1861, U.S. President James Buchanan signed an act creating the Territory of Colorado, and on August 1, 1876, President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. The Spanish adjective "colorado" means "colored red" or "ruddy". Colorado is nicknamed the "Centennial State" because it became a state one century (and four weeks) after the signing of the United States Declaration of Independence.
Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is one of the Mountain States and is often considered to be part of the southwestern United States. The high plains of Colorado may be considered a part of the midwestern United States.
Denver is the capital, the most populous city, and the center of the Front Range Urban Corridor. Colorado Springs is the second most populous city. Residents of the state are known as Coloradans, although the antiquated "Coloradoan" is occasionally used. Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry, and tourism economies are expected to be heavily affected by climate change.
History
The region that is today the State of Colorado has been inhabited by Native Americans and their Paleoamerican ancestors for at least 13,500 years and possibly more than 37,000 years. The eastern edge of the Rocky Mountains was a major migration route that was important to the spread of early peoples throughout the Americas. The Lindenmeier site in Larimer County contains artifacts dating from approximately 8720 BCE. The Ancient Pueblo peoples lived in the valleys and mesas of the Colorado Plateau. The Ute Nation inhabited the mountain valleys of the Southern Rocky Mountains and the Western Rocky Mountains, even as far east as the Front Range of the present day. The Apache and the Comanche also inhabited the Eastern and Southeastern parts of the state. In the 17th century, the Arapaho and Cheyenne moved west from the Great Lakes region to hunt across the High Plains of Colorado and Wyoming.
The Spanish Empire claimed Colorado as part of its New Mexico province before U.S. involvement in the region. The U.S. acquired a territorial claim to the eastern Rocky Mountains with the Louisiana Purchase from France in 1803. This U.S. claim conflicted with the claim by Spain to the upper Arkansas River Basin as the exclusive trading zone of its colony of Santa Fe de Nuevo México. In 1806, Zebulon Pike led a U.S. Army reconnaissance expedition into the disputed region. Pike and his troops were arrested by Spanish cavalrymen in the San Luis Valley the following February, taken to Chihuahua, and expelled from Mexico the following July.
The U.S. relinquished its claim to all land south and west of the Arkansas River and south of the 42nd parallel north and west of the 100th meridian west as part of its purchase of Florida from Spain with the Adams–Onís Treaty of 1819. The treaty took effect on February 22, 1821. Having settled its border with Spain, the U.S. admitted the southeastern portion of the Territory of Missouri to the Union as the state of Missouri on August 10, 1821. The remainder of Missouri Territory, including what would become northeastern Colorado, became an unorganized territory and remained so for 33 years over the question of slavery. After 11 years of war, Spain finally recognized the independence of Mexico with the Treaty of Córdoba signed on August 24, 1821. Mexico eventually ratified the Adams–Onís Treaty in 1831. The Texian Revolt of 1835–36 fomented a dispute between the U.S. and Mexico which eventually erupted into the Mexican–American War in 1846. Mexico surrendered its northern territory to the U.S. with the Treaty of Guadalupe Hidalgo after the war in 1848; this included much of the western and southern areas of the current state of Colorado.
Most American settlers traveling overland west to the Oregon Country, the new goldfields of California, or the new Mormon settlements of the State of Deseret in the Salt Lake Valley, avoided the rugged Southern Rocky Mountains, and instead followed the North Platte River and Sweetwater River to South Pass (Wyoming), the lowest crossing of the Continental Divide between the Southern Rocky Mountains and the Central Rocky Mountains. In 1849, the Mormons of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government, because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories, the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement.
In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free soil and pro-slavery factions.
The gold seekers organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States, which was embroiled in the debate over slavery. The election of Abraham Lincoln as President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory.
Territory act
Thirty days later on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. In 1776, Spanish priest Silvestre Vélez de Escalante recorded that Native Americans in the area knew the river as el Rio Colorado for the red-brown silt that the river carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River.
On April 12, 1861, South Carolina artillery opened fire on Fort Sumter to start the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause.
In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10. The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to seize ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans. On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States.
In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his troops to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre.
In the midst and aftermath of the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First transcontinental railroad. The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year.
Statehood
The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the Centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state and earning it the moniker "Centennial State".
The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first one by a popular vote (of Colorado men). The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture industry developed into a major industry in Colorado. This period became known locally as the Carnation Gold Rush.
Twentieth and twenty-first centuries
Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners strike and the Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. Both the 1913–1914 Coalfield War and the Denver streetcar strike of 1920 resulted in federal troops intervening to end the violence. The 1927–28 Colorado coal strike ultimately won miners a dollar-a-day increase in wages, but during the strike the Columbine Mine massacre left six strikers dead after a confrontation with the Colorado Rangers. In a separate incident in Trinidad, the mayor was accused of deputizing members of the Ku Klux Klan against the striking workers. More than 5,000 Colorado miners, many of them immigrants, are estimated to have died in accidents since records were first formally collected following an 1884 accident in Crested Butte that killed 59.
In 1924, the Ku Klux Klan Colorado Realm achieved dominance in Colorado politics. With peak membership levels, the Second Klan exerted significant control over both the local and state Democratic and Republican parties, particularly in the governor's office and the city governments of Denver, Cañon City, and Durango. A particularly strong element of the Klan controlled the Denver Police. Cross burnings became semi-regular occurrences in cities such as Florence and Pueblo. The Klan targeted African-Americans, Catholics, Eastern European immigrants, and other non-White Protestant groups. Efforts by non-Klan lawmen and lawyers, including Philip Van Cise, led to a rapid decline in the organization's power, with membership waning significantly by the end of the 1920s.
Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the U.S. census in 1930, the population of Colorado first exceeded one million residents. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortune. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009.
On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, which resulted in the significant plutonium contamination of surrounding populated areas.
From the 1940s to the 1970s, many protest movements gained momentum in Colorado, predominantly in Denver. These included the Chicano Movement, a civil rights and social movement of Mexican Americans emphasizing a Chicano identity that is widely considered to have begun in Denver. The National Chicano Liberation Youth Conference was held in Colorado in March 1969.
In 1967, Colorado became the first state to loosen restrictions on abortion when Governor John Love signed a law allowing abortions in cases of rape, incest, or threats to the woman's mental or physical health. Many states followed Colorado's lead in loosening abortion laws in the 1960s and 1970s.
Since the late 1990s, Colorado has been the site of multiple major mass shootings, including the Columbine High School massacre in 1999, in which two gunmen killed 12 students and one teacher before committing suicide; the attack made international news and has since inspired numerous copycat incidents. On July 20, 2012, a gunman killed 12 people in a movie theater in Aurora. The state responded with tighter restrictions on firearms, including a limit on magazine capacity. On March 22, 2021, a gunman killed 10 people, including a police officer, in a King Soopers supermarket in Boulder. In an instance of anti-LGBT violence, a gunman killed five people at a nightclub in Colorado Springs during the night of November 19–20, 2022.
Four warships of the U.S. Navy have been named the USS Colorado. The first USS Colorado was named for the Colorado River and served in the Civil War and later the Asiatic Squadron, where it was attacked during the 1871 Korean Expedition. The later three ships were named in honor of the state, including an armored cruiser and the battleship USS Colorado, the latter of which was the lead ship of her class and served in World War II in the Pacific beginning in 1941. At the time of the attack on Pearl Harbor, the battleship USS Colorado was located at the naval base in San Diego, California, and thus went unscathed. The most recent vessel to bear the name USS Colorado is Virginia-class submarine USS Colorado (SSN-788), which was commissioned in 2018.
Geography
Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons. In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude, and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian). After years of government surveys, the borders of Colorado were officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states that have their borders defined solely by straight boundary lines with no natural features. The southwest corner of Colorado is the Four Corners Monument at 36°59′56″N, 109°2′43″W. The Four Corners Monument, located at the place where Colorado, New Mexico, Arizona, and Utah meet, is the only place in the United States where four states meet.
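The two sets of longitudes quoted above can be reconciled with simple arithmetic: the enabling legislation measured longitude from the Washington meridian, and subtracting 25° from the 102°02′48″ W figure implies a Washington meridian of 77°02′48″ W of Greenwich. The following minimal sketch performs that conversion using only the figures quoted in this section; the helper names are illustrative, not drawn from any official source.

```python
# Convert the Washington-meridian longitudes used in Colorado's enabling
# legislation to Greenwich-referenced longitudes, using the offset implied
# by the figures quoted above (77 deg 02' 48" W). Names are illustrative.

def dms_to_deg(d: int, m: int = 0, s: int = 0) -> float:
    """Degrees/minutes/seconds to decimal degrees."""
    return d + m / 60 + s / 3600

# Washington meridian west of Greenwich, as implied by 102°02'48"W minus 25°.
WASHINGTON_MERIDIAN_W = dms_to_deg(102, 2, 48) - 25.0   # ~77.0467 deg

def washington_to_greenwich(lon_west_of_washington: float) -> float:
    """Longitude west of the Washington meridian -> degrees west of Greenwich."""
    return lon_west_of_washington + WASHINGTON_MERIDIAN_W

# The territory's east and west boundaries, 25°W and 32°W of Washington:
east = washington_to_greenwich(25.0)   # ~102.0467 deg W (102°02'48"W)
west = washington_to_greenwich(32.0)   # ~109.0467 deg W (109°02'48"W)
print(f"{east:.4f} W  {west:.4f} W")
```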
Plains
Approximately half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly . The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages annually.
Eastern Colorado is presently mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops. Most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams. Subterranean water is generally accessed through artesian wells. Heavy usage of these wells for irrigation purposes caused underground water reserves to decline in the region. Eastern Colorado also hosts a considerable amount and range of livestock, such as cattle ranches and hog farms.
Front Range
Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms that blow in from the Pacific Ocean region by the high Rockies in the middle of Colorado. The "Front Range" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in western Colorado (which is known as "The Western Slope") are the cities of Grand Junction, Durango, and Montrose.
Mountains
To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Blue Sky, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico.
The Rocky Mountains within Colorado contain 53 true peaks with a total of 58 that are or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about in southern Colorado to about in northern Colorado. Above this tree line, only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round.
Much of the alpine snow melts by mid-August except for a few snow-capped peaks and a few small glaciers. The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the front range, contains most of the historic gold- and silver-mining districts of Colorado. Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America are all within the state.
The summit of Mount Elbert at elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest in Colorado at elevation. This point, which is the highest low elevation point of any state, is higher than the high elevation points of 18 states and the District of Columbia.
Continental Divide
The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California.
Within the interior of the Rocky Mountains are several large parks which are high broad basins. In the north, on the east side of the Continental Divide is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River.
South Central region
In south-central Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located. The northern part of the valley is the San Luis Closed Basin, an endorheic basin that helped create the Great Sand Dunes. The valley sits between the Sangre de Cristo Mountains and the San Juan Mountains. The Rio Grande drains due south into New Mexico, Texas, and Mexico. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches.
Western Slope
The Western Slope of Colorado includes the western face of the Rocky Mountains and all of the area to the western border. This area includes several terrains and climates from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west to Utah. It is less populous than the Front Range but includes a large number of national parks and monuments.
The northwestern corner of Colorado is a sparsely populated region, and it contains part of the noted Dinosaur National Monument, which not only is a paleontological area, but is also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses over into Colorado.
The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison River, Green River, and the San Juan River). The Colorado River flows through Glenwood Canyon, and then through an arid valley made up of desert from Rifle to Parachute, through the desert canyon of De Beque Canyon, and into the arid desert of Grand Valley, where the city of Grand Junction is located.
Also prominent are the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and, to the north and west of the San Juan Mountains, the Colorado Plateau.
Grand Junction, Colorado, at the confluence of the Colorado and Gunnison Rivers, is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located at the juncture of Interstate 70 and US 50, the only major highways in western Colorado. Grand Junction is also along the major railroad of the Western Slope, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction.
The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride.
Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs.
Climate
The climate of Colorado is more complex than that of states outside the Mountain States region. Unlike most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands. Mountains and surrounding valleys greatly affect the local climate. Northeast, east, and southeast Colorado are mostly high plains, while northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas.
Eastern Plains
The climate of the Eastern Plains is semi-arid (Köppen climate classification: BSk) with low humidity and moderate precipitation, usually from annually, although many areas near the rivers have a semi-humid climate. The area is known for its abundant sunshine and cool, clear nights, which give this area a large average diurnal temperature range. The difference between the highs of the days and the lows of the nights can be considerable as warmth dissipates to space during clear nights, the heat radiation not being trapped by clouds. The Front Range urban corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains.
In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, the winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring. Otherwise, winters tend to be mostly dry and cold.
In much of the region, March is the snowiest month. April and May are normally the rainiest months, while April is the wettest month overall. The Front Range cities closer to the mountains tend to be warmer in winter due to Chinook winds, which warm the area and sometimes bring temperatures of 70 °F (21 °C) or higher. The average July temperature is 55 °F (13 °C) in the morning and 90 °F (32 °C) in the afternoon. The average January temperature is 18 °F (−8 °C) in the morning and 48 °F (9 °C) in the afternoon, although variation between consecutive days can be 40 °F (22 °C).
Front Range foothills
Just west of the plains and into the foothills, there is a wide variety of climate types. Locations merely a few miles apart can experience entirely different weather depending on the topography. Most valleys have a semi-arid climate, not unlike the eastern plains, which transitions to an alpine climate at the highest elevations. Microclimates also exist in local areas that run nearly the entire spectrum of climates, including subtropical highland (Cfb/Cwb), humid subtropical (Cfa), humid continental (Dfa/Dfb), Mediterranean (Csa/Csb) and subarctic (Dfc).
Extreme weather
Extreme weather changes are common in Colorado, although a significant portion of the extreme weather occurs in the least populated areas of the state. Thunderstorms are common east of the Continental Divide in the spring and summer, yet are usually brief. Hail is a common sight in the mountains east of the Divide and across the eastern Plains, especially the northeast part of the state. Hail is the most commonly reported warm-season severe weather hazard, and occasionally causes human injuries, as well as significant property damage. The eastern Plains are subject to some of the biggest hail storms in North America. Notable examples are the severe hailstorms that hit Denver on July 11, 1990, and May 8, 2017, the latter being the costliest ever in the state.
The Eastern Plains are part of the extreme western portion of Tornado Alley; some damaging tornadoes in the Eastern Plains include the 1990 Limon F3 tornado and the 2008 Windsor EF3 tornado, which devastated a small town. Portions of the eastern Plains see especially frequent tornadoes, both those spawned from mesocyclones in supercell thunderstorms and from less intense landspouts, such as within the Denver convergence vorticity zone (DCVZ).
The Plains are also susceptible to occasional floods and particularly severe flash floods, which are caused both by thunderstorms and by the rapid melting of snow in the mountains during warm weather. Notable examples include the 1965 Denver Flood, the Big Thompson River flooding of 1976 and the 2013 Colorado floods. Hot weather is common during summers in Denver. The city's record in 1901 for the number of consecutive days above 90 °F (32 °C) was broken during the summer of 2008. The new record of 24 consecutive days surpassed the previous record by almost a week.
Much of Colorado is very dry, with the state averaging only of precipitation per year statewide. The state rarely experiences a time when some portion is not in some degree of drought. The lack of precipitation contributes to the severity of wildfires in the state, such as the Hayman Fire of 2002. Other notable fires include the Fourmile Canyon Fire of 2010, the Waldo Canyon Fire and High Park Fire of June 2012, and the Black Forest Fire of June 2013. Even these fires were exceeded in severity by the Pine Gulch Fire, Cameron Peak Fire, and East Troublesome Fire in 2020, which are now the three largest fires in Colorado history (see 2020 Colorado wildfires). The Marshall Fire, which started on December 30, 2021, while not the largest in state history, was the most destructive in terms of property loss (see Marshall Fire).
However, some of the mountainous regions of Colorado receive a huge amount of moisture from winter snowfalls. The spring melts of these snows often cause great waterflows in the Yampa River, the Colorado River, the Rio Grande, the Arkansas River, the North Platte River, and the South Platte River.
Water flowing out of the Colorado Rocky Mountains is a very significant source of water for the farms, towns, and cities of the southwestern states of New Mexico, Arizona, Utah, and Nevada, as well as Midwestern states such as Nebraska and Kansas, and the southern states of Oklahoma and Texas. A significant amount of water is also diverted for use in California; occasionally (formerly naturally and consistently), the flow of water reaches northern Mexico.
Climate change
Records
The highest official ambient air temperature ever recorded in Colorado was on July 20, 2019, at John Martin Dam. The lowest official air temperature was on February 1, 1985, at Maybell.
Extreme temperatures
Earthquakes
Despite its mountainous terrain, Colorado is relatively quiet seismically. The U.S. National Earthquake Information Center is located in Golden.
On August 22, 2011, a 5.3 magnitude earthquake occurred west-southwest of the city of Trinidad. There were no casualties and only a small amount of damage was reported. It was the second-largest earthquake in Colorado's history. A magnitude 5.7 earthquake was recorded in 1973.
In the early morning hours of August 24, 2018, four minor earthquakes rattled Colorado, ranging from magnitude 2.9 to 4.3.
Colorado has recorded 525 earthquakes since 1973, a majority of which range from 2 to 3.5 on the Richter scale.
Fauna
Extirpation of the gray wolf (Canis lupus) from Colorado through trapping and poisoning in the 1930s culminated in the last wild wolf in the state being shot in 1945. A wolf pack recolonized Moffat County in northwestern Colorado in 2019. Cattle farmers have expressed concern that a returning wolf population potentially threatens their herds. Coloradans voted to reintroduce gray wolves in 2020, with the state committing to a plan to have a population in the state by 2022 and permitting non-lethal methods of driving off wolves attacking livestock and pets.
While there is fossil evidence of Harrington's mountain goat in Colorado between at least 800,000 years ago and its extinction with other megafauna roughly 11,000 years ago, the mountain goat is not native to Colorado; it was introduced to the state between 1947 and 1972. Despite being an artificially introduced species, the state declared mountain goats a native species in 1993. In 2013, 2014, and 2019, an unknown illness killed nearly all mountain goat kids, leading to a Colorado Parks and Wildlife investigation.
The native population of pronghorn in Colorado has varied wildly over the last century, reaching a low of only 15,000 individuals during the 1960s. However, conservation efforts succeeded in bringing the stable population back up to roughly 66,000 by 2013. The population was estimated to have reached 85,000 by 2019 and has had increasingly frequent run-ins with the expanding suburban housing along the eastern Front Range. State wildlife officials suggested that landowners would need to modify fencing to allow the greater number of pronghorns to move unabated through the newly developed land. Pronghorns are most readily found in the northern and eastern portions of the state, with some populations also in the western San Juan Mountains.
Common wildlife found in the mountains of Colorado include mule deer, southwestern red squirrel, golden-mantled ground squirrel, yellow-bellied marmot, moose, American pika, and red fox, all at exceptionally high numbers, though moose are not native to the state. The foothills include deer, fox squirrel, desert cottontail, mountain cottontail, and coyote. The prairies are home to black-tailed prairie dog, the endangered swift fox, American badger, and white-tailed jackrabbit.
Counties
The State of Colorado is divided into 64 counties. Two of these counties, the City and County of Broomfield and the City and County of Denver, have consolidated city and county governments. Counties are important units of government in Colorado since there are no civil townships or other minor civil divisions.
The most populous county in Colorado is El Paso County, the home of the City of Colorado Springs. The second most populous county is the City and County of Denver, the state capital. Five of the 64 counties now have more than 500,000 residents, while 12 have fewer than 5,000 residents. The ten most populous Colorado counties are all located in the Front Range Urban Corridor. Mesa County is the most populous county on the Colorado Western Slope.
Municipalities
Colorado has 272 active incorporated municipalities, comprising 197 towns, 73 cities, and two consolidated city and county governments. At the 2020 United States census, 4,299,942 of the 5,773,714 Colorado residents (74.47%) lived in one of these 272 municipalities. Another 714,417 residents (12.37%) lived in one of the 210 census-designated places, while the remaining 759,355 residents (13.15%) lived in the many rural and mountainous areas of the state.
Colorado municipalities operate under one of five types of municipal governing authority. Colorado currently has two consolidated city and county governments, 61 home rule cities, 12 statutory cities, 35 home rule towns, 161 statutory towns, and one territorial charter municipality.
The most populous municipality is the City and County of Denver. Colorado has 12 municipalities with more than 100,000 residents, and 17 with fewer than 100 residents. The 16 most populous Colorado municipalities are all located in the Front Range Urban Corridor. The City of Grand Junction is the most populous municipality on the Colorado Western Slope. The Town of Carbonate has had no year-round population since the 1890 census due to its severe winter weather and difficult access.
Unincorporated communities
In addition to its 272 municipalities, Colorado has 210 unincorporated census-designated places (CDPs) and many other small communities. The most populous unincorporated community in Colorado is Highlands Ranch south of Denver. The seven most populous CDPs are located in the Front Range Urban Corridor. The Clifton CDP is the most populous CDP on the Colorado Western Slope.
Special districts
Colorado has more than 4,000 special districts, most with property tax authority. These districts may provide schools, law enforcement, fire protection, water, sewage, drainage, irrigation, transportation, recreation, infrastructure, cultural facilities, business support, redevelopment, or other services.
Some of these districts have the authority to levy sales tax as well as property tax and use fees. This has led to a hodgepodge of sales tax and property tax rates in Colorado. There are some street intersections in Colorado with a different sales tax rate on each corner, sometimes substantially different.
Some of the more notable Colorado districts are:
The Regional Transportation District (RTD), which affects the counties of Denver, Boulder, Jefferson, and portions of Adams, Arapahoe, Broomfield, and Douglas Counties
The Scientific and Cultural Facilities District (SCFD), a special regional tax district with physical boundaries contiguous with county boundaries of Adams, Arapahoe, Boulder, Broomfield, Denver, Douglas, and Jefferson Counties
It is a 0.1% retail sales and use tax (one penny on every $10).
According to the Colorado statute, the SCFD distributes the money to local organizations on an annual basis. These organizations must provide for the enlightenment and entertainment of the public through the production, presentation, exhibition, advancement, or preservation of art, music, theater, dance, zoology, botany, natural history, or cultural history.
As directed by statute, SCFD recipient organizations are currently divided into three "tiers", among which receipts are allocated by percentage; a worked sketch of this allocation follows the list of districts.
Tier I includes regional organizations: the Denver Art Museum, the Denver Botanic Gardens, the Denver Museum of Nature and Science, the Denver Zoo, and the Denver Center for the Performing Arts. It receives 65.5%.
Tier II currently includes 26 regional organizations. Tier II receives 21%.
Tier III has more than 280 local organizations such as small theaters, orchestras, art centers, natural history, cultural history, and community groups. Tier III organizations apply for funding from the county cultural councils via a grant process. This tier receives 13.5%.
An 11-member board of directors oversees the distributions as directed by the Colorado Revised Statutes. Seven board members are appointed by county commissioners (in Denver, the Denver City Council) and four members are appointed by the Governor of Colorado.
The Football Stadium District (FD or FTBL), approved by the voters to pay for and help build the Denver Broncos' stadium Empower Field at Mile High.
Local Improvement Districts (LID) within designated areas of Jefferson and Broomfield counties.
The Metropolitan Major League Baseball Stadium District, approved by voters to pay for and help build the Colorado Rockies' stadium Coors Field.
Regional Transportation Authority (RTA) taxes at varying rates in Basalt, Carbondale, Glenwood Springs, and Gunnison County.
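The SCFD mechanics described in the list above reduce to simple arithmetic: a 0.1% levy on taxable retail sales, with the resulting pool split 65.5%, 21%, and 13.5% among the three tiers. The following is a minimal sketch under those figures only; the function and variable names, and the $70 million receipts figure, are hypothetical and not part of any official SCFD system.

```python
# Illustrative sketch of the SCFD arithmetic described above: a 0.1%
# sales-and-use tax, with receipts split among three tiers at 65.5%,
# 21%, and 13.5%. Names and amounts are hypothetical.

SCFD_TAX_RATE = 0.001                 # 0.1% -> one penny on every $10
TIER_SHARES = {"Tier I": 0.655, "Tier II": 0.21, "Tier III": 0.135}

def scfd_tax(taxable_sales: float) -> float:
    """SCFD tax collected on a given amount of taxable retail sales."""
    return taxable_sales * SCFD_TAX_RATE

def allocate_receipts(total_receipts: float) -> dict[str, float]:
    """Split annual SCFD receipts among the three statutory tiers."""
    return {tier: total_receipts * share for tier, share in TIER_SHARES.items()}

# A $10 purchase yields one penny of SCFD tax:
assert abs(scfd_tax(10.00) - 0.01) < 1e-9

# Allocating a hypothetical $70 million of annual receipts:
print(allocate_receipts(70_000_000))
# -> {'Tier I': 45850000.0, 'Tier II': 14700000.0, 'Tier III': 9450000.0}
```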
Statistical areas
Most recently on March 6, 2020, the Office of Management and Budget defined 21 statistical areas for Colorado comprising four combined statistical areas, seven metropolitan statistical areas, and ten micropolitan statistical areas.
The most populous of the seven metropolitan statistical areas in Colorado is the 10-county Denver-Aurora-Lakewood, CO Metropolitan Statistical Area with a population of 2,963,821 at the 2020 United States census, an increase of +15.29% since the 2010 census.
The more extensive 12-county Denver-Aurora, CO Combined Statistical Area had a population of 3,623,560 at the 2020 census, an increase of +17.23% since the 2010 census.
The most populous extended metropolitan region in the Rocky Mountain region is the 18-county Front Range Urban Corridor along the northeast face of the Southern Rocky Mountains. This region, with Denver at its center, had a population of 5,055,344 at the 2020 census, an increase of +16.65% since the 2010 census.
Demographics
The United States Census Bureau estimated the population of Colorado on July 1, 2022, at 5,839,926, a 1.15% increase since the 2020 United States census.
People of Hispanic and Latino American heritage (of any race) made up 20.7% of the population. According to the 2000 census, the largest ancestry groups in Colorado are German (22%), including those of Swiss and Austrian descent, Mexican (18%), Irish (12%), and English (12%). Persons reporting German ancestry are especially numerous in the Front Range, the Rockies (west-central counties), and the eastern High Plains.
Colorado has a high proportion of Hispanic, mostly Mexican-American, citizens in Metropolitan Denver, Colorado Springs, as well as the smaller cities of Greeley and Pueblo, and elsewhere. Southern, Southwestern, and Southeastern Colorado have a large number of Hispanos, the descendants of the early settlers of colonial Spanish origin. In 1940, the U.S. Census Bureau reported Colorado's population as 8.2% Hispanic and 90.3% non-Hispanic white. The Hispanic population of Colorado has continued to grow quickly over the past decades. By 2019, Hispanics made up 22% of Colorado's population, and Non-Hispanic Whites made up 70%. Spoken English in Colorado has many Spanish idioms.
Colorado also has some large African-American communities located in Denver, in the neighborhoods of Montbello, Five Points, Whittier, and many other East Denver areas. The state has sizable numbers of Asian-Americans of Mongolian, Chinese, Filipino, Korean, Southeast Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast side of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns.
The majority of Colorado's immigrants are from Mexico, India, China, Vietnam, Korea, Germany and Canada.
There were a total of 70,331 births in Colorado in 2006 (a birth rate of 14.6 per thousand). In 2007, non-Hispanic whites were involved in 59.1% of all births. Some 14.06% of those births involved a non-Hispanic white person and someone of a different race, most often a couple including one Hispanic parent. Births where at least one Hispanic person was involved accounted for 43% of the births in Colorado. As of the 2010 census, Colorado has the seventh-highest percentage of Hispanics (20.7%) in the U.S., behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population is estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the fifth-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. In percentages, Colorado has the sixth-highest percentage of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada.
Birth data
In 2011, 46% of Colorado's population younger than the age of one were minorities, meaning that they had at least one parent who was not non-Hispanic white.
In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman. Significant contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family.
Language
English, the official language of the state, is the most commonly spoken language in Colorado. One Native American language still spoken in Colorado is the Colorado River Numic language, also known as the Ute dialect.
Religion
Major religious affiliations of the people of Colorado as of 2014 were 64% Christian, of whom there are 44% Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox. Other religious breakdowns according to the Pew Research Center were 1% Jewish, 1% Muslim, 1% Buddhist and 4% other. The religiously unaffiliated made up 29% of the population. In 2020, according to the Public Religion Research Institute, Christianity was 66% of the population. Judaism was also reported to have increased in this separate study, forming 2% of the religious landscape, while the religiously unaffiliated were reported to form 28% of the population in this separate study. In 2022, the same organization reported 61% was Christian (39% Protestant, 19% Catholic, 2% Mormon, 1% Eastern Orthodox), 2% New Age, 1% Jewish, 1% Hindu, and 34% religiously unaffiliated.
According to the Association of Religion Data Archives, the largest Christian denominations by the number of adherents in 2010 were the Catholic Church with 811,630; multi-denominational Evangelical Protestants with 229,981; and the Church of Jesus Christ of Latter-day Saints with 151,433. In 2020, the Association of Religion Data Archives determined the largest Christian denominations were Catholics (873,236), non/multi/inter-denominational Protestants (406,798), and Mormons (150,509). Among the non-Christian population, the 2020 study counted 12,500 Hindus, 7,101 Hindu Yogis, and 17,369 Buddhists.
Our Lady of Guadalupe Catholic Church was the first permanent Catholic parish in modern-day Colorado and was constructed by Spanish colonists from New Mexico in modern-day Conejos. Latin Church Catholics are served by three dioceses: the Archdiocese of Denver and the Dioceses of Colorado Springs and Pueblo.
The first members of the Church of Jesus Christ of Latter-day Saints to settle permanently in Colorado arrived from Mississippi and initially camped along the Arkansas River just east of the present-day site of Pueblo.
Health
Colorado is generally considered among the healthiest states by behavioral and healthcare researchers. Among the positive contributing factors are the state's well-known outdoor recreation opportunities and initiatives. However, there is a stratification of health metrics, with wealthier counties such as Douglas and Pitkin performing significantly better than southern, less wealthy counties such as Huerfano and Las Animas.
Obesity
According to several studies, Coloradans have the lowest rates of obesity of any state in the US. Some 24% of the population was considered medically obese, and while this was the lowest rate in the nation, the percentage had increased from 17% in 2004.
Life expectancy
According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state.
Homelessness
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 10,397 homeless people in Colorado.
Economy
Total employment (2019): 2,473,192
Number of employer establishments: 174,258
The total state product in 2015 was $318.6 billion. Median Annual Household Income in 2016 was $70,666, 8th in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation. The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay.
The federal government operates several federal facilities in the state, including NORAD (North American Aerospace Defense Command), the United States Air Force Academy, Schriever Air Force Base located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; the U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Space Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of land in Colorado amounting to 37% of the total area of the state.
In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified and is notable for its concentration on scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, and the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer of any state. Denver is an important financial center.
The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018.
Several nationally known brand names have originated in Colorado factories and laboratories. From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift packed meat of Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981.
Colorado has a flat 4.63% income tax, regardless of income level. On November 3, 2020, voters authorized an initiative to lower that income tax rate to 4.55 percent. Unlike most states, which calculate taxes based on federal adjusted gross income, Colorado taxes are based on taxable income—income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return. Many counties and cities charge their own rates, in addition to the base state rate. There are also certain county and special district taxes that may apply.
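A rough illustration of the state-tax arithmetic described above, assuming the post-2020 flat rate of 4.55% applied to federal taxable income and the 2.9% state sales-tax base; local and special-district rates, which vary, and TABOR refunds are deliberately ignored, and the function names are illustrative rather than part of any official system.

```python
# Rough sketch of Colorado's flat income tax and state sales tax as described
# above. Assumes the 4.55% rate approved in 2020 and the 2.9% state sales-tax
# base; local and special-district rates (which vary) are ignored.

STATE_INCOME_TAX_RATE = 0.0455   # flat rate, applied to federal taxable income
STATE_SALES_TAX_RATE = 0.029     # state base rate on retail sales

def colorado_income_tax(federal_taxable_income: float) -> float:
    """Flat-rate state income tax on federal taxable income."""
    return max(federal_taxable_income, 0.0) * STATE_INCOME_TAX_RATE

def state_sales_tax(purchase_amount: float) -> float:
    """State-level sales tax only; county/city/district rates would be added."""
    return purchase_amount * STATE_SALES_TAX_RATE

# Example: $60,000 of federal taxable income and a $100 retail purchase.
print(round(colorado_income_tax(60_000), 2))  # 2730.0
print(round(state_sales_tax(100.00), 2))      # 2.9
```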
Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003. The tax break was scheduled to return for the assessment year 2006, payable in 2007.
The state's unemployment rate was 4.2%.
The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action.
Agriculture
Corn is grown in the Eastern Plains of Colorado. Arid conditions and drought negatively impacted yields in 2020 and 2022.
Natural resources
Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the largest natural gas fields in the United States, and two of the largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated of oil—nearly as much oil as the entire world's proven oil reserves. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state.
Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. When Colorado and Utah dominated radium mining from 1910 to 1922, uranium and vanadium were the byproducts (giving towns like present-day Superfund site Uravan their names). During the 1940s, certain communities, including Naturita and Paradox, earned the moniker of "yellowcake towns" from their relationship with uranium mining. Uranium price increases from 2001 to 2007 prompted several companies to revive uranium mining in Colorado, but price drops and financing problems in late 2008 forced these companies to cancel or scale back their uranium-mining projects. As of 2016, there were no major uranium mining operations in the state, though plans existed to restart production.
Electricity generation
Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides the potential for geothermal power development. Much of the state is sunny and could produce solar power. Major rivers flowing from the Rocky Mountains offer hydroelectric power resources.
Culture
Arts and film
List of museums in Colorado
List of theaters in Colorado
Music of Colorado
Several film productions have been shot on location in Colorado, especially prominent Westerns like True Grit, The Searchers, and Butch Cassidy and the Sundance Kid. Several historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also several scenic highways and mountain passes that helped to feature the open road in films such as Vanishing Point, Bingo and Starman. Some Colorado landmarks have been featured in films, such as The Stanley Hotel in Dumb and Dumber and The Shining and the Sculptured House in Sleeper. Driving sequences in Furious 7 (2015) were filmed on the Pikes Peak Highway. The adult animated TV series South Park is set in the titular town in central Colorado. Additionally, the TV series Good Luck Charlie was set, but not filmed, in Denver. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado.
There are also several established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival, Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival.
Many notable writers have lived or spent extended periods in Colorado. Beat Generation writers Jack Kerouac and Neal Cassady lived in and around Denver for several years each. Irish playwright Oscar Wilde visited Colorado on his tour of the United States in 1882, writing in his 1906 Impressions of America that Leadville was "the richest city in the world. It has also got the reputation of being the roughest, and every man carries a revolver."
Cuisine
Colorado is known for its Southwest and Rocky Mountain cuisine, with Mexican restaurants found throughout the state.
Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to several national food and beverage companies, top-tier restaurants and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. Denver is known for steak, but now has a diverse culinary scene with many restaurants.
Polidori Sausage, a brand of pork products available in supermarkets, originated in Colorado in the early 20th century.
The Food & Wine Classic is held annually each June in Aspen. Aspen also has a reputation as the culinary capital of the Rocky Mountain region.
Wine and beer
Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional Vitis vinifera grapes along with wines made from cherries, peaches, plums, and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape growing regions contain the highest elevation vineyards in the United States, with most viticulture in the state practiced between above sea level. The mountain climate ensures warm summer days and cool nights. Colorado is home to two designated American Viticultural Areas of the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located. However, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County, Colorado, as one of the Top Ten wine travel destinations in the world.
Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the "Napa Valley of Beer" due to its high density of craft breweries.
Marijuana and hemp
Colorado is open to cannabis (marijuana) tourism. With the adoption of Amendment 64 in 2012, Colorado became the first state in the union to legalize marijuana for medicinal (2000), industrial (referring to hemp, 2012), and recreational (2012) use. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three quarters of 2017. The state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC.
On April 4, 2014, Senate Bill 14–184 addressing oversight of Colorado's industrial hemp program was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014.
Medicinal use
On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amends the Colorado State constitution to allow the medical use of marijuana. A patient's medical use of marijuana, within the following limits, is lawful:
(I) No more than of a usable form of marijuana; and
(II) No more than twelve marijuana plants, with six or fewer being mature, flowering plants that are producing a usable form of marijuana.
Currently, Colorado has listed "eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy". While governor, John Hickenlooper allocated about half of the state's $13 million "Medical Marijuana Program Cash Fund" to medical research in the 2014 budget. By 2018, the Medical Marijuana Program Cash Fund was the "largest pool of pot money in the state" and was used to fund programs including research into pediatric applications for controlling autism symptoms.
Recreational use
On November 6, 2012, voters amended the state constitution to protect "personal use" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014.
Sports
Colorado has teams in five major professional sports leagues, all based in the Denver metropolitan area. Colorado is the least populous state with a franchise in each of the major professional sports leagues.
The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball.
The Pikes Peak International Hill Climb is a major hill climbing motor race held on the Pikes Peak Highway.
The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship and BMW Championship.
Professional sports teams
College athletics
The following universities and colleges participate in the National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who formerly played in the Big 12 and now play in the Pac-12. They have won the 1957 and 1991 Orange Bowls, the 1995 Fiesta Bowl, and the 1996 Cotton Bowl Classic.
Transportation
Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor. It connects Grand Junction and the mountain communities with Denver and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state. Many smaller communities are connected to this network only via county roads.
Denver International Airport (DIA) is the third-busiest airport in both the United States and the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States.
Public transportation bus services are offered both intra-city and inter-city—including the Denver metro area's RTD services. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver Metropolitan Area. At last report, the RTD rail system had 170 light-rail vehicles in service on its rail network. In addition to local public transit, intercity bus service is provided by Burlington Trailways, Bustang, Express Arrow, and Greyhound Lines.
Amtrak operates two passenger rail lines in Colorado, the California Zephyr and Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad, which began in 1870 and wrote the book on mountain railroading. In 1988, the "Rio Grande" acquired the Southern Pacific Railroad, but the combined company operated under the better-known Southern Pacific name; both were controlled by their joint owner, Philip Anschutz. On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe, which formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado.
Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S. economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states.
In August 2014, Colorado began to issue driver licenses to immigrants not lawfully present in the United States who lived in Colorado. In September 2014, KCNC reported that 524 non-citizens had been issued the type of Colorado driver license normally issued to U.S. citizens living in Colorado.
Education
The first institution of higher education in the Colorado Territory was the Colorado Seminary, opened on November 16, 1864, by the Methodist Episcopal Church. The seminary closed in 1867 but reopened in 1880 as the University of Denver. In 1870, Bishop George Maxwell Randall of the Episcopal Church's Missionary District of Colorado and Parts Adjacent opened the first of what would become the Colorado University Schools, which included the Territorial School of Mines, opened in 1873 and sold to the Colorado Territory in 1874. These schools were initially run by the Episcopal Church. An 1861 territorial act called for the creation of a public university in Boulder, though it would not be until 1876 that the University of Colorado was founded. The 1876 act also renamed the Territorial School of Mines as the Colorado School of Mines. An 1870 territorial act created the Agricultural College of Colorado, which opened in 1879. The college was renamed the Colorado State College of Agriculture and Mechanic Arts in 1935, and became Colorado State University in 1957.
The first Catholic college in Colorado was the Jesuit Sacred Heart College, which was founded in New Mexico in 1877, moved to Morrison in 1884, and to Denver in 1887. The college was renamed Regis College in 1921 and Regis University in 1991. On April 1, 1924, armed students patrolled the campus after a burning cross was found, the climax of tensions between Regis College and the locally-powerful Ku Klux Klan.
Following a 1950 assessment by the Service Academy Board, it was determined that there was a need to supplement the U.S. Military and Naval Academies with a third school that would provide commissioned officers for the newly independent Air Force. On April 1, 1954, President Dwight Eisenhower signed legislation authorizing the creation of a U.S. Air Force Academy. Later that year, Colorado Springs was selected to host the new institution. From its establishment in 1955 until the construction of appropriate facilities in Colorado Springs was completed and opened in 1958, the Air Force Academy operated out of Lowry Air Force Base in Denver. With the opening of the Colorado Springs facility, the cadets moved to the new campus, though not in the full-kit march that some urban and campus legends suggest. The first class of Space Force officers from the Air Force Academy commissioned on April 18, 2020.
Military installations
The major military installations in Colorado include:
Buckley Space Force Base (1938–)
Air Reserve Personnel Center (1953–)
Fort Carson (U.S. Army 1942–)
Piñon Canyon Maneuver Site (1983–)
Peterson Space Force Base (1942–)
Cheyenne Mountain Space Force Station (1961–)
Pueblo Chemical Depot (U.S. Army 1942–)
Schriever Space Force Base (1983–)
United States Air Force Academy (1954–)
Former military posts in Colorado include:
Spanish Fort (Spanish Army 1819–1821)
Fort Massachusetts (U.S. Army 1852–1858)
Fort Garland (U.S. Army 1858–1883)
Camp Collins (U.S. Army 1862–1870)
Fort Logan (U.S. Army 1887–1946)
Colorado National Guard Armory (1913–1933)
Fitzsimons Army Hospital (U.S. Army 1918–1999)
Denver Medical Depot (U.S. Army 1925–1949)
Lowry Air Force Base (1938–1994)
Pueblo Army Air Base (1941–1948)
Rocky Mountain Arsenal (U.S. Army 1942–1992)
Camp Hale (U.S. Army 1942–1945)
La Junta Army Air Field (1942–1946)
Leadville Army Air Field (1943–1944)
Government
State government
Like the federal government and all other U.S. states, Colorado's state constitution provides for three branches of government: the legislative, the executive, and the judicial branches.
The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat. Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), Secretary of State of Colorado, Colorado State Treasurer, and Attorney General of Colorado, all of whom serve four-year terms.
The Colorado Supreme Court, composed of seven justices, is the state's highest court. The Colorado Court of Appeals, with 22 judges, sits in divisions of three judges each. Colorado is divided into 22 judicial districts, each of which has a district court and a county court with limited jurisdiction. The state also has specialized water courts, which sit in seven distinct divisions around the state and which decide matters relating to water rights and the use and administration of water.
The state legislative body is the Colorado General Assembly, which is made up of two houses – the House of Representatives and the Senate. The House has 65 members and the Senate has 35. The Democratic Party currently holds a 23-to-12 majority in the Senate and a 46-to-19 majority in the House.
Most Coloradans are native to other states (nearly 60% according to the 2000 census). This is illustrated by the fact that the state did not have a native-born governor from 1975, when John David Vanderhoof left office, until 2007, when Bill Ritter took office; Ritter's election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958. (Vanderhoof had ascended from the lieutenant governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973.)
Tax is collected by the Colorado Department of Revenue.
Politics
Colorado was once considered a swing state, but has become a relatively safe blue state in both state and federal elections. In presidential elections, the state had not been won by double digits since 1984 until 2020, and it has backed the winning candidate in 9 of the last 11 elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years.
In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992. However, it became a competitive swing state in the 1990s. Since the mid-2000s, it has swung heavily to the Democrats, voting for Barack Obama in 2008 and 2012, Hillary Clinton in 2016, and Joe Biden in 2020.
Colorado politics exhibits a contrast between conservative cities such as Colorado Springs and Grand Junction, and liberal cities such as Boulder and Denver. Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and several western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far Western Colorado near Grand Junction.
Colorado is represented by two members of the United States Senate:
Class 2, John Hickenlooper (Democratic), since 2021
Class 3, Michael Bennet (Democratic), since 2009
Colorado is represented by eight members of the United States House of Representatives:
1st district: Diana DeGette (Democratic), since 1997
2nd district: Joe Neguse (Democratic), since 2019
3rd district: Lauren Boebert (Republican), since 2021
4th district: Ken Buck (Republican), since 2015
5th district: Doug Lamborn (Republican), since 2007
6th district: Jason Crow (Democratic), since 2019
7th district: Brittany Pettersen (Democratic), since 2023
8th district: Yadira Caraveo (Democratic), since 2023
In a 2020 study, Colorado was ranked as the seventh easiest state for citizens to vote in.
Significant initiatives and legislation enacted in Colorado
In 1881 Colorado voters approved a referendum that selected Denver as the state capital.
Colorado was the first state in the union to enact, by voter referendum, a law extending suffrage to women. That initiative was approved by the state's voters on November 7, 1893.
On the November 8, 1932, ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified.
Colorado has banned, via C.R.S. section 12-6-302, the sale of motor vehicles on Sunday since at least 1953.
In 1972 Colorado voters rejected a referendum proposal to fund the 1976 Winter Olympics, which had been scheduled to be held in the state. Denver had been chosen by the International Olympic Committee as the host city on May 12, 1970.
In 1992, by a margin of 53 to 47 percent, Colorado voters approved an amendment to the state constitution (Amendment 2) that would have prevented any city, town, or county in the state from taking any legislative, executive, or judicial action to recognize homosexuals or bisexuals as a protected class. In 1996, in a 6–3 ruling in Romer v. Evans, the U.S. Supreme Court struck down Amendment 2, finding that it violated the Equal Protection Clause.
In 2006, voters passed Amendment 43, which banned gay marriage in Colorado. That initiative was nullified by the U.S. Supreme Court's 2015 decision in Obergefell v. Hodges.
In 2012, voters approved Amendment 64, amending the state constitution to protect the "personal use" of marijuana by adults and establishing a framework to regulate cannabis in a manner similar to alcohol; the first recreational marijuana shops in the nation opened in Colorado on January 1, 2014.
On May 29, 2019, Governor Jared Polis signed House Bill 1124 immediately prohibiting law enforcement officials in Colorado from holding undocumented immigrants solely based on a request from U.S. Immigration and Customs Enforcement.
Native American reservations
The two Native American reservations remaining in Colorado are the Southern Ute Indian Reservation (1873; Ute dialect: Kapuuta-wa Moghwachi Núuchi-u) and Ute Mountain Ute Indian Reservation (1940; Ute dialect: Wʉgama Núuchi). The two abolished Indian reservations in Colorado were the Cheyenne and Arapaho Indian Reservation (1851–1870) and Ute Indian Reservation (1855–1873).
Protected areas
Colorado is home to 4 national parks, 9 national monuments, 3 national historic sites, 2 national recreation areas, 4 national historic trails, 1 national scenic trail, 11 national forests, 2 national grasslands, 44 national wildernesses, 3 national conservation areas, 8 national wildlife refuges, 3 national heritage areas, 26 national historic landmarks, 16 national natural landmarks, more than 1,500 listings on the National Register of Historic Places, 1 wild and scenic river, 42 state parks, 307 state wildlife areas, 93 state natural areas, 28 national recreation trails, 6 regional trails, and numerous other scenic, historic, and recreational areas.
See also
Bibliography of Colorado
Geography of Colorado
History of Colorado
Index of Colorado-related articles
List of Colorado-related lists
List of ships named the USS Colorado
Outline of Colorado
Footnotes
References
Further reading
Explore Colorado, A Naturalist's Handbook, The Denver Museum of Natural History and Westcliff Publishers, 1995, for an excellent guide to the ecological regions of Colorado.
The Archeology of Colorado, Revised Edition, E. Steve Cassells, Johnson Books, Boulder, Colorado, 1997, trade paperback.
Chokecherry Places, Essays from the High Plains, Merrill Gilfillan, Johnson Press, Boulder, Colorado, trade paperback.
The Tie That Binds, Kent Haruf, 1984, hardcover, a fictional account of farming in Colorado.
Railroads of Colorado: Your Guide to Colorado's Historic Trains and Railway Sites, Claude Wiatrowski, Voyageur Press, 2002, hardcover, 160 pages.
External links
State government
State of Colorado
Colorado Tourism Office
History Colorado
Federal government
Energy & Environmental Data for Colorado
USGS Colorado state facts, real-time, geographic, and other scientific resources of Colorado
United States Census Bureau
Colorado QuickFacts
2000 Census of Population and Housing for Colorado
USDA ERS Colorado state facts
Colorado State Guide, from the Library of Congress
Other
List of searchable databases produced by Colorado state agencies hosted by the American Library Association Government Documents Roundtable
Colorado County Evolution
Ask Colorado
Colorado Historic Newspapers Collection (CHNC)
Mountain and Desert Plants of Colorado and the Southwest,
Climate of Colorado
Holocene Volcano in Colorado (Smithsonian Institution Global Volcanism Program)
Contiguous United States
Former Spanish colonies
Colorado, Territory of
Colorado, State of
States of the United States
Western United States
1861 establishments in Colorado Territory |
5401 | https://en.wikipedia.org/wiki/Carboniferous | Carboniferous | The Carboniferous is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period to the beginning of the Permian Period. The name Carboniferous means "coal-bearing", from the Latin carbo ("coal") and fero ("bear, carry"), and refers to the many coal beds formed globally during that time.
The first of the modern 'system' names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession. The Carboniferous is often treated in North America as two geological periods, the earlier Mississippian and the later Pennsylvanian.
Terrestrial animal life was well established by the Carboniferous Period. Tetrapods (four-limbed vertebrates), which had originated from lobe-finned fish during the preceding Devonian, became pentadactylous during the Carboniferous and diversified, including early amphibian lineages such as temnospondyls, with the first appearance of amniotes, including synapsids (the group to which modern mammals belong) and reptiles, during the late Carboniferous. The period is sometimes called the Age of Amphibians, during which amphibians became the dominant land vertebrates and diversified into many lizard-like, snake-like, and crocodile-like forms.
Insects underwent a major radiation during the late Carboniferous. Vast swaths of forest covered the land, which eventually fell and became the coal beds characteristic of the Carboniferous stratigraphy evident today.
The latter half of the period experienced glaciations, low sea levels, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change.
Etymology and history
The term "Carboniferous" had first been used as an adjective by Irish geologist Richard Kirwan in 1799, and later used in a heading entitled "Coal-measures or Carboniferous Strata" by John Farey Sr. in 1811, becoming an informal term referring to coal-bearing sequences in Britain and elsewhere in Western Europe. Four units were originally ascribed to the Carboniferous, in ascending order, the Old Red Sandstone, Carboniferous Limestone, Millstone Grit and the Coal Measures. These four units were placed into a formalised Carboniferous unit by William Conybeare and William Phillips in 1822, and later into the Carboniferous System by Phillips in 1835. The Old Red Sandstone was later considered Devonian in age. Subsequently, separate stratigraphic schemes were developed in Western Europe, North America, and Russia. The first attempt to build an international timescale for the Carboniferous was during the Eighth International Congress on Carboniferous Stratigraphy and Geology in Moscow in 1975, when all of the modern ICS stages were proposed.
Stratigraphy
The Carboniferous is divided into two subsystems, the lower Mississippian and upper Pennsylvanian, which are sometimes treated as separate geological periods in North American stratigraphy.
Stages can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratify global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. The ICS subdivisions from youngest to oldest are as follows:
ICS units
The Mississippian was first proposed by Alexander Winchell, the Pennsylvanian was proposed by J. J. Stevenson in 1888, and both were proposed as distinct and independent systems by H. S. Williams in 1891.
The Tournaisian was named after the Belgian city of Tournai. It was introduced in scientific literature by Belgian geologist André Hubert Dumont in 1832. The GSSP for the base of the Tournaisian is located at the La Serre section in Montagne Noire, southern France. It is defined by the first appearance datum of the conodont Siphonodella sulcata and was ratified in 1990. However, the GSSP was later shown to be problematic, as Siphonodella sulcata was found to occur 0.45 m below the proposed boundary.
The Viséan Stage was introduced by André Dumont in 1832. Dumont named this stage after the city of Visé in Belgium's Liège Province. The GSSP for the Visean is located in Bed 83 at the Pengchong section, Guangxi, southern China, which was ratified in 2012. The GSSP for the base of the Viséan is the first appearance datum of fusulinid (an extinct group of forams) Eoparastaffella simplex.
The Serpukhovian Stage was proposed in 1890 by Russian stratigrapher Sergei Nikitin. It is named after the city of Serpukhov, near Moscow. The Serpukhovian Stage currently lacks a defined GSSP. The proposed definition for the base of the Serpukhovian is the first appearance of conodont Lochriea ziegleri.
The Bashkirian was named after Bashkiria, the then Russian name of the republic of Bashkortostan in the southern Ural Mountains of Russia. The stage was introduced by Russian stratigrapher Sofia Semikhatova in 1934. The GSSP for the base of the Bashkirian is located at Arrow Canyon in Nevada, US, which was ratified in 1996. The GSSP for the base of the Bashkirian is defined by the first appearance of the conodont Declinognathodus noduliferus.
The Moscovian is named after Moscow, Russia, and was first introduced by Sergei Nikitin in 1890. The Moscovian currently lacks a defined GSSP.
The Kasimovian is named after the Russian city of Kasimov, and originally included as part of Nikitin's original 1890 definition of the Moscovian. It was first recognised as a distinct unit by A.P. Ivanov in 1926, who named it the "Tiguliferina" Horizon after a kind of brachiopod. The Kasimovian currently lacks a defined GSSP.
The Gzhelian is named after the Russian village of Gzhel (), nearby Ramenskoye, not far from Moscow. The name and type locality were defined by Sergei Nikitin in 1890. The base of the Gzhelian currently lacks a defined GSSP.
The GSSP for the base of the Permian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan, which was ratified in 1996. The beginning of the stage is defined by the first appearance of the conodont Streptognathodus postfusus.
Regional stratigraphy
North America
In North American stratigraphy, the Mississippian is divided, in ascending order, into the Kinderhookian, Osagean, Meramecian and Chesterian series, while the Pennsylvanian is divided into the Morrowan, Atokan, Desmoinesian, Missourian and Virgilian series.
The Kinderhookian is named after the village of Kinderhook, Pike County, Illinois. It corresponds to the lower part of the Tournasian.
The Osagean is named after the Osage River in St. Clair County, Missouri. It corresponds to the upper part of the Tournaisian and the lower part of the Viséan.
The Meramecian is named after the Meramec Highlands Quarry, located near the Meramec River, southwest of St. Louis, Missouri. It corresponds to the middle Viséan.
The Chesterian is named after the Chester Group, a sequence of rocks named after the town of Chester, Illinois. It corresponds to the upper Viséan and all of the Serpukhovian.
The Morrowan is named after the Morrow Formation located in northwestern Arkansas. It corresponds to the lower Bashkirian.
The Atokan was originally a formation named after the town of Atoka in southwestern Oklahoma. It corresponds to the upper Bashkirian and lower Moscovian.
The Desmoinesian is named after the Des Moines Formation found near the Des Moines River in central Iowa. It corresponds to the middle and upper Moscovian and lower Kasimovian.
The Missourian was named at the same time as the Desmoinesian. It corresponds to the middle and upper Kasimovian.
The Virgilian is named after the town of Virgil, Kansas. It corresponds to the Gzhelian.
Europe
The European Carboniferous is divided into the lower Dinantian and upper Silesian, the former being named for the Belgian city of Dinant, and the latter for the Silesia region of Central Europe. The boundary between the two subdivisions is older than the Mississippian-Pennsylvanian boundary, lying within the lower Serpukhovian. The boundary has traditionally been marked by the first appearance of the ammonoid Cravenoceras leion. In Europe, the Dinantian is primarily marine, the so-called "Carboniferous Limestone", while the Silesian is known primarily for its coal measures.
The Dinantian is divided up into two stages, the Tournaisian and Viséan. The Tournaisian is the same length as the ICS stage, but the Viséan is longer, extending into the lower Serpukhovian.
The Silesian is divided into three stages, in ascending order, the Namurian, Westphalian, Stephanian. The Autunian, which corresponds to the middle and upper Gzhelian, is considered a part of the overlying Rotliegend.
The Namurian is named after the city of Namur in Belgium. It corresponds to the middle and upper Serpukhovian and the lower Bashkirian.
The Westphalian is named after the region of Westphalia in Germany. It corresponds to the upper Bashkirian and all but the uppermost Moscovian.
The Stephanian is named after the city of Saint-Étienne in eastern France. It corresponds to the uppermost Moscovian, the Kasimovian, and the lower Gzhelian.
Palaeogeography
A global drop in sea level at the end of the Devonian reversed early in the Carboniferous; this created the widespread inland seas and the carbonate deposition of the Mississippian. There was also a drop in south polar temperatures; southern Gondwanaland was glaciated for much of the period, though it is uncertain if the ice sheets were a holdover from the Devonian or not. These conditions apparently had little effect in the deep tropics, where lush swamps, later to become coal, flourished to within 30 degrees of the northernmost glaciers.
Mid-Carboniferous, a drop in sea level precipitated a major marine extinction, one that hit crinoids and ammonites especially hard. This sea level drop and the associated unconformity in North America separate the Mississippian Subperiod from the Pennsylvanian Subperiod. This happened about 323 million years ago, at the onset of the Permo-Carboniferous Glaciation.
The Carboniferous was a time of active mountain-building as the supercontinent Pangaea came together. The southern continents remained tied together in the supercontinent Gondwana, which collided with North America–Europe (Laurussia) along the present line of eastern North America. This continental collision resulted in the Hercynian orogeny in Europe and the Alleghenian orogeny in North America; it also extended the newly uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of the present-day eastern Eurasian plate welded itself to Europe along the line of the Ural Mountains. Most of the Mesozoic supercontinent of Pangaea was now assembled, although the North China continent (which collided in the latest Carboniferous) and the South China continent were still separated from Laurasia. The Late Carboniferous Pangaea was shaped like an "O".
There were two major oceans in the Carboniferous: Panthalassa and Paleo-Tethys, which was inside the "O" in the Carboniferous Pangaea. Other minor oceans were shrinking and eventually closed: the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (which was closed by the collision of Baltica and Siberia continents, creating the Ural Mountains), and the Proto-Tethys Ocean (closed by North China collision with Siberia/Kazakhstania). In the Late Carboniferous, a shallow epicontinental sea covered a significant part of what is today northwestern Europe.
Climate
Average global temperatures in the Early Carboniferous Period were high: approximately 20 °C (68 °F). However, cooling during the Middle Carboniferous reduced average global temperatures to about 12 °C (54 °F). Atmospheric carbon dioxide levels fell during the Carboniferous Period from roughly 8 times the current level at the beginning to a level similar to today's at the end. The Carboniferous is considered part of the Late Palaeozoic Ice Age, which began in the latest Devonian with the formation of small glaciers in Gondwana. During the Tournaisian the climate warmed before cooling again; there was another warm interval during the Viséan, but cooling resumed during the early Serpukhovian. At the beginning of the Pennsylvanian, around 323 million years ago, glaciers began to form around the South Pole, growing to cover a vast area of Gondwana. This area extended from the southern reaches of the Amazon basin and covered large areas of southern Africa, as well as most of Australia and Antarctica. Cyclothems, which began around 313 million years ago and continued into the following Permian, indicate that the size of the glaciers was controlled by Milankovitch cycles akin to those of recent ice ages, with alternating glacial and interglacial periods. Deep ocean temperatures during this time were cold due to the influx of cold bottom waters generated by seasonal melting of the ice cap.
The cooling and drying of the climate led to the Carboniferous Rainforest Collapse (CRC) during the late Carboniferous. Tropical rainforests fragmented and then were eventually devastated by climate change.
Rocks and coal
Carboniferous rocks in Europe and eastern North America largely consist of a repeated sequence of limestone, sandstone, shale and coal beds. In North America, the early Carboniferous is largely marine limestone, which accounts for the division of the Carboniferous into two periods in North American schemes. The Carboniferous coal beds provided much of the fuel for power generation during the Industrial Revolution and are still of great economic importance.
The large coal deposits of the Carboniferous may owe their existence primarily to two factors. The first of these is the appearance of wood tissue and bark-bearing trees. The evolution of the wood fiber lignin and the bark-sealing, waxy substance suberin variously opposed decay organisms so effectively that dead materials accumulated long enough to fossilise on a large scale. The second factor was the lower sea levels that occurred during the Carboniferous as compared to the preceding Devonian Period. This fostered the development of extensive lowland swamps and forests in North America and Europe. Based on a genetic analysis of basidiomycetes, it was proposed that large quantities of wood were buried during this period because animals and decomposing bacteria and fungi had not yet evolved enzymes that could effectively digest the resistant phenolic lignin polymers and waxy suberin polymers. The authors of that study suggested that fungi able to break those substances down effectively became dominant only towards the end of the period, making subsequent coal formation much rarer. The delayed fungal evolution hypothesis has been challenged by other researchers, who conclude that tectonic and climatic conditions during the formation of Pangaea, which created water-filled basins alongside developing mountain ranges and led to widespread humid, tropical conditions and the burial of massive quantities of organic matter, were responsible for the high rate of coal formation. These researchers note that large amounts of coal also formed during the Mesozoic and Cenozoic, well after lignin-digesting fungi had become well established, and that fungal degradation of lignin had likely already evolved by the end of the Devonian, even if the specific enzymes used by basidiomycetes had not.
Although it is often asserted that Carboniferous atmospheric oxygen concentrations were significantly higher than today's, at around 30% of the total atmosphere, estimates of prehistoric atmospheric oxygen concentrations are highly uncertain, and other estimates suggest that the amount of oxygen was actually lower than that present in today's atmosphere.
In eastern North America, marine beds are more common in the older part of the period than the later part and are almost entirely absent by the late Carboniferous. More diverse geology existed elsewhere, of course. Marine life is especially rich in crinoids and other echinoderms. Brachiopods were abundant. Trilobites became quite uncommon. On land, large and diverse plant populations existed. Land vertebrates included large amphibians.
Life
Plants
Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time.
The main Early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the "seed ferns", an assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during late Carboniferous, several other groups, Cycadophyta (cycads), the Callistophytales (another group of "seed ferns"), and the Voltziales, appeared.
The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club-mosses of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included Lepidodendron (with its cone called Lepidostrobus), Anabathra, Lepidophloios and Sigillaria. The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex, which also provided stability, instead of the xylem. The Cladoxylopsids were large trees that were ancestors of ferns, first arising in the Carboniferous.
The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and "seed ferns" include Pecopteris, Cyclopteris, Neuropteris, Alethopteris, and Sphenopteris; Megaphyton and Caulopteris were tree ferns.
The Equisetales included the common giant form Calamites, with a trunk diameter of 30 to and a height of up to . Sphenophyllum was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods.
Cordaites, a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; its catkin-like reproductive organs, which bore ovules/seeds, are known as Cardiocarpus. These plants were thought to live in swamps. True coniferous trees (Walchia, of the order Voltziales) appeared later in the Carboniferous and preferred higher, drier ground.
Marine invertebrates
In the oceans the marine invertebrate groups are the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). The diversity of brachiopods and fusulinid foraminiferans surged beginning in the Viséan, continuing through the end of the Carboniferous, although cephalopod and nektonic conodont diversity declined. This evolutionary radiation is known as the Carboniferous–earliest Permian Biodiversification Event. For the first time foraminifera take a prominent part in the marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan, and North America; other important genera include Valvulina, Endothyra, Archaediscus, and Saccammina (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The first true priapulids appeared during this period.
The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the Calcispongea Cotyliscus and Girtycoelia, the demosponge Chaetetes, and the genus of unusual colonial glass sponges Titusvillia.
Both reef-building and solitary corals diversify and flourish; these include rugose (for example, Caninia, Corwenia, Neozaphrentis), heterocoral, and tabulate (for example, Chladochonus, Michelinia) forms. Conularids were well represented by Conularia.
Bryozoa are abundant in some regions; the fenestellids include Fenestella, Polypora, and Archimedes, so named because it is in the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which reached a very large size for brachiopods and had very thick shells (for example, the giant Gigantoproductus), while others like Chonetes were more conservative in form. Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include Discina and Crania. Some species and genera had a very wide distribution with only minor variations.
Annelids such as Serpulites are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance. Typical genera include Aviculopecten, Posidonomya, Nucula, Carbonicola, Edmondia, and Modiola. Gastropods are also numerous, including the genera Murchisonia, Euomphalus, Naticopsis. Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids such as Aenigmatoceras are common.
Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included Amphissites, Bairdia, Beyrichiopsis, Cavellina, Coryellina, Cribroconcha, Hollinella, Kirkbya, Knoxiella, and Libumella.
Crinoids were highly numerous during the Carboniferous, though they suffered a gradual decline in diversity during the middle Mississippian. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include Cyathocrinus, Woodocrinus, and Actinocrinus. Echinoids such as Archaeocidaris and Palaeechinus were also present. The blastoids, which included the Pentreinitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attain their maximum development at this time.
Freshwater and lagoonal invertebrates
Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as Anthraconaia, Naiadites, and Carbonicola; diverse crustaceans such as Candona, Carbonita, Darwinula, Estheria, Acanthocaris, Dithyrocaris, and Anthrapalaemon.
The eurypterids were also diverse, and are represented by such genera as Adelophthalmus, Megarachne (originally misinterpreted as a giant spider, hence its name) and the specialised very large Hibbertopterus. Many of these were amphibious.
Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as Lingula, Orbiculoidea, and Productus being found in the thin beds known as marine bands.
Terrestrial invertebrates
Fossil remains of air-breathing insects, myriapods and arachnids are known from the Carboniferous, and their diversity shows that these arthropods were both well-developed and numerous. Some arthropods grew to large sizes, with the millipede-like Arthropleura being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), among which was Meganeura, a giant dragonfly-like insect and the largest flying insect ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches). Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: Archaeoptilus, from the Derbyshire coalfield, had a large wing, part of which is preserved, and some specimens (Brodia) still exhibit traces of brilliant wing colors. In the Nova Scotian tree trunks land snails (Archaeozonites, Dendropupa) have been found.
Fish
Many fish inhabited the Carboniferous seas, predominantly elasmobranchs (sharks and their relatives). These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other groups of elasmobranchs, like the ctenacanthiformes, grew to large sizes, with some genera like Saivodus reaching around 6–9 meters (20–30 feet). Other fish had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the other cartilaginous fish were marine, but others, like the Xenacanthida and several genera like Bandringa, invaded the fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large size.
Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole.
Freshwater fish were abundant, and include the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus.
Chondrichthyes (especially holocephalans like the Stethacanthids) underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian Period caused many environmental niches to become unoccupied and allowed new organisms to evolve and fill these niches. As a result of this evolutionary radiation, Carboniferous holocephalans assumed a wide variety of bizarre shapes, including Stethacanthus, which possessed a flat brush-like dorsal fin with a patch of denticles on its top. Stethacanthus's unusual fin may have been used in mating rituals. Other groups, like the eugeneodonts, filled in the niches left by large predatory placoderms. These fish were unique in that they possessed only one row of teeth in their upper or lower jaws in the form of elaborate tooth whorls. The first members of the Helicoprionidae, a family of eugeneodonts characterized by the presence of a single circular tooth whorl in the lower jaw, appeared during the lower Carboniferous. Perhaps the most bizarre radiation of holocephalans at this time was that of the iniopterygiformes, an order of holocephalans that greatly resembled modern-day flying fish and could have also "flown" through the water with their massive, elongated pectoral fins. They were further characterized by their large eye sockets, club-like structures on their tails, and spines on the tips of their fins.
Tetrapods
Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included a number of basal tetrapod groups classified in early books under the Labyrinthodontia. These had long bodies, a head covered with bony plates and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, often only about long. Some Carboniferous amphibians were aquatic and lived in rivers (Loxomma, Eogyrinus, Proterogyrinus); others may have been semi-aquatic (Ophiderpeton, Amphibamus, Hyloplesion) or terrestrial (Dendrerpeton, Tuditanus, Anthracosaurus).
The Carboniferous Rainforest Collapse slowed the evolution of amphibians, which could not survive as well in the cooler, drier conditions. Amniotes, however, prospered due to specific key adaptations. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed the laying of eggs in a dry environment, as well as keratinized scales and claws, allowing for the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles (Hylonomus), and the earliest known synapsid (Archaeothyris). Synapsids quickly became huge and diversified in the Permian, only for their dominance to stop during the Mesozoic Era. Sauropsids (reptiles, and also, later, birds) also diversified but remained small until the Mesozoic, during which they dominated the land, as well as the water and sky, only for their dominance to stop during the Cenozoic Era.
Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous Period, amniotes had already diversified into a number of groups, including several families of synapsid pelycosaurs, protorothyridids, captorhinids, saurians and araeoscelids.
Fungi
As plants and animals were growing in size and abundance in this time (for example, Lepidodendron), land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the Late Carboniferous (Pennsylvanian Epoch).
During the Carboniferous, animals and bacteria had great difficulty with processing the lignin and cellulose that made up the gigantic trees of the period. Microbes that could process them had not yet evolved. After the trees died, they simply piled up on the ground, occasionally becoming part of long-running wildfires after a lightning strike, with others very slowly degrading into coal. White rot fungi were the first organisms able to process these materials and break them down in any reasonable quantity and timescale. Thus, some have proposed that fungi helped end the Carboniferous Period, stopping the accumulation of undegraded plant matter, although this idea remains highly controversial.
Extinction events
Romer's gap
The first 15 million years of the Carboniferous had very limited terrestrial fossils. This gap in the fossil record is called Romer's gap after the American palaeontologist Alfred Romer. While it has long been debated whether the gap is an artefact of poor fossil preservation or reflects an actual event, recent work indicates the gap period saw a drop in atmospheric oxygen levels, indicating some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts and the rise of the more advanced temnospondyl and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna.
Carboniferous rainforest collapse
Before the end of the Carboniferous Period, an extinction event occurred. On land this event is referred to as the Carboniferous Rainforest Collapse (CRC). Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels.
The new climatic conditions were not favorable to the growth of rainforest and the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by much less diverse tree-fern dominated flora.
Amphibians, the dominant vertebrates at the time, fared poorly through this event with large losses in biodiversity; reptiles continued to diversify due to key adaptations that let them survive in the drier habitat, specifically the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts.
See also
List of Carboniferous tetrapods
Carboniferous rainforest collapse
Important Carboniferous Lagerstätten
Granton Shrimp Bed; 359 mya; Edinburgh, Scotland
East Kirkton Quarry; c. 350 mya; Bathgate, Scotland
Bear Gulch Limestone; 324 mya; Montana, US
Mazon Creek; 309 mya; Illinois, US
Hamilton Quarry; 300 mya; Kansas, US
List of fossil sites (with link directory)
References
Sources
Rainer Zangerl and Gerard Ramon Case: Iniopterygia: a new order of Chondrichthyan fishes from the Pennsylvanian of North America. Fieldiana Geology Memoirs, v. 6, Field Museum of Natural History, 1973. Biodiversity Heritage Library (full text, in English)
External links
Examples of Carboniferous Fossils
60+ images of Carboniferous Foraminifera
Carboniferous (chronostratigraphy scale)
Geological periods |
5403 | https://en.wikipedia.org/wiki/Comoros | Comoros | The Comoros, officially the Union of the Comoros, is an archipelagic country made up of three islands in Southeastern Africa, located at the northern end of the Mozambique Channel in the Indian Ocean. Its capital and largest city is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. Comoros proclaimed its independence from France on 6 July 1975. A member of the Arab League, it is the only country in the Arab world which is entirely in the Southern Hemisphere. It is a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Cooperation, and the Indian Ocean Commission. The country has three official languages: Shikomori, French and Arabic.
The sovereign state consists of three major islands and numerous smaller islands, all of the volcanic Comoro Islands with the exception of Mayotte. Mayotte voted against independence from France in a referendum in 1974, and continues to be administered by France as an overseas department. France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. Mayotte became an overseas department and a region of France in 2011 following a referendum which was passed overwhelmingly.
The Comoros is the third-smallest African country by area. In 2019, its population was estimated to be 850,886.
The Comoros were likely first settled by Austronesian/Malagasy peoples, Bantu speakers from East Africa, and seafaring Arab traders. It became part of the French colonial empire during the 19th century, before gaining independence in 1975. It has experienced more than 20 coups or attempted coups, with various heads of state assassinated. Along with this constant political instability, it has one of the worst levels of income inequality of any nation and ranks in the lowest quartile on the Human Development Index. About half the population lives below the international poverty line of US$1.25 a day.
Etymology
The name "Comoros" derives from the Arabic word qamar ("moon").
History
Settlement
According to mythology, a jinni (spirit) dropped a jewel, which formed a great circular inferno. This became the Karthala volcano, which created the island of Ngazidja (Grande Comore). King Solomon is also said to have visited the island accompanied by his queen Bilqis.
The first attested human inhabitants of the Comoro Islands are now thought to have been Austronesian settlers travelling by boat from islands in Southeast Asia. These people arrived no later than the eighth century AD, the date of the earliest known archaeological site, found on Mayotte, although settlement beginning as early as the first century has been postulated.
Subsequent settlers came from the east coast of Africa, the Arabian Peninsula and the Persian Gulf, the Malay Archipelago, and Madagascar. Bantu-speaking settlers were present on the islands from the beginnings of settlement, probably brought to the islands as slaves.
Development of the Comoros is divided into phases. The earliest reliably recorded phase is the Dembeni phase (eighth to tenth centuries), during which there were several small settlements on each island. From the eleventh to the fifteenth centuries, trade with the island of Madagascar and merchants from the Swahili coast and the Middle East flourished, more villages were founded and existing villages grew. Many Comorians can trace their genealogies to ancestors from the Arabian peninsula, particularly Hadhramaut, who arrived during this period.
Medieval Comoros
According to legend, in 632, upon hearing of Islam, islanders are said to have dispatched an emissary, Mtswa-Mwindza, to Mecca—but by the time he arrived there, the Islamic prophet Muhammad had died. Nonetheless, after a stay in Mecca, he returned to Ngazidja, where he built a mosque in his home town of Ntsaweni, and led the gradual conversion of the islanders to Islam.
In 933, the Comoros was referred to by Omani sailors as the Perfume Islands.
Among the earliest accounts of East Africa, the works of Al-Masudi describe early Islamic trade routes, and how the coast and islands were frequently visited by Muslims including Persian and Arab merchants and sailors in search of coral, ambergris, ivory, tortoiseshell, gold and slaves. They also brought Islam to the people of the Zanj including the Comoros. As the importance of the Comoros grew along the East African coast, both small and large mosques were constructed. The Comoros are part of the Swahili cultural and economic complex and the islands became a major hub of trade and an important location in a network of trading towns that included Kilwa, in present-day Tanzania, Sofala (an outlet for Zimbabwean gold), in Mozambique, and Mombasa in Kenya.
The Portuguese arrived in the Indian Ocean at the end of the 15th century and the first Portuguese visit to the islands seems to have been that of Vasco da Gama's second fleet in 1503. For much of the 16th century the islands provided provisions to the Portuguese fort at Mozambique and although there was no formal attempt by the Portuguese crown to take possession, a number of Portuguese traders settled and married local women.
By the end of the 16th century local rulers on the African mainland were beginning to push back and, with the support of the Omani Sultan Saif bin Sultan they began to defeat the Dutch and the Portuguese. One of his successors, Said bin Sultan, increased Omani Arab influence in the region, moving his administration to nearby Zanzibar, which came under Omani rule. Nevertheless, the Comoros remained independent, and although the three smaller islands were usually politically unified, the largest island, Ngazidja, was divided into a number of autonomous kingdoms (ntsi).
The islands were well placed to meet the needs of Europeans, initially supplying the Portuguese in Mozambique, then ships, particularly the English, on the route to India, and, later, slaves to the plantation islands in the Mascarenes.
European contact and French colonisation
In the last decade of the 18th century, Malagasy warriors, mostly Betsimisaraka and Sakalava, started raiding the Comoros for slaves and the islands were devastated as crops were destroyed and the people were slaughtered, taken into captivity or fled to the African mainland: it is said that by the time the raids finally ended in the second decade of the 19th century only one man remained on Mwali. The islands were repopulated by slaves from the mainland, who were traded to the French in Mayotte and the Mascarenes. On the Comoros, it was estimated in 1865 that as much as 40% of the population consisted of slaves.
France first established colonial rule in the Comoros by taking possession of Mayotte in 1841 when the Sakalava usurper sultan (also known as Tsy Levalo) signed the Treaty of April 1841, which ceded the island to the French authorities. After its annexation, France attempted to convert Mayotte into a sugar plantation colony.
Meanwhile, Ndzwani (or Johanna as it was known to the British) continued to serve as a way station for English merchants sailing to India and the Far East, as well as American whalers, although the British gradually abandoned it following their possession of Mauritius in 1814, and by the time the Suez Canal opened in 1869 there was no longer any significant supply trade at Ndzwani. Local commodities exported by the Comoros were, in addition to slaves, coconuts, timber, cattle and tortoiseshell. British and American settlers, as well as the island's sultan, established a plantation-based economy that used about one-third of the land for export crops. In addition to sugar on Mayotte, ylang-ylang and other perfume plants, vanilla, cloves, coffee, cocoa beans, and sisal were introduced.
In 1886, Mwali was placed under French protection by its Sultan Mardjani Abdou Cheikh. That same year, Sultan Said Ali of Bambao, one of the sultanates on Ngazidja, placed the island under French protection in exchange for French support of his claim to the entire island, which he retained until his abdication in 1910. In 1908 the four islands were unified under a single administration (Colonie de Mayotte et dépendances) and placed under the authority of the French colonial Governor-General of Madagascar. In 1909, Sultan Said Muhamed of Ndzwani abdicated in favour of French rule and in 1912 the protectorates were abolished and the islands administered as a single colony. Two years later the colony was abolished and the islands became a province of the colony of Madagascar.
Agreement was reached with France in 1973 for the Comoros to become independent in 1978, despite the deputies of Mayotte voting for increased integration with France. A referendum was held on all four of the islands. Three voted for independence by large margins, while Mayotte voted against. On 6 July 1975, however, the Comorian parliament passed a unilateral resolution declaring independence. Ahmed Abdallah proclaimed the independence of the Comorian State (État comorien; دولة القمر) and became its first president. France did not recognise the new state until 31 December, and retained control of Mayotte.
Independence (1975)
The next 30 years were a period of political turmoil. On 3 August 1975, less than one month after independence, president Ahmed Abdallah was removed from office in an armed coup and replaced with United National Front of the Comoros (FNUK) member Said Mohamed Jaffar. Months later, in January 1976, Jaffar was ousted in favour of his Minister of Defence Ali Soilihi.
The population of Mayotte voted against independence from France in three referendums during this period. The first, held on all the islands on 22 December 1974, won 63.8% support for maintaining ties with France on Mayotte; the second, held in February 1976, confirmed that vote with an overwhelming 99.4%, while the third, in April 1976, confirmed that the people of Mayotte wished to remain a French territory. The three remaining islands, ruled by President Soilihi, instituted a number of socialist and isolationist policies that soon strained relations with France. On 13 May 1978, Bob Denard, once again commissioned by the French intelligence service (SDECE), returned to overthrow President Soilihi and reinstate Abdallah with the support of the French, Rhodesian and South African governments. Ali Soilihi was captured and executed a few weeks later.
In contrast to Soilihi's, Abdallah's presidency was marked by authoritarian rule and increased adherence to traditional Islam, and the country was renamed the Federal Islamic Republic of the Comoros (République Fédérale Islamique des Comores; جمهورية القمر الإتحادية الإسلامية). Bob Denard served as Abdallah's first advisor; nicknamed the "Viceroy of the Comoros", he was sometimes considered the real strongman of the regime. Maintaining close ties with South Africa, which financed his "presidential guard", he allowed Paris to circumvent the international embargo on the apartheid regime via Moroni. He also established in the archipelago a permanent mercenary corps that could be called upon to intervene in African conflicts at the request of Paris or Pretoria. Abdallah continued as president until 1989 when, fearing a probable coup, he signed a decree ordering the Presidential Guard, led by Bob Denard, to disarm the armed forces. Shortly after the signing of the decree, Abdallah was allegedly shot dead in his office by a disgruntled military officer, though later sources claim an antitank missile was launched into his bedroom and killed him. Although Denard was also injured, it is suspected that Abdallah's killer was a soldier under his command.
A few days later, Bob Denard was evacuated to South Africa by French paratroopers. Said Mohamed Djohar, Soilihi's older half-brother, then became president, and served until September 1995, when Bob Denard returned and attempted another coup. This time France intervened with paratroopers and forced Denard to surrender. The French removed Djohar to Reunion, and the Paris-backed Mohamed Taki Abdoulkarim became president by election. He led the country from 1996, during a time of labour crises, government suppression, and secessionist conflicts, until his death in November 1998. He was succeeded by Interim President Tadjidine Ben Said Massounde.
The islands of Ndzwani and Mwali declared their independence from the Comoros in 1997 in an attempt to restore French rule, but France rejected their request, leading to bloody confrontations between federal troops and rebels. In April 1999, Colonel Azali Assoumani, the Army Chief of Staff, seized power in a bloodless coup, overthrowing Interim President Massounde and citing weak leadership in the face of the crisis. This was the Comoros' 18th coup or attempted coup d'état since independence in 1975.
Azali failed to consolidate power and reestablish control over the islands, which was the subject of international criticism. The African Union, under the auspices of President Thabo Mbeki of South Africa, imposed sanctions on Ndzwani to help broker negotiations and effect reconciliation. Under the terms of the Fomboni Accords, signed in December 2001 by the leaders of all three islands, the official name of the country was changed to the Union of the Comoros; the new state was to be highly decentralised and the central union government would devolve most powers to the new island governments, each led by a president. The Union president, although elected by national elections, would be chosen in rotation from each of the islands every five years.
Azali stepped down in 2002 to run in the democratic election of the President of the Comoros, which he won. Under ongoing international pressure, as a military ruler who had originally come to power by force and had not always governed democratically while in office, Azali led the Comoros through constitutional changes that enabled new elections. A Loi des compétences (a law defining the responsibilities of each governmental body) was passed in early 2005 and is in the process of implementation. The elections in 2006 were won by Ahmed Abdallah Mohamed Sambi, a Sunni Muslim cleric nicknamed the "Ayatollah" for his time spent studying Islam in Iran. Azali honoured the election results, thus allowing the first peaceful and democratic exchange of power for the archipelago.
Colonel Mohammed Bacar, a French-trained former gendarme elected President of Ndzwani in 2001, refused to step down at the end of his five-year mandate. In June 2007 he staged a vote to confirm his leadership, which was rejected as illegal by the Comoros federal government and the African Union. On 25 March 2008 hundreds of soldiers from the African Union and the Comoros seized rebel-held Ndzwani in an operation generally welcomed by the population: there have been reports of hundreds, if not thousands, of people tortured during Bacar's tenure.
Some rebels were killed and injured, but there are no official figures. At least 11 civilians were wounded. Some officials were imprisoned. Bacar fled in a speedboat to Mayotte to seek asylum. Anti-French protests followed in the Comoros (see 2008 invasion of Anjouan). Bacar was eventually granted asylum in Benin.
Since independence from France, the Comoros experienced more than 20 coups or attempted coups.
Following elections in late 2010, former Vice-president Ikililou Dhoinine was inaugurated as president on 26 May 2011. A member of the ruling party, Dhoinine was supported in the election by the incumbent President Ahmed Abdallah Mohamed Sambi. Dhoinine, a pharmacist by training, is the first President of the Comoros from the island of Mwali. Following the 2016 elections, Azali Assoumani, from Ngazidja, became president for a third term. In 2018 Azali held a referendum on constitutional reform that would permit a president to serve two terms. The amendments passed, although the vote was widely contested and boycotted by the opposition, and in April 2019, amid widespread opposition, Azali was re-elected president to serve the first of potentially two five-year terms.
In January 2020, the legislative elections in the Comoros were dominated by President Azali Assoumani's party, the Convention for the Renewal of the Comoros (CRC), which took an overwhelming majority of 17 of the 24 seats in parliament, strengthening his hold on power.
In 2021, Comoros signed and ratified the Treaty on the Prohibition of Nuclear Weapons, making it a nuclear-weapon-free state. In 2023, Comoros was invited as a non-member guest to the G7 summit in Hiroshima.
On 18 February 2023 the Comoros assumed the presidency of the African Union.
Geography
The Comoros is formed by Ngazidja (Grande Comore), Mwali (Mohéli) and Ndzwani (Anjouan), three major islands in the Comoros Archipelago, as well as many minor islets. The islands are officially known by their Comorian language names, though international sources still use their French names (given in parentheses above). The capital and largest city, Moroni, is located on Ngazidja. The archipelago is situated in the Indian Ocean, in the Mozambique Channel, between the African coast (nearest to Mozambique and Tanzania) and Madagascar, with no land borders.
At , it is one of the smallest countries in the world. The Comoros also has claim to of territorial seas. The interiors of the islands vary from steep mountains to low hills.
Ngazidja is the largest island of the Comoros Archipelago, with an area of 1,024 km2. It is also the youngest island, and therefore has rocky soil. The island's two volcanoes, Karthala (active) and La Grille (dormant), and the lack of good harbours are distinctive characteristics of its terrain. Mwali, with its capital at Fomboni, is the smallest of the four major islands. Ndzwani, whose capital is Mutsamudu, has a distinctive triangular shape caused by three mountain chains – Shisiwani, Nioumakele and Jimilime – emanating from a central peak.
The islands of the Comoros Archipelago were formed by volcanic activity. Mount Karthala, an active shield volcano located on Ngazidja, is the country's highest point, at . It contains the Comoros' largest patch of disappearing rainforest. Karthala is currently one of the most active volcanoes in the world, with a minor eruption in May 2006, and prior eruptions as recently as April 2005 and 1991. In the 2005 eruption, which lasted from 17 to 19 April, 40,000 citizens were evacuated, and the crater lake in the volcano's caldera was destroyed.
The Comoros also lays claim to the Îles Éparses or Îles éparses de l'océan indien (Scattered Islands in the Indian Ocean) – Glorioso Islands, comprising Grande Glorieuse, Île du Lys, Wreck Rock, South Rock, (three islets) and three unnamed islets – one of France's overseas districts. The Glorioso Islands were administered by the colonial Comoros before 1975, and are therefore sometimes considered part of the Comoros Archipelago. Banc du Geyser, a former island in the Comoros Archipelago, now submerged, is geographically located in the Îles Éparses, but was annexed by Madagascar in 1976 as an unclaimed territory. The Comoros and France each still view the Banc du Geyser as part of the Glorioso Islands and, thus, part of its particular exclusive economic zone.
Climate
The climate is generally tropical and mild, and the two major seasons are distinguishable by their raininess. The temperature reaches an average of in March, the hottest month in the rainy season (called kashkazi/kaskazi [meaning north monsoon], which runs from November to April), and an average low of in the cool, dry season (kusi (meaning south monsoon), which proceeds from May to October). The islands are rarely subject to cyclones.
Biodiversity
The Comoros constitute an ecoregion in their own right, the Comoros forests. The country had a 2018 Forest Landscape Integrity Index mean score of 7.69/10, ranking it 33rd globally out of 172 countries.
In December 1952 a specimen of the West Indian Ocean coelacanth fish was re-discovered off the Comoros coast. The species, which was thought to have gone extinct 66 million years ago, was considered long lost until its first recorded modern appearance in 1938 off the South African coast. Between 1938 and 1975, 84 specimens were caught and recorded.
Protected areas
There are six national parks in the Comoros – Karthala, Coelacanth, and Mitsamiouli Ndroudi on Grande Comore, Mount Ntringui and Shisiwani on Anjouan, and Mohéli National Park on Mohéli. Karthala and Mount Ntringui national parks cover the highest peaks on the respective islands, and Coelacanth, Mitsamiouli Ndroudi, and Shisiwani are marine national parks that protect the islands' coastal waters and fringing reefs. Mohéli National Park includes both terrestrial and marine areas.
Government
Politics of the Comoros takes place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. The Constitution of the Union of the Comoros was ratified by referendum on 23 December 2001, and the islands' constitutions and executives were elected in the following months. It had previously been considered a military dictatorship, and the transfer of power from Azali Assoumani to Ahmed Abdallah Mohamed Sambi in May 2006 was a watershed moment as it was the first peaceful transfer in Comorian history.
Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The preamble of the constitution guarantees an Islamic inspiration in governance, a commitment to human rights and several specifically enumerated rights, democracy, and "a common destiny" for all Comorians. Each of the islands (according to Title II of the Constitution) has a great deal of autonomy in the Union, including its own constitution (or Fundamental Law), president, and parliament. The presidency and Assembly of the Union are distinct from the islands' governments. The presidency of the Union rotates between the islands. Despite widespread misgivings about the durability of the system of presidential rotation, Ngazidja currently holds the rotating presidency, and Azali is President of the Union; Ndzwani is in theory due to provide the next president.
Legal system
The Comorian legal system rests on Islamic law, an inherited French (Napoleonic Code) legal code, and customary law (mila na ntsi). Village elders, kadis or civilian courts settle most disputes. The judiciary is independent of the legislative and the executive. The Supreme Court acts as a Constitutional Council in resolving constitutional questions and supervising presidential elections. As High Court of Justice, the Supreme Court also arbitrates in cases where the government is accused of malpractice. The Supreme Court consists of two members selected by the president, two elected by the Federal Assembly, and one by the council of each island.
Political culture
Around 80 percent of the central government's annual budget is spent on the country's complex administrative system which provides for a semi-autonomous government and president for each of the three islands and a rotating presidency for the overarching Union government. A referendum took place on 16 May 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. Following the implementation of the changes, each island's president became a governor and the ministers became councillors.
Foreign relations
In November 1975, the Comoros became the 143rd member of the United Nations. The new nation was defined as comprising the entire archipelago, although the citizens of Mayotte chose to become French citizens and keep their island as a French territory.
The Comoros has repeatedly pressed its claim to Mayotte before the United Nations General Assembly, which adopted a series of resolutions under the caption "Question of the Comorian Island of Mayotte", opining that Mayotte belongs to the Comoros under the principle that the territorial integrity of colonial territories should be preserved upon independence. As a practical matter, however, these resolutions have little effect and there is no foreseeable likelihood that Mayotte will become de facto part of the Comoros without its people's consent. More recently, the Assembly has maintained this item on its agenda but deferred it from year to year without taking action. Other bodies, including the Organization of African Unity, the Movement of Non-Aligned Countries and the Organisation of Islamic Cooperation, have similarly questioned French sovereignty over Mayotte. To close the debate and to avoid being integrated by force in the Union of the Comoros, the population of Mayotte overwhelmingly chose to become an overseas department and a region of France in a 2009 referendum. The new status was effective on 31 March 2011 and Mayotte has been recognised as an outermost region by the European Union on 1 January 2014. This decision legally integrates Mayotte in the French Republic.
The Comoros is a member of the United Nations, the African Union, the Arab League, the World Bank, the International Monetary Fund, the Indian Ocean Commission and the African Development Bank. On 10 April 2008, the Comoros became the 179th nation to accept the Kyoto Protocol to the United Nations Framework Convention on Climate Change. The Comoros signed the UN treaty on the Prohibition of Nuclear Weapons. Azali Assoumani, President of the Comoros and Chair of the African Union, attended the 2023 Russia–Africa Summit in Saint Petersburg.
In May 2013 the Union of the Comoros filed a referral to the Office of the Prosecutor of the International Criminal Court (ICC) regarding "the 31 May 2010 Israeli raid on the Humanitarian Aid Flotilla bound for [the] Gaza Strip". In November 2014 the ICC Prosecutor decided that the events did constitute war crimes but did not meet the gravity threshold required to bring the case before the ICC.
The emigration rate of skilled workers was about 21.2% in 2000.
Military
The military resources of the Comoros consist of a small standing army and a 500-member police force, as well as a 500-member defence force. A defence treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains the presence of a few senior officers in the Comoros at government request, as well as a small maritime base and a Foreign Legion Detachment (DLEM) on Mayotte.
Once the new government was installed in May–June 2011, an expert mission from UNREC (Lomé) came to the Comoros and produced guidelines for the elaboration of a national security policy, which were discussed by different actors, notably the national defence authorities and civil society. By the end of the programme in late March 2012, a normative framework agreed upon by all entities involved in security sector reform (SSR) was to have been established, to be adopted by Parliament and implemented by the authorities.
Human rights
Both male and female same-sex sexual acts are illegal in Comoros. Such acts are punished with up to five years imprisonment.
Economy
The level of poverty in the Comoros is high, but "judging by the international poverty threshold of $1.9 per person per day, only two out of every ten Comorians could be classified as poor, a rate that places the Comoros ahead of other low-income countries and 30 percentage points ahead of other countries in Sub-Saharan Africa." Poverty declined by about 10% between 2014 and 2018, and living conditions generally improved. Economic inequality remains widespread, with a major gap between rural and urban areas. Remittances through the sizable Comorian diaspora form a substantial part of the country's GDP and have contributed to decreases in poverty and increases in living standards.
According to ILO's ILOSTAT statistical database, between 1991 and 2019 the unemployment rate as a percent of the total labor force ranged from 4.38% to 4.3%. An October 2005 paper by the Comoros Ministry of Planning and Regional Development, however, reported that "registered unemployment rate is 14.3 percent, distributed very unevenly among and within the islands, but with marked incidence in urban areas."
In 2019, more than 56% of the labor force was employed in agriculture, with 29% employed in industry and 14% employed in services. The islands' agricultural sector is based on the export of spices, including vanilla, cinnamon, and cloves, and thus susceptible to price fluctuations in the volatile world commodity market for these goods. The Comoros is the world's largest producer of ylang-ylang, a plant whose extracted essential oil is used in the perfume industry; some 80% of the world's supply comes from the Comoros.
High population densities, reaching as much as 1,000 people per square kilometre in the densest agricultural zones, in what is still a mostly rural, agricultural economy, may lead to an environmental crisis in the near future, especially considering the high rate of population growth. In 2004 the Comoros' real GDP growth was a low 1.9% and real GDP per capita continued to decline. These declines are explained by factors including declining investment, drops in consumption, rising inflation, and a growing trade imbalance due in part to lowered cash crop prices, especially for vanilla.
Fiscal policy is constrained by erratic fiscal revenues, a bloated civil service wage bill, and an external debt that is far above the HIPC threshold. Membership in the franc zone, the main anchor of stability, has nevertheless helped contain pressures on domestic prices.
The Comoros has an inadequate transportation system, a young and rapidly increasing population, and few natural resources. The low educational level of the labour force contributes to a subsistence level of economic activity, high unemployment, and a heavy dependence on foreign grants and technical assistance. Agriculture contributes 40% to GDP and provides most of the exports.
The government is struggling to upgrade education and technical training, to privatise commercial and industrial enterprises, to improve health services, to diversify exports, to promote tourism, and to reduce the high population growth rate.
The Comoros is a member of the Organization for the Harmonization of Business Law in Africa (OHADA).
Demographics
With about 850,000 residents, the Comoros is one of the least-populous countries in the world, but its population density is high, with an average of . In 2001, 34% of the population was considered urban, but the urban population has since grown; in recent years rural population growth has been negative, while overall population growth is still relatively high. In 1958 the population was 183,133.
Almost half the population of the Comoros is under the age of 15. Major urban centres include Moroni, Mitsamihuli, Fumbuni, Mutsamudu, Domoni, and Fomboni. There are between 200,000 and 350,000 Comorians in France.
Ethnic groups
The islands of the Comoros are 97.1% ethnically Comorian, which is a mixture of Bantu, Malagasy, and Arab people. Minorities include Makua and Indian (mostly Ismaili). There are recent immigrants of Chinese origin in Grande Comore (especially Moroni). Although most French left after independence in 1975, a small Creole community, descended from settlers from France, Madagascar and Réunion, lives in the Comoros.
Languages
The most common languages in the Comoros are the Comorian languages, collectively known as Shikomori. They are related to Swahili, and each of the four variants (Shingazidja, Shimwali, Shindzwani and Shimaore) is spoken on one of the four islands. Both the Arabic and Latin scripts are used, Arabic being the more widespread, and an official orthography has recently been developed for the Latin script.
Arabic and French are also official languages, along with Comorian. Arabic is widely known as a second language, being the language of Quranic teaching. French is the administrative language and the language of most non-Quranic formal education.
Religion
Sunni Islam is the dominant religion, followed by as much as 99% of the population. Comoros is the only Muslim-majority country in Southern Africa and the third southernmost Muslim-majority territory after Mayotte and the Australian territory of Cocos Islands.
A minority of the population of the Comoros is Christian; both Catholic and Protestant denominations are represented, and most Malagasy residents are also Christian. Immigrants from metropolitan France are mostly Catholic.
Health
There are 15 physicians per 100,000 people. The fertility rate was 4.7 per adult woman in 2004. Life expectancy at birth is 67 for females and 62 for males.
Education
Almost all children attend Quranic schools, usually before, although increasingly in tandem with, regular schooling. There children are taught about the Qur'an, memorise it, and learn the Arabic script. Most parents prefer their children to attend Quranic schools before moving on to the French-based schooling system. Although the state sector is plagued by a lack of resources, and the teachers by unpaid salaries, there are numerous private and community schools of relatively good standard. The national curriculum, apart from a few years during the revolutionary period immediately after independence, has been very much based on the French system, both because resources are French and because most Comorians hope to go on to further education in France. There have recently been moves to Comorianise the syllabus and to integrate the two systems, the formal schools and the Quranic schools, into one, thus moving away from the secular educational system inherited from France.
Pre-colonization education systems in Comoros focused on necessary skills such as agriculture, caring for livestock and completing household tasks. Religious education also taught children the virtues of Islam. The education system underwent a transformation during colonization in the early 1900s which brought secular education based on the French system. This was mainly for children of the elite. After Comoros gained independence in 1975, the education system changed again. Funding for teachers' salaries was lost, and many went on strike. Thus, the public education system was not functioning between 1997 and 2001. Since gaining independence, the education system has also undergone a democratization and options exist for those other than the elite. Enrollment has also grown.
In 2000, 44.2% of children aged 5 to 14 years were attending school. There is a general lack of facilities, equipment, qualified teachers, textbooks and other resources. Salaries for teachers are often so far in arrears that many refuse to work.
Prior to 2000, students seeking a university education had to attend school outside of the country. However, in the early 2000s a university was created in the country. This served to help economic growth and to fight the "flight" of many educated people who were not returning to the islands to work.
Comorian has no native script, but both the Arabic and Latin alphabets are used. In 2004, about 57 percent of the population was literate in the Latin script while more than 90 percent were literate in the Arabic script.
Culture
Traditionally, women on Ndzwani wear red and white patterned garments called shiromani, while on Ngazidja and Mwali colourful shawls called leso are worn. Many women apply a paste of ground sandalwood and coral called msindzano to their faces. Traditional male clothing is a long white shirt known as a nkandu, and a bonnet called a kofia.
Marriage
There are two types of marriages in Comoros, the little marriage (known as Mna daho on Ngazidja) and the customary marriage (known as ada on Ngazidja, harusi on the other islands). The little marriage is a simple legal marriage. It is small, intimate, and inexpensive, and the bride's dowry is nominal. A man may undertake a number of Mna daho marriages in his lifetime, often at the same time, a woman fewer; but both men and women will usually only undertake one ada, or grand marriage, and this must generally be within the village. The hallmarks of the grand marriage are dazzling gold jewelry, two weeks of celebration and an enormous bridal dowry. Although the expenses are shared between both families as well as with a wider social circle, an ada wedding on Ngazidja can cost up to €50,000. Many couples take a lifetime to save for their ada, and it is not uncommon for a marriage to be attended by a couple's adult children.
The ada marriage marks a man's transition in the Ngazidja age system from youth to elder. His status in the social hierarchy greatly increases, and he will henceforth be entitled to speak in public and participate in the political process, both in his village and more widely across the island. He will be entitled to display his status by wearing a mharuma, a type of shawl, across his shoulders, and he can enter the mosque by the door reserved for elders, and sit at the front. A woman's status also changes, although less formally, as she becomes a "mother" and moves into her own house. The system is less formalised on the other islands, but the marriage is nevertheless a significant and costly event across the archipelago. The ada is often criticized because of its great expense, but at the same time it is a source of social cohesion and the main reason why migrants in France and elsewhere continue to send money home. Increasingly, marriages are also being taxed for the purposes of village development.
Kinship and social structure
Comorian society has a bilateral descent system. Lineage membership and inheritance of immovable goods (land, housing) is matrilineal, passed in the maternal line, similar to many Bantu peoples who are also matrilineal, while other goods and patronymics are passed in the male line. However, there are differences between the islands, the matrilineal element being stronger on Ngazidja.
Music
Twarab music, imported from Zanzibar in the early 20th century, remains the most influential genre on the islands and is popular at ada marriages.
Media
There are two daily national newspapers published in the Comoros, the government-owned Al-Watwan, and the privately owned La Gazette des Comores, both published in Moroni. There are a number of smaller newsletters published on an irregular basis as well as a variety of news websites. The government-owned ORTC (Office de Radio et Télévision des Comores) provides national radio and television service. There is a TV station run by the Anjouan regional government, and regional governments on the islands of Grande Comore and Anjouan each operate a radio station. There are also a few independent and small community radio stations that operate on the islands of Grande Comore and Mohéli, and these two islands have access to Mayotte Radio and French TV.
See also
Index of Comoros-related articles
Notes
References
Citations
Sources
This article incorporates text from the Library of Congress Country Studies, which is in the public domain.
External links
Union des Comores – Official government website
Tourism website
Embassy des Comores – The Federal and Islamic Republic of the Comoros in New York, United States
Comoros from the BBC News
Key Development Forecasts for Comoros from International Futures
Countries in Africa
1975 establishments in Africa
Countries and territories where Arabic is an official language
Comoros archipelago
East African countries
Federal republics
French-speaking countries and territories
Island countries of the Indian Ocean
Island countries
Least developed countries
Member states of the African Union
Member states of the Arab League
Member states of the Organisation internationale de la Francophonie
Member states of the Organisation of Islamic Cooperation
Member states of the United Nations
Small Island Developing States
States and territories established in 1503
States and territories established in 1975
5405 | https://en.wikipedia.org/wiki/China | China
China, officially the People's Republic of China (PRC), is a country in East Asia. It is the world's second-most-populous country, with a population exceeding 1.4 billion. China spans the equivalent of five time zones and borders fourteen countries by land, tied with Russia as having the most of any country in the world. With an area of nearly , it is the world's third-largest country by total land area. The country is divided into 22 provinces, five autonomous regions, four municipalities, and two semi-autonomous special administrative regions. The national capital is Beijing, and the most populous city and largest financial center is Shanghai.
The region that is now China has been inhabited since the Paleolithic era. The earliest Chinese dynastic states, such as the Shang and the Zhou, emerged in the basin of the Yellow River before the late second millennium BCE. The eighth to third centuries BCE saw a breakdown in Zhou authority and significant conflict, as well as the emergence of Classical Chinese literature and philosophy. In 221 BCE, China was unified under an emperor, ushering in more than two millennia in which China was governed by one or more imperial dynasties, including the Han, Tang, Ming and Qing. Some of China's most notable achievements—such as the invention of gunpowder and paper, the establishment of the Silk Road, and the building of the Great Wall—occurred during this period. The Chinese culture—including languages, traditions, architecture, philosophy and more—has heavily influenced East Asia during this imperial period.
In 1912, the Chinese monarchy was overthrown and the Republic of China established. The Republic saw consistent conflict for most of the mid-20th century, including a civil war between the Kuomintang government and the Chinese Communist Party (CCP), which began in 1927, as well as the Second Sino-Japanese War that began in 1937 and continued until 1945, therefore becoming involved in World War II. The latter led to a temporary stop in the civil war and numerous Japanese atrocities such as the Nanjing Massacre, which continue to influence China–Japan relations. In 1949, the CCP established control over China as the Kuomintang fled to Taiwan. Early communist rule saw two major projects: the Great Leap Forward, which resulted in a sharp economic decline and massive famine; and the Cultural Revolution, a movement to purge all non-communist elements of Chinese society that led to mass violence and persecution. Beginning in 1978, the Chinese government began economic reforms that moved the country away from planned economics, but political reforms were cut short by the 1989 Tiananmen Square protests, which ended in a massacre. Despite the event, the economic reform continued to strengthen the nation's economy in the following decades while raising China's standard of living significantly.
China is a unitary one-party socialist republic led by the CCP. It is one of the five permanent members of the UN Security Council and a founding member of several multilateral and regional organizations such as the Asian Infrastructure Investment Bank, the Silk Road Fund, the New Development Bank, and the RCEP. It is also a member of the BRICS, the G20, APEC, and the East Asia Summit. China ranks poorly in measures of democracy, transparency, and human rights, including for press freedom, religious freedom, and ethnic equality. Making up around one-fifth of the world economy, China is the world's largest economy by GDP at purchasing power parity, the second-largest economy by nominal GDP, and the second-wealthiest country. The country is one of the fastest-growing major economies and is the world's largest manufacturer and exporter, as well as the second-largest importer. China is a nuclear-weapon state with the world's largest standing army by military personnel and the second-largest defense budget.
Etymology
The word "China" has been used in English since the 16th century; however, it was not a word used by the Chinese themselves during this period. Its origin has been traced through Portuguese, Malay, and Persian back to the Sanskrit word Cīna, used in ancient India. "China" appears in Richard Eden's 1555 translation of the 1516 journal of the Portuguese explorer Duarte Barbosa. Barbosa's usage was derived from Persian Chīn (), which was in turn derived from Sanskrit Cīna (). Cīna was first used in early Hindu scripture, including the Mahābhārata (5th century BCE) and the Laws of Manu (2nd century BCE). In 1655, Martino Martini suggested that the word China is derived ultimately from the name of the Qin dynasty (221–206 BCE). Although usage in Indian sources precedes this dynasty, this derivation is still given in various sources. The origin of the Sanskrit word is a matter of debate, according to the Oxford English Dictionary.
Alternative suggestions include the names for Yelang and the Jing or Chu state. The official name of the modern state is the "People's Republic of China". The shorter form is "China", from Zhongguo, a compound of zhong ("central") and guo ("state"), a term which developed under the Western Zhou dynasty in reference to its royal demesne. It was then applied to the area around Luoyi (present-day Luoyang) during the Eastern Zhou and then to China's Central Plain before being used in official documents as a synonym for the state under the Qing. It was sometimes also used as a cultural concept to distinguish the Huaxia people from perceived "barbarians". The name Zhongguo is also translated as "Middle Kingdom" in English. China (PRC) is sometimes referred to as the Mainland when distinguishing the ROC from the PRC.
History
Prehistory
China is regarded as one of the world's oldest civilizations. Archaeological evidence suggests that early hominids inhabited the country 2.25 million years ago. The hominid fossils of Peking Man, a Homo erectus who used fire, were discovered in a cave at Zhoukoudian near Beijing; they have been dated to between 680,000 and 780,000 years ago. The fossilized teeth of Homo sapiens (dated to 125,000–80,000 years ago) have been discovered in Fuyan Cave in Dao County, Hunan. Chinese proto-writing existed in Jiahu around 6600 BCE, at Damaidi around 6000 BCE, Dadiwan from 5800 to 5400 BCE, and Banpo dating from the 5th millennium BCE. Some scholars have suggested that the Jiahu symbols (7th millennium BCE) constituted the earliest Chinese writing system.
Early dynastic rule
According to Chinese tradition, the first dynasty was the Xia, which emerged around 2100 BCE. The Xia dynasty marked the beginning of China's political system based on hereditary monarchies, or dynasties, which lasted for a millennium. The Xia dynasty was considered mythical by historians until scientific excavations found early Bronze Age sites at Erlitou, Henan in 1959. It remains unclear whether these sites are the remains of the Xia dynasty or of another culture from the same period. The succeeding Shang dynasty is the earliest to be confirmed by contemporary records. The Shang ruled the plain of the Yellow River in eastern China from the 17th to the 11th century BCE. Their oracle bone script (from BCE) represents the oldest form of Chinese writing yet found and is a direct ancestor of modern Chinese characters.
The Shang was conquered by the Zhou, who ruled between the 11th and 5th centuries BCE, though centralized authority was slowly eroded by feudal warlords. Some principalities eventually emerged from the weakened Zhou, no longer fully obeyed the Zhou king, and continually waged war with each other during the 300-year Spring and Autumn period. By the time of the Warring States period of the 5th–3rd centuries BCE, there were seven major powerful states left.
Imperial China
The Warring States period ended in 221 BCE after the state of Qin conquered the other six kingdoms, reunited China and established the dominant order of autocracy. King Zheng of Qin proclaimed himself the First Emperor of the Qin dynasty. He enacted Qin's legalist reforms throughout China, notably the forced standardization of Chinese characters, measurements, road widths (i.e., the cart axles' length), and currency. His dynasty also conquered the Yue tribes in Guangxi, Guangdong, and Northern Vietnam. The Qin dynasty lasted only fifteen years, falling soon after the First Emperor's death, as his harsh authoritarian policies led to widespread rebellion.
Following a widespread civil war during which the imperial library at Xianyang was burned, the Han dynasty emerged to rule China between 206 BCE and CE 220, creating a cultural identity among its populace still remembered in the ethnonym of the modern Han Chinese. The Han expanded the empire's territory considerably, with military campaigns reaching Central Asia, Mongolia, South Korea, and Yunnan, and the recovery of Guangdong and northern Vietnam from Nanyue. Han involvement in Central Asia and Sogdia helped establish the land route of the Silk Road, replacing the earlier path over the Himalayas to India. Han China gradually became the largest economy of the ancient world. Despite the Han's initial decentralization and the official abandonment of the Qin philosophy of Legalism in favor of Confucianism, Qin's legalist institutions and policies continued to be employed by the Han government and its successors.
After the end of the Han dynasty, a period of strife known as Three Kingdoms followed, whose central figures were later immortalized in one of the Four Classics of Chinese literature. At its end, Wei was swiftly overthrown by the Jin dynasty. The Jin fell to civil war upon the ascension of a developmentally disabled emperor; the Five Barbarians then invaded and ruled northern China as the Sixteen States. The Xianbei unified them as the Northern Wei, whose Emperor Xiaowen reversed his predecessors' apartheid policies and enforced a drastic sinification on his subjects, largely integrating them into Chinese culture. In the south, the general Liu Yu secured the abdication of the Jin in favor of the Liu Song. The various successors of these states became known as the Northern and Southern dynasties, with the two areas finally reunited by the Sui in 581. The Sui restored the Han to power through China, reformed its agriculture, economy and imperial examination system, constructed the Grand Canal, and patronized Buddhism. However, they fell quickly when their conscription for public works and a failed war in northern Korea provoked widespread unrest.
Under the succeeding Tang and Song dynasties, Chinese economy, technology, and culture entered a golden age. The Tang dynasty retained control of the Western Regions and the Silk Road, which brought traders to as far as Mesopotamia and the Horn of Africa, and made the capital Chang'an a cosmopolitan urban center. However, it was devastated and weakened by the An Lushan Rebellion in the 8th century. In 907, the Tang disintegrated completely when the local military governors became ungovernable. The Song dynasty ended the separatist situation in 960, leading to a balance of power between the Song and Khitan Liao. The Song was the first government in world history to issue paper money and the first Chinese polity to establish a permanent standing navy which was supported by the developed shipbuilding industry along with the sea trade.
Between the 10th and 11th century CE, the population of China doubled in size to around 100 million people, mostly because of the expansion of rice cultivation in central and southern China, and the production of abundant food surpluses. The Song dynasty also saw a revival of Confucianism, in response to the growth of Buddhism during the Tang, and a flourishing of philosophy and the arts, as landscape art and porcelain were brought to new levels of maturity and complexity. However, the military weakness of the Song army was observed by the Jurchen Jin dynasty. In 1127, Emperor Huizong of Song and the capital Bianjing were captured during the Jin–Song Wars. The remnants of the Song retreated to southern China.
The Mongol conquest of China began in 1205 with the gradual conquest of Western Xia by Genghis Khan, who also invaded Jin territories. In 1271, the Mongol leader Kublai Khan established the Yuan dynasty, which conquered the last remnant of the Song dynasty in 1279. Before the Mongol invasion, the population of Song China was 120 million citizens; this was reduced to 60 million by the time of the census in 1300. A peasant named Zhu Yuanzhang led a rebellion that overthrew the Yuan in 1368 and founded the Ming dynasty as the Hongwu Emperor. Under the Ming dynasty, China enjoyed another golden age, developing one of the strongest navies in the world and a rich and prosperous economy amid a flourishing of art and culture. It was during this period that admiral Zheng He led the Ming treasure voyages throughout the Indian Ocean, reaching as far as East Africa.
In the early years of the Ming dynasty, China's capital was moved from Nanjing to Beijing. With the budding of capitalism, philosophers such as Wang Yangming further critiqued and expanded Neo-Confucianism with concepts of individualism and equality of four occupations. The scholar-official stratum became a supporting force of industry and commerce in the tax boycott movements, which, together with the famines and defense against Japanese invasions of Korea (1592–1598) and Later Jin incursions led to an exhausted treasury. In 1644, Beijing was captured by a coalition of peasant rebel forces led by Li Zicheng. The Chongzhen Emperor committed suicide when the city fell. The Manchu Qing dynasty, then allied with Ming dynasty general Wu Sangui, overthrew Li's short-lived Shun dynasty and subsequently seized control of Beijing, which became the new capital of the Qing dynasty.
The Qing dynasty, which lasted from 1644 until 1912, was the last imperial dynasty of China. The Ming-Qing transition (1618–1683) cost 25 million lives in total, but the Qing appeared to have restored China's imperial power and inaugurated another flowering of the arts. After the Southern Ming ended, the further conquest of the Dzungar Khanate added Mongolia, Tibet and Xinjiang to the empire. Meanwhile, China's population growth resumed and shortly began to accelerate. It is commonly agreed that pre-modern China's population experienced two growth spurts, one during the Northern Song period (960-1127), and other during the Qing period (around 1700–1830). By the High Qing era China was possibly the most commercialized country in the world, and imperial China experienced a second commercial revolution in the economic history of China by the end of the 18th century. On the other hand, the centralized autocracy was strengthened in part to suppress anti-Qing sentiment with the policy of valuing agriculture and restraining commerce, like the Haijin during the early Qing period and ideological control as represented by the literary inquisition, causing some social and technological stagnation.
Fall of the Qing dynasty
In the mid-19th century, the Qing dynasty experienced Western imperialism in the Opium Wars with Britain and France. China was forced to pay compensation, open treaty ports, allow extraterritoriality for foreign nationals, and cede Hong Kong to the British under the 1842 Treaty of Nanking, the first of the Unequal Treaties. The First Sino-Japanese War (1894–1895) resulted in Qing China's loss of influence in the Korean Peninsula, as well as the cession of Taiwan to Japan.
The Qing dynasty also began experiencing internal unrest in which tens of millions of people died, especially in the White Lotus Rebellion, the failed Taiping Rebellion that ravaged southern China in the 1850s and 1860s and the Dungan Revolt (1862–1877) in the northwest. The initial success of the Self-Strengthening Movement of the 1860s was frustrated by a series of military defeats in the 1880s and 1890s.
In the 19th century, the great Chinese diaspora began. Losses due to emigration were added to by conflicts and catastrophes such as the Northern Chinese Famine of 1876–1879, in which between 9 and 13 million people died. The Guangxu Emperor drafted a reform plan in 1898 to establish a modern constitutional monarchy, but these plans were thwarted by the Empress Dowager Cixi. The ill-fated anti-foreign Boxer Rebellion of 1899–1901 further weakened the dynasty. Although Cixi sponsored a program of reforms known as the late Qing reforms, the Xinhai Revolution of 1911–1912 brought an end to the Qing dynasty and established the Republic of China. Puyi, the last Emperor of China, abdicated in 1912.
Establishment of the Republic and World War II
On 1 January 1912, the Republic of China was established, and Sun Yat-sen of the Kuomintang (the KMT or Nationalist Party) was proclaimed provisional president. On 12 February 1912, regent Empress Dowager Longyu sealed the imperial abdication decree on behalf of 4 year old Puyi, the last emperor of China, ending 5,000 years of monarchy in China. In March 1912, the presidency was given to Yuan Shikai, a former Qing general who in 1915 proclaimed himself Emperor of China. In the face of popular condemnation and opposition from his own Beiyang Army, he was forced to abdicate and re-establish the republic in 1916.
After Yuan Shikai's death in 1916, China was politically fragmented. Its Beijing-based government was internationally recognized but virtually powerless; regional warlords controlled most of its territory. In the late 1920s, the Kuomintang under Chiang Kai-shek, the then Principal of the Republic of China Military Academy, was able to reunify the country under its own control with a series of deft military and political maneuverings, known collectively as the Northern Expedition. The Kuomintang moved the nation's capital to Nanjing and implemented "political tutelage", an intermediate stage of political development outlined in Sun Yat-sen's San-min program for transforming China into a modern democratic state. The political division in China made it difficult for Chiang to battle the communist-led People's Liberation Army (PLA), against whom the Kuomintang had been warring since 1927 in the Chinese Civil War. This war continued successfully for the Kuomintang, especially after the PLA retreated in the Long March, until Japanese aggression and the 1936 Xi'an Incident forced Chiang to confront Imperial Japan.
The Second Sino-Japanese War (1937–1945), a theater of World War II, forced an uneasy alliance between the Kuomintang and the Communists. Japanese forces committed numerous war atrocities against the civilian population; in all, as many as 20 million Chinese civilians died. An estimated 40,000 to 300,000 Chinese were massacred in the city of Nanjing alone during the Japanese occupation. During the war, China, along with the UK, the United States, and the Soviet Union, were referred to as "trusteeship of the powerful" and were recognized as the Allied "Big Four" in the Declaration by United Nations. Along with the other three great powers, China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. After the surrender of Japan in 1945, Taiwan, including the Pescadores, was handed over to Chinese control. However, the validity of this handover is controversial: whether Taiwan's sovereignty was legally transferred, and whether China is a legitimate recipient, remain disputed owing to complex issues that arose from the handling of Japan's surrender. China emerged victorious but war-ravaged and financially drained. The continued distrust between the Kuomintang and the Communists led to the resumption of civil war. Constitutional rule was established in 1947, but because of the ongoing unrest, many provisions of the ROC constitution were never implemented in mainland China.
Civil War and the People's Republic
Before the existence of the People's Republic, the CCP had declared several areas of the country as the Chinese Soviet Republic (Jiangxi Soviet), a predecessor state to the PRC, in November 1931 in Ruijin, Jiangxi. The Jiangxi Soviet was wiped out by the KMT armies in 1934 and was relocated to Yan'an in Shaanxi where the Long March concluded in 1935. It would be the base of the communists before major combat in the Chinese Civil War ended in 1949. Afterwards, the CCP took control of most of mainland China, and the Kuomintang retreated offshore to Taiwan, reducing its territory to only Taiwan, Hainan, and their surrounding islands.
On 1 October 1949, CCP Chairman Mao Zedong formally proclaimed the establishment of the People's Republic of China at the new nation's founding ceremony and inaugural military parade in Tiananmen Square, Beijing. In 1950, the People's Liberation Army captured Hainan from the ROC and annexed Tibet. However, remaining Kuomintang forces continued to wage an insurgency in western China throughout the 1950s.
The government consolidated its popularity among the peasants through the Land Reform Movement, which included the execution of between 1 and 2 million landlords. China developed an independent industrial system and its own nuclear weapons. The Chinese population increased from 550 million in 1950 to 900 million in 1974. However, the Great Leap Forward, an idealistic massive industrialization project, resulted in an estimated 15 to 55 million deaths between 1959 and 1961, mostly from starvation. In 1964, China's first atomic bomb exploded successfully. In 1966, Mao and his allies launched the Cultural Revolution, sparking a decade of political recrimination and social upheaval that lasted until Mao's death in 1976. In October 1971, the PRC replaced the Republic of China in the United Nations, and took its seat as a permanent member of the Security Council. This UN action also created the problem of the political status of Taiwan and the Two Chinas issue.
Reforms and contemporary history
After Mao's death, the Gang of Four was quickly arrested by Hua Guofeng and held responsible for the excesses of the Cultural Revolution. Deng Xiaoping took power in 1978, and instituted large-scale political and economic reforms, together with the "Eight Elders", CCP members who held huge influence during this time. The CCP loosened governmental control over citizens' personal lives, and the communes were gradually disbanded in favor of working contracted to households. The Cultural Revolution was also rebuked, with millions of its victims being rehabilitated. Agricultural collectivization was dismantled and farmlands privatized, while foreign trade became a major new focus, leading to the creation of special economic zones (SEZs). Inefficient state-owned enterprises (SOEs) were restructured and unprofitable ones were closed outright, resulting in massive job losses. This marked China's transition from a planned economy to a mixed economy with an increasingly open-market environment. China adopted its current constitution on 4 December 1982.
In 1989, the country saw large pro-democracy protests, eventually leading to the Tiananmen Square massacre by the leadership, bringing condemnations and sanctions against the Chinese government from various foreign countries, though the effect on external relations was short-lived. Jiang Zemin, Party secretary of Shanghai at the time, was selected to replace Zhao Ziyang as the CCP general secretary; Zhao was put under house arrest for his sympathies to the protests. Jiang later additionally took the presidency and Central Military Commission chairmanship posts, effectively becoming China's top leader. Li Peng, who was instrumental in the crackdown, remained premier until 1998, after which Zhu Rongji became the premier. Under their administration, China continued economic reforms, further closing many SOEs and massively trimming down "iron rice bowl"; occupations with guaranteed job security. During Jiang's rule, China's economy grew sevenfold, and its performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. British Hong Kong and Portuguese Macau returned to China in 1997 and 1999, respectively, as the Hong Kong and Macau special administrative regions under the principle of one country, two systems. The country joined the World Trade Organization in 2001.
Between 2002 and 2003, Hu Jintao and Wen Jiabao succeeded Jiang and Zhu as paramount leader and premier respectively; Jiang attempted to remain CMC chairman for longer before giving up the post entirely between 2004 and 2005. Under Hu and Wen, China maintained its high rate of economic growth, overtaking the United Kingdom, France, Germany and Japan to become the world's second-largest economy. However, the growth also severely impacted the country's resources and environment, and caused major social displacement. Hu and Wen also took a relatively more conservative approach towards economic reform, expanding support for SOEs. Additionally, under Hu, China hosted the Beijing Olympics in 2008.
Xi Jinping and Li Keqiang succeeded Hu and Wen as paramount leader and premier respectively between 2012 and 2013; Li Keqiang was later succeeded by Li Qiang in 2023. Shortly after his ascension to power, Xi launched a vast anti-corruption crackdown that prosecuted more than 2 million officials by 2022. By heading many new Central Leading Groups that bypass the traditional bureaucracy, Xi consolidated power further than his predecessors. Xi has also pursued changes to China's economy, supporting SOEs and making the eradication of extreme poverty through "targeted poverty alleviation" a key goal. In 2013, Xi launched the Belt and Road Initiative, a global infrastructure investment project. Xi has also taken a more assertive stance on foreign and security issues. Since 2017, the Chinese government has been engaged in a harsh crackdown in Xinjiang, with an estimated one million people, mostly Uyghurs but including other ethnic and religious minorities, detained in internment camps. The National People's Congress in 2018 amended the constitution to remove the two-term limit on the presidency, allowing for a third and further terms. In 2020, the Standing Committee of the National People's Congress (NPCSC) passed a national security law that gives the Hong Kong government wide-ranging tools to crack down on dissent. From December 2019 to December 2022, the COVID-19 pandemic led the government to enforce strict public health measures intended to completely eradicate the virus, a goal that was eventually abandoned after protests against the policy in 2022.
Geography
China's landscape is vast and diverse, ranging from the Gobi and Taklamakan Deserts in the arid north to the subtropical forests in the wetter south. The Himalaya, Karakoram, Pamir and Tian Shan mountain ranges separate China from much of South and Central Asia. The Yangtze and Yellow Rivers, the third- and sixth-longest in the world, respectively, run from the Tibetan Plateau to the densely populated eastern seaboard. China has a long coastline along the Pacific Ocean, bounded by the Bohai, Yellow, East China and South China seas. China connects through the Kazakh border to the Eurasian Steppe, which has been an artery of communication between East and West since the Neolithic through the Steppe Route – the ancestor of the terrestrial Silk Road(s).
The territory of China lies between latitudes 18° and 54° N, and longitudes 73° and 135° E. The geographical center of China is marked by the Center of the Country Monument. China's landscapes vary significantly across its vast territory. In the east, along the shores of the Yellow Sea and the East China Sea, there are extensive and densely populated alluvial plains, while on the edges of the Inner Mongolian plateau in the north, broad grasslands predominate. Southern China is dominated by hills and low mountain ranges, while the central-east hosts the deltas of China's two major rivers, the Yellow River and the Yangtze River. Other major rivers include the Xi, Mekong, Brahmaputra and Amur. To the west sit major mountain ranges, most notably the Himalayas. High plateaus feature among the more arid landscapes of the north, such as the Taklamakan and the Gobi Desert. The world's highest point, Mount Everest (8,848 m), lies on the Sino-Nepalese border. The country's lowest point, and the world's third-lowest, is the dried lake bed of Ayding Lake (−154 m) in the Turpan Depression.
Climate
China's climate is mainly dominated by dry seasons and wet monsoons, which lead to pronounced temperature differences between winter and summer. In the winter, northern winds coming from high-latitude areas are cold and dry; in summer, southern winds from coastal areas at lower latitudes are warm and moist.
A major environmental issue in China is the continued expansion of its deserts, particularly the Gobi Desert. Although barrier tree lines planted since the 1970s have reduced the frequency of sandstorms, prolonged drought and poor agricultural practices have resulted in dust storms plaguing northern China each spring, which then spread to other parts of East Asia, including Japan and Korea. China's environmental watchdog, SEPA, stated in 2007 that China was losing a substantial amount of land to desertification each year. Water quality, erosion, and pollution control have become important issues in China's relations with other countries. Melting glaciers in the Himalayas could potentially lead to water shortages for hundreds of millions of people. According to academics, limiting climate change in China will require electricity generation from coal without carbon capture to be phased out by 2045. With current policies, China's greenhouse gas emissions will probably peak in 2025 and return to 2022 levels by 2030; however, such a pathway would still lead to a temperature rise of about 3 degrees. Official government statistics about Chinese agricultural productivity are considered unreliable, due to exaggeration of production at subsidiary government levels. Much of China has a climate very suitable for agriculture, and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant, grapes, watermelon, spinach, and many other crops.
Biodiversity
China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country signed the Rio de Janeiro Convention on Biological Diversity on 11 June 1992, and became a party to the convention on 5 January 1993. It later produced a National Biodiversity Strategy and Action Plan, with one revision that was received by the convention on 21 September 2010.
China is home to at least 551 species of mammals (the third-highest such number in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, the world's largest population of humans. At least 840 animal species are threatened, vulnerable or in danger of local extinction in China, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and ingredients for traditional Chinese medicine. Endangered wildlife is protected by law, and the country has over 2,349 nature reserves, covering a total area of 149.95 million hectares, or 15 percent of China's total land area. Most wild animals have been eliminated from the core agricultural regions of east and central China, but they have fared better in the mountainous south and west. The Baiji was confirmed extinct on 12 December 2006.
China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and Asian black bear, along with over 120 bird species. The understory of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which predominate in central and southern China, support a high density of plant species, including numerous rare endemics. Tropical and seasonal rainforests, though confined to Yunnan and Hainan, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi, of which nearly 6,000 are higher fungi.
Environment
Since the early 2000s, China has suffered from environmental deterioration and pollution due to its rapid pace of industrialization. Regulations such as the 1979 Environmental Protection Law are fairly stringent, but they are poorly enforced, as they are frequently disregarded by local communities and government officials in favor of rapid economic development. China has the second-highest death toll due to air pollution, after India, with approximately 1 million deaths caused by exposure to ambient air pollution. Although China ranks as the highest CO2-emitting country in the world, it emits only 8 tons of CO2 per capita, significantly lower than developed countries such as the United States (16.1), Australia (16.8) and South Korea (13.6). Greenhouse gas emissions by China are the world's largest.
In recent years, China has clamped down on pollution. In March 2014, CCP General Secretary Xi Jinping "declared war" on pollution during the opening of the National People's Congress. In 2020, Xi announced that China aims to peak emissions before 2030 and go carbon-neutral by 2060 in accordance with the Paris Agreement; according to Climate Action Tracker, if accomplished, this would lower the expected rise in global temperature by 0.2 to 0.3 degrees – "the biggest single reduction ever estimated by the Climate Action Tracker". In September 2021, Xi Jinping announced that China would not build "coal-fired power projects abroad", a decision that could be "pivotal" in reducing emissions; the Belt and Road Initiative had already stopped financing such projects in the first half of 2021.
The country also has significant water pollution problems; only 84.8% of China's national surface water was graded between Grade I and III by the Ministry of Ecology and Environment in 2021, the grades indicating suitability for human consumption. China had a 2018 Forest Landscape Integrity Index mean score of 7.14/10, ranking it 53rd globally out of 172 countries. In 2020, a sweeping law was passed by the Chinese government to protect the ecology of the Yangtze River. The new laws include strengthening ecological protection rules for hydropower projects along the river, banning chemical plants within 1 kilometer of the river, relocating polluting industries, severely restricting sand mining, and a complete fishing ban on all the natural waterways of the river, including all its major tributaries and lakes.
China is also the world's leading investor in renewable energy and its commercialization, with $546 billion invested in 2022; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. In 2022, 61.2% of China's electricity came from coal (largest producer in the world), 14.9% from hydroelectric power (largest), 9.3% from wind (largest), 4.7% from solar energy (largest), 4.7% from nuclear energy (second-largest), 3.1% from natural gas (fifth-largest), and 1.9% from bioenergy (largest); in total, 30.8% of China's energy came from renewable energy sources. Despite its emphasis on renewables, China remains deeply connected to global oil markets and, alongside India, was among the largest importers of Russian crude oil in 2022.
Political geography
The People's Republic of China is the second-largest country in the world by land area after Russia, and the third- or fourth-largest country in the world by total area. China's total area is generally stated as being approximately 9.6 million square kilometres, though the specific figures given by sources such as the Encyclopædia Britannica, the UN Demographic Yearbook, and The World Factbook differ slightly.
China has the longest combined land border in the world, and its coastline stretches from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations and covers the bulk of East Asia, bordering Vietnam, Laos, and Myanmar in Southeast Asia; India, Bhutan, Nepal, Pakistan and Afghanistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. It is narrowly separated from Bangladesh and Thailand to the southwest and south, and has several maritime neighbors, including Japan, the Philippines, Malaysia, and Indonesia.
Politics
The People's Republic of China is a one-party state governed by the Marxist–Leninist Chinese Communist Party (CCP). This makes China one of the world's last countries governed by a communist party. The Chinese constitution states that the PRC "is a socialist state governed by a people's democratic dictatorship that is led by the working class and based on an alliance of workers and peasants," and that the state institutions "shall practice the principle of democratic centralism." The main body of the constitution also declares that "the defining feature of socialism with Chinese characteristics is the leadership of the Communist Party of China."
The PRC officially terms itself a democracy, using phrases such as "socialist consultative democracy" and "whole-process people's democracy". However, the country is commonly described as an authoritarian one-party state and a dictatorship, with among the heaviest restrictions worldwide in many areas, most notably against freedom of the press, freedom of assembly, reproductive rights, free formation of social organizations, freedom of religion and free access to the Internet. China has consistently been classified as an "authoritarian regime" and ranked amongst the lowest countries in the Economist Intelligence Unit's Democracy Index, placing 156th out of 167 countries in 2022.
Chinese Communist Party
According to the CCP constitution, the party's highest body is the National Congress, held every five years. The National Congress elects the Central Committee, which then elects the party's Politburo, Politburo Standing Committee and the general secretary (party leader), the top leadership of the country. The general secretary holds ultimate power and authority over state and government and serves as the informal paramount leader. The current general secretary is Xi Jinping, who took office on 15 November 2012. At the local level, the secretary of the CCP committee of a subdivision outranks the corresponding head of the local government: the CCP committee secretary of a provincial division outranks the governor, while the CCP committee secretary of a city outranks the mayor. The CCP is officially guided by "socialism with Chinese characteristics", which is Marxism adapted to Chinese circumstances.
Government
The government in China is under the sole control of the CCP, with the CCP constitution outlining the party as the "highest force for political leadership". The CCP controls appointments in government bodies, with most senior government officials being CCP members.
The National People's Congress (NPC), the nearly 3,000-member legislature, is constitutionally the "highest state organ of power", though it has also been described as a "rubber stamp" body. The NPC meets annually, while the NPC Standing Committee, an approximately 150-member body elected from NPC delegates, meets every couple of months. Elections are indirect and not pluralistic, with nominations at all levels being controlled by the CCP. The NPC is dominated by the CCP, with another eight minor parties having nominal representation under the condition of upholding CCP leadership.
The president is the ceremonial state representative, elected by the NPC. The incumbent president is Xi Jinping, who is also the general secretary of the CCP and the chairman of the Central Military Commission, making him China's paramount leader. The premier is the head of government, with Li Qiang being the incumbent premier. The premier is officially nominated by the president and then elected by the NPC, and has generally been either the second- or third-ranking member of the Politburo Standing Committee (PSC). The premier presides over the State Council, China's cabinet, composed of four vice premiers, state councilors, and the heads of ministries and commissions. The Chinese People's Political Consultative Conference (CPPCC) is a political advisory body that is a critical part of China's "united front" system, which aims to gather non-CCP voices to support the CCP. Like the people's congresses, CPPCCs exist at various levels of administrative division, with the National Committee of the CPPCC being chaired by Wang Huning, the fourth-ranking member of the PSC.
The governance of China is characterized by a high degree of political centralization but significant economic decentralization. The central government sets the strategic direction while local officials carry it out. Policy instruments and processes are often tested locally before being applied more widely, resulting in a policy process that involves experimentation and feedback. Generally, the high-level central leadership refrains from drafting specific policies at the outset, instead using informal networks and site visits to affirm or suggest changes to the direction of local policy experiments or pilot programs. The typical approach is that the central leadership begins drafting formal policies, laws, or regulations only after policy has been developed at local levels.
Administrative divisions
The PRC is constitutionally a unitary state officially divided into 23 provinces, five autonomous regions (each with a designated minority group), and four directly administered municipalities—collectively referred to as "mainland China"—as well as the special administrative regions (SARs) of Hong Kong and Macau. The PRC considers Taiwan to be its 23rd province, although it is governed by the Republic of China (ROC), which claims to be the legitimate representative of China and its territory, though it has downplayed this claim since its democratization. Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, Northeast China, East China, South Central China, Southwest China, and Northwest China.
Foreign relations
The PRC has diplomatic relations with 179 United Nations member states and maintains embassies in 174. Since 2019, China has had the largest diplomatic network in the world. In 1971, the PRC replaced the Republic of China (ROC) as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. It is a member of intergovernmental organizations including the G20, East Asia Summit, and APEC. China is a former member and leader of the Non-Aligned Movement, and still considers itself an advocate for developing countries. Along with Brazil, Russia, India and South Africa, China is a member of the BRICS group of emerging major economies and hosted the group's third official summit at Sanya, Hainan in April 2011.
The PRC officially maintains the one-China principle, which holds that there is only one sovereign state under the name of China, represented by the PRC, and that Taiwan is part of that China. The unique status of Taiwan has led countries that recognize the PRC to maintain their own "one-China policies", which differ from one another; some countries explicitly recognize the PRC's claim over Taiwan, while others, including the US and Japan, only acknowledge the claim. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Most countries have switched recognition from the ROC to the PRC since the latter replaced the former in the United Nations in 1971 as the sole representative of China.
Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, and is also driven by the concept of "harmony without uniformity", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support or maintain close ties with states that are regarded as dangerous or repressive by Western nations, such as Myanmar, North Korea and Iran. China has a close political, economic and military relationship with Russia, and the two states often vote in unison in the United Nations Security Council.
Trade relations
China became the world's largest trading nation in 2013 as measured by the sum of imports and exports, as well as the world's largest commodity importer, accounting for roughly 45% of the maritime dry-bulk market.
By 2016, China was the largest trading partner of 124 other countries. China is the largest trading partner for the ASEAN nations, with a total trade value of $669.2 billion in 2021, accounting for 20% of ASEAN's total trade; ASEAN is also China's largest trading partner. In 2020, China became the largest trading partner of the European Union for goods, with the total value of goods trade reaching nearly $700 billion. China, along with ASEAN, Japan, South Korea, Australia and New Zealand, is a member of the Regional Comprehensive Economic Partnership, the world's largest free-trade area covering 30% of the world's population and economic output. China became a member of the World Trade Organization (WTO) in 2001. In 2004, it proposed an entirely new East Asia Summit (EAS) framework as a forum for regional security issues. The EAS, which includes ASEAN Plus Three, India, Australia and New Zealand, held its inaugural summit in 2005.
China has had a long and complex trade relationship with the United States. In 2000, the United States Congress approved "permanent normal trade relations" (PNTR) with China, allowing Chinese exports in at the same low tariffs as goods from most other countries. China has a significant trade surplus with the United States, one of its most important export markets. Economists have argued that the renminbi is undervalued, due to currency intervention from the Chinese government, giving China an unfair trade advantage. The US and other foreign governments have also alleged that China does not respect intellectual property (IP) rights and steals IP through espionage operations; the US Department of Justice has said that 80% of the economic espionage prosecutions it brings involve conduct intended to benefit the Chinese state.
Since the turn of the century, China has followed a policy of engaging with African nations for trade and bilateral co-operation; in 2022, Sino-African trade totalled $282 billion, having grown more than 20 times over two decades. According to Madison Condon, "China finances more infrastructure projects in Africa than the World Bank and provides billions of dollars in low-interest loans to the continent's emerging economies." China maintains extensive and highly diversified trade links with the European Union, having become its largest trading partner for goods, with the total value of goods trade reaching nearly $700 billion. China has furthermore strengthened its trade ties with major South American economies, and is the largest trading partner of Brazil, Chile, Peru, Uruguay, Argentina, and several others.
In 2013, China initiated the Belt and Road Initiative (BRI), a large global infrastructure-building initiative with funding on the order of $50–100 billion per year. BRI could be one of the largest development plans in modern history. It has expanded significantly in the years since, and includes 138 countries and 30 international organizations. In addition to intensifying foreign policy relations, the focus is particularly on building efficient transport routes, especially the maritime Silk Road with its connections to East Africa and Europe; there are Chinese investments or related declarations of intent at numerous ports such as Gwadar, Kuantan, Hambantota, Piraeus and Trieste. However, many of the loans made under the Belt and Road program are unsustainable, and China has faced a number of calls for debt relief from debtor nations.
Territorial disputes
Ever since its establishment, the PRC has claimed the territories governed by the Republic of China (ROC), a separate political entity today commonly known as Taiwan, as a part of its territory. It regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province and islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province. These claims are controversial because of the complicated Cross-Strait relations, with the PRC treating the one-China principle as one of its most important diplomatic principles in dealing with other countries.
China has resolved its land borders with 12 of its 14 neighboring countries, having pursued substantial compromises in most of them. China currently has disputed land borders with India and Bhutan. China is additionally involved in maritime disputes with multiple countries over the ownership of several small islands in the East and South China Seas, such as Socotra Rock, the Senkaku Islands and the entirety of the South China Sea Islands, along with EEZ disputes in the East China Sea.
Sociopolitical issues and human rights
The situation of human rights in China has attracted significant criticism from a number of foreign governments, foreign press agencies, and non-governmental organizations, alleging widespread civil rights violations such as detention without trial, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. Since its inception, Freedom House has ranked China as "not free" in its Freedom in the World survey, while Amnesty International has documented significant human rights abuses. The Constitution of the People's Republic of China states that the "fundamental rights" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. China has limited protections regarding LGBT rights.
Although some criticisms of government policies and the ruling CCP are tolerated, censorship of political speech and information are amongst the harshest in the world and routinely used to prevent collective action. China also has the most comprehensive and sophisticated Internet censorship regime in the world, with numerous websites being blocked. The government suppresses popular protests and demonstrations that it considers a potential threat to "social stability", as was the case with the 1989 Tiananmen Square protests and massacre. China additionally uses a massive espionage network of cameras, facial recognition software, sensors, and surveillance of personal technology as a means of social control of persons living in the country.
China is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, where significant numbers of ethnic minorities reside, including violent police crackdowns and religious suppression. In Xinjiang, repression has significantly escalated since 2016, after which at least one million Uyghurs and other ethnic and religious minorities have been detained in internment camps aimed at changing the political thinking of detainees, their identities, and their religious beliefs. According to witnesses, actions including political indoctrination, torture, physical and psychological abuse, forced sterilization, sexual abuse, and forced labor are common in these facilities. According to a 2020 report, China's treatment of Uyghurs meets the UN definition of genocide, while a separate UN Human Rights Office report said the abuses could potentially meet the definition of crimes against humanity.
Global studies from Pew Research Center in 2014 and 2017 ranked the Chinese government's restrictions on religion as among the highest in the world, despite low to moderate rankings for religion-related social hostilities in the country. The Global Slavery Index estimated that in 2016 more than 3.8 million people were living in "conditions of modern slavery", or 0.25% of the population, including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed re-education through labor (laojiao) system was formally abolished in 2013, but it is not clear to what extent its various practices have stopped. The Chinese penal system also includes the much larger reform through labor (laogai) system, which includes labor prison factories, detention centers, and re-education camps; the Laogai Research Foundation estimated in June 2008 that there were approximately 1,422 of these facilities, though it cautioned that this number was likely an underestimate.
Public views of government
Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, international surveys show a high level of the Chinese public's satisfaction with their government. These views are generally attributed to the material comforts and security available to large segments of the Chinese populace as well as the government's attentiveness and responsiveness. According to the World Values Survey (2017–2020), 95% of Chinese respondents have significant confidence in their government. Confidence decreased to 91% in the survey's 2022 edition. A Harvard University survey published in July 2020 found that citizen satisfaction with the government had increased since 2003, also rating China's government as more effective and capable than ever before in the survey's history.
Military
The People's Liberation Army (PLA) is considered one of the world's most powerful militaries and has rapidly modernized in recent decades. It consists of the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF), the Rocket Force (PLARF) and the Strategic Support Force (PLASSF). Its nearly 2.2 million active-duty personnel constitute the largest armed force in the world. The PLA holds the world's third-largest stockpile of nuclear weapons and operates the world's second-largest navy by tonnage. China's official military budget for 2022 totalled US$230 billion (1.45 trillion yuan), the second-largest in the world, though SIPRI estimates that its real expenditure that year was US$292 billion. According to SIPRI, its military spending from 2012 to 2021 averaged US$215 billion per year, or 1.7 percent of GDP, behind only the United States at US$734 billion per year, or 3.6 percent of GDP. The PLA is commanded by the Central Military Commission (CMC) of the party and of the state; though officially two separate organizations, the two CMCs have identical membership except during leadership transition periods and effectively function as one organization. The chairman of the CMC is the commander-in-chief of the PLA, and the officeholder is also generally the CCP general secretary, making them the paramount leader of China.
Economy
China has the world's second-largest economy in terms of nominal GDP, and the world's largest in terms of purchasing power parity (PPP). China accounts for around 18% of the global economy by nominal GDP. China is one of the world's fastest-growing major economies, with its economic growth having been almost consistently above 6 percent since the introduction of economic reforms in 1978. According to the World Bank, China's GDP grew from $150 billion in 1978 to $17.96 trillion by 2022. Of the world's 500 largest companies, 142 are headquartered in China.
China was one of the world's foremost economic powers throughout the arc of East Asian and global history. The country had one of the largest economies in the world for most of the past two millennia, during which it has seen cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—which together have a market capitalization of over $15.9 trillion. China has four of the world's top ten most competitive financial centers (Shanghai, Hong Kong, Beijing, and Shenzhen), more than any other country in the 2020 Global Financial Centres Index.
Modern-day China is often described as an example of state capitalism or party-state capitalism. In 1992, Jiang Zemin termed the country a socialist market economy. Others have described it as a form of Marxism–Leninism adapted to co-exist with global capitalism. The state dominates in strategic "pillar" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. According to official statistics, privately owned companies constitute more than 60% of China's GDP.
China has been the world's largest manufacturing nation since 2010, after overtaking the US, which had been the largest for the previous hundred years. China has also been the second-largest high-tech manufacturing country since 2012, according to the US National Science Foundation. China is the second-largest retail market in the world, after the United States. China leads the world in e-commerce, accounting for over 37% of the global market share in 2021. China is the world's leader in electric vehicle consumption and production, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world. China is also the leading producer of batteries for electric vehicles as well as of several key raw materials for batteries. Although it has long relied heavily on non-renewable energy sources such as coal, China's adoption of renewable energy has increased significantly in recent years, with the share of renewables rising from 26.3 percent in 2016 to 31.9 percent in 2022.
Wealth
China accounted for 17.9% of the world's total wealth in 2021, the second-highest share in the world after the US. It ranks 64th in GDP (nominal) per capita, making it an upper-middle-income country. Though China used to make up much of the world's poor, it now makes up much of the world's middle class. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million people. China reduced the extreme poverty rate—defined by the international standard as an income of less than $1.90 per day—from 88% in 1981 to 1.85% by 2013. The portion of people in China living below the international poverty line of $1.90 per day (2011 PPP) fell to 0.3% in 2018 from 66.3% in 1990. Using the lower-middle-income poverty line of $3.20 per day, the portion fell to 2.9% in 2018 from 90.0% in 1990. Using the upper-middle-income poverty line of $5.50 per day, the portion fell to 17.0% from 98.3% in 1990.
From 1978 to 2018, the average standard of living multiplied by a factor of twenty-six. Wages in China have grown substantially over the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. Per capita incomes have risen significantly: when the PRC was founded in 1949, per capita income in China was one-fifth of the world average; per capita incomes now roughly equal the world average. China's development is highly uneven. Its major cities and coastal areas are far more prosperous than rural and interior regions. It has a high level of economic inequality, which increased quickly after the economic reforms but has decreased significantly in the 2010s. In 2019, China's Gini coefficient was 0.382, according to the World Bank.
China was second in the world, after the US, in total number of billionaires and total number of millionaires, with 495 Chinese billionaires and 6.2 million millionaires. In 2019, China overtook the US as the home of the highest number of people with a net personal wealth of at least $110,000, according to the global wealth report by Credit Suisse. According to the Hurun Global Rich List 2020, China is home to five of the world's top ten cities by number of billionaires (Beijing, Shanghai, Hong Kong, Shenzhen, and Guangzhou in the 1st, 3rd, 4th, 5th, and 10th spots, respectively), more than any other country. China had 85 female billionaires, two-thirds of the global total, and minted 24 new female billionaires in 2020. China has had the world's largest middle-class population since 2015, and the middle class grew to a size of 400 million by 2018.
China in the global economy
China is a member of the WTO and is the world's largest trading power, with a total international trade value of US$6.3 trillion in 2022. China is the world's largest exporter and second-largest importer of goods. Its foreign exchange reserves have reached US$3.128 trillion, by far the world's largest. In 2022, China was amongst the world's largest recipients of inward foreign direct investment (FDI), attracting $180 billion, though most of this was speculated to originate from Hong Kong. In 2021, China's foreign exchange remittances were US$53 billion, making it the second-largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $62.4 billion in 2012, and a number of major takeovers of foreign firms by Chinese companies. China is a major owner of US public debt, holding trillions of dollars worth of U.S. Treasury bonds. China's undervalued exchange rate has caused friction with other major economies, and it has also been widely criticized for manufacturing large quantities of counterfeit goods. In 2020, Harvard University's Economic Complexity Index ranked the complexity of China's exports 17th in the world, up from 24th in 2010.
Following the 2007–08 financial crisis, Chinese authorities sought to actively wean the country off its dependence on the U.S. dollar as a result of perceived weaknesses of the international monetary system. To achieve those ends, China took a series of actions to further the internationalization of the renminbi. In 2008, China established the dim sum bond market and expanded the Cross-Border Trade RMB Settlement Pilot Project, which helps establish pools of offshore RMB liquidity. This was followed by bilateral agreements to settle trades directly in renminbi with Russia, Japan, Australia, Singapore, the United Kingdom, and Canada. As a result of the rapid internationalization of the renminbi, it became the eighth-most-traded currency in the world by 2018, an emerging international reserve currency, and a component of the IMF's special drawing rights; however, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the euro, dollar and Japanese yen in international trade volumes. The yuan is now the world's fifth-most-traded currency.
Science and technology
Historical
China was a world leader in science and technology until the Ming dynasty. Ancient and medieval Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, the Western World surpassed China in scientific and technological advancement. The causes of this early modern Great Divergence continue to be debated by scholars.
After repeated military defeats by the European colonial powers and Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology were promoted as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed.
Modern era
Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the US in R&D spending. China officially spent around 2.4% of its GDP on R&D in 2020, totaling around $377.8 billion. According to the World Intellectual Property Indicators, China received more patent applications than the US in 2018 and 2019, and it ranked first globally in patents, utility models, trademarks, industrial designs, and creative goods exports in 2021. It was ranked 12th in the Global Innovation Index in 2023, a considerable improvement from its rank of 35th in 2013. Chinese supercomputers have been ranked the fastest in the world on a few occasions; however, these supercomputers rely on critical components—namely processors—that are designed abroad and imported into China. China has also struggled to develop several technologies domestically, such as the most advanced semiconductors and reliable jet engines.
China is developing its education system with an emphasis on science, technology, engineering, and mathematics (STEM). It became the world's largest publisher of scientific papers in 2016. Chinese-born academicians have won prestigious prizes in the sciences and in mathematics, although most of them had conducted their winning research in Western nations.
Space program
The Chinese space program started in 1958 with some technology transfers from the Soviet Union. However, it did not launch the nation's first satellite until 1970 with the Dong Fang Hong I, which made China the fifth country to do so independently.
In 2003, China became the third country in the world to independently send humans into space with Yang Liwei's spaceflight aboard Shenzhou 5. As of 2023, eighteen Chinese nationals have journeyed into space, including two women. In 2011, China launched its first space station testbed, Tiangong-1. In 2013, a Chinese robotic rover Yutu successfully touched down on the lunar surface as part of the Chang'e 3 mission.
In 2019, China became the first country to land a probe—Chang'e 4—on the far side of the Moon. In 2020, Chang'e 5 successfully returned Moon samples to the Earth, making China the third country to do so independently after the United States and the Soviet Union. In 2021, China became the third country to land a spacecraft on Mars and the second one to deploy a rover (Zhurong) on Mars, after the United States. China completed its own modular space station, the Tiangong, in low Earth orbit on 3 November 2022. On 29 November 2022, China performed its first in-orbit crew handover aboard the Tiangong.
In May 2023, China announced a plan to land humans on the Moon by 2030.
Infrastructure
After a decades-long infrastructural boom, China has produced numerous world-leading infrastructural projects: China has the world's largest high-speed rail network, the most supertall skyscrapers in the world, the world's largest power plant (the Three Gorges Dam), and a global satellite navigation system with the largest number of satellites in the world.
Telecommunications
China is the largest telecom market in the world and currently has the largest number of active cellphones of any country, with over 1.69 billion subscribers. It also has the world's largest number of internet and broadband users, with over 1.05 billion Internet users—equivalent to around 73.7% of its population—almost all of whom also access the internet via mobile devices. By 2018, China had more than 1 billion 4G users, accounting for 40% of the world's total. China is making rapid advances in 5G—by late 2018, China had started large-scale commercial 5G trials, and it has since reached over 500 million 5G users, with 1.45 million base stations installed.
China Mobile, China Unicom and China Telecom are the three large providers of mobile and internet services in China. China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the largest of them all, had 925 million users. Combined, the three operators had over 3.4 million 4G base stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military.
China has developed its own satellite navigation system, dubbed BeiDou, which began offering commercial navigation services across Asia in 2012 and global services by the end of 2018. Upon the completion of the 35th BeiDou satellite, which was launched into orbit on 23 June 2020, BeiDou followed GPS and GLONASS as the third completed global navigation satellite system in the world.
Transport
Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways, which by 2018 formed the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production. The country has also become a large exporter of automobiles, being the world's second-largest exporter of cars in 2022 after Japan; in early 2023, China overtook Japan to become the world's largest exporter of cars. A side effect of the rapid growth of China's road network has been a significant rise in traffic accidents, though the number of fatalities in traffic accidents fell by 20% from 2007 to 2017. In urban areas, bicycles remain a common mode of transport, despite the increasing prevalence of automobiles; there are approximately 470 million bicycles in China. China's railways, which are operated by the state-owned China State Railway Group Company, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. The country has the second-longest railway network in the world. The railways strain to meet enormous demand, particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place.
China's high-speed rail (HSR) system began construction in the early 2000s. By the end of 2022, high-speed rail in China formed the longest HSR network in the world, counting dedicated lines alone. Services on the Beijing–Shanghai, Beijing–Tianjin, and Chengdu–Chongqing lines are the fastest conventional high-speed railway services in the world. With an annual ridership of over 2.29 billion passengers in 2019, it is the world's busiest. The network includes the Beijing–Guangzhou high-speed railway, the single longest HSR line in the world, and the Beijing–Shanghai high-speed railway, which has three of the longest railroad bridges in the world. The Shanghai maglev train is the fastest commercial train service in the world.
Since 2000, the growth of rapid transit systems in Chinese cities has accelerated. Forty-four Chinese cities have urban mass transit systems in operation and 39 more have metro systems approved. China boasts the five longest metro systems in the world, with the networks in Shanghai, Beijing, Guangzhou, Chengdu and Shenzhen being the largest.
There were approximately 241 airports in 2021.
China has over 2,000 river and seaports, about 130 of which are open to foreign shipping. In 2021, the Ports of Shanghai, Ningbo-Zhoushan, Shenzhen, Guangzhou, Qingdao, Tianjin and Hong Kong ranked in the top 10 in the world in container traffic and cargo tonnage.
Water supply and sanitation
Water supply and sanitation infrastructure in China is facing challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to data presented by the Joint Monitoring Program for Water Supply and Sanitation of World Health Organization (WHO) and UNICEF in 2015, about 36% of the rural population in China still did not have access to improved sanitation. The ongoing South–North Water Transfer Project intends to abate water shortage in the north.
Demographics
The national census of 2020 recorded the population of the People's Republic of China as approximately 1,411,778,724. According to the 2020 census, about 17.95% of the population were 14 years old or younger, 63.35% were between 15 and 59 years old, and 18.7% were over 60 years old. Between 2010 and 2020, the average population growth rate was 0.53%.
Given concerns about population growth, China implemented a two-child limit during the 1970s, and, in 1979, began to advocate for an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was effectively a "1.5"-child policy from the mid-1980s to 2015 (ethnic minorities were also exempt from one-child limits). The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent is an only child. In 2016, the one-child policy was replaced by a two-child policy. A three-child policy was announced on 31 May 2021, due to population aging, and in July 2021, all family size limits as well as penalties for exceeding them were removed. According to data from the 2020 census, China's total fertility rate is 1.3, but some experts believe that after adjusting for the transient effects of the relaxation of restrictions, the country's actual total fertility rate is as low as 1.1. In 2023, the National Bureau of Statistics estimated that the population had fallen by 850,000 from 2021 to 2022, the first decline since 1961.
According to one group of scholars, one-child limits had little effect on population growth or the size of the total population. However, these scholars have been challenged. Their own counterfactual model of fertility decline without such restrictions implies that China averted more than 500 million births between 1970 and 2015, a number which may reach one billion by 2060 given all the lost descendants of births averted during the era of fertility restrictions, with one-child restrictions accounting for the great bulk of that reduction. The policy, along with traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. According to the 2020 census, the sex ratio at birth was 105.07 boys for every 100 girls, which is beyond the normal range of around 105 boys for every 100 girls. The 2020 census found that males accounted for 51.24 percent of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.82 percent of the total population.
Ethnic groups
China legally recognizes 56 distinct ethnic groups, who altogether comprise the Zhonghua minzu. The largest of these nationalities are the Han Chinese, who constitute more than 91% of the total population. The Han Chinese – the world's largest single ethnic group – outnumber other ethnic groups in every provincial-level division except Tibet and Xinjiang. Ethnic minorities account for less than 10% of the population of China, according to the 2020 census. Compared with the 2010 population census, the Han population increased by 60,378,693 persons, or 4.93%, while the population of the 55 national minorities combined increased by 11,675,179 persons, or 10.26%. The 2020 census recorded a total of 845,697 foreign nationals living in mainland China.
Languages
There are as many as 292 living languages in China. The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 80% of the population), and other varieties of Chinese language: Yue (including Cantonese and Taishanese), Wu (including Shanghainese and Suzhounese), Min (including Fuzhounese, Hokkien and Teochew), Xiang, Gan and Hakka. Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. Other ethnic minority languages in southwestern China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese indigenous peoples, including a small population on the mainland, speak Austronesian languages.
Standard Mandarin, a variety of Mandarin based on the Beijing dialect, is the official national language of China and is used as a lingua franca in the country between people of different linguistic backgrounds. Mongolian, Uyghur, Tibetan, Zhuang and various other languages are also regionally recognized throughout the country.
Urbanization
China has urbanized significantly in recent decades. The percent of the country's population living in urban areas increased from 20% in 1980 to over 64% in 2021. It is estimated that China's urban population will reach one billion by 2030, potentially equivalent to one-eighth of the world population.
China has over 160 cities with a population of over one million, including the 17 megacities (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Chengdu, Guangzhou, Shenzhen, Tianjin, Xi'an, Suzhou, Zhengzhou, Wuhan, Hangzhou, Linyi, Shijiazhuang, Dongguan, Qingdao and Changsha. Among them, Chongqing, Shanghai, Beijing and Chengdu each have a total permanent population above 20 million. Shanghai is China's most populous urban area, while Chongqing is its largest city proper, the only city in China with a permanent population of over 30 million. By 2025, it is estimated that the country will be home to 221 cities with over a million inhabitants. Official city population figures are only estimates of the urban populations within administrative city limits; a different ranking exists when considering total municipal populations (which include suburban and rural populations). The large "floating populations" of migrant workers make conducting censuses in urban areas difficult; official figures generally include only long-term residents.
Education
Since 1986, compulsory education in China has comprised primary and junior secondary school, which together last for nine years. In 2021, about 91.4 percent of students continued their education at a three-year senior secondary school. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions; 58.42 percent of secondary school graduates were enrolled in higher education. Vocational education is available to students at the secondary and tertiary level, and more than 10 million Chinese students graduate from vocational colleges nationwide every year.
China has the largest education system in the world, with about 282 million students and 17.32 million full-time teachers in over 530,000 schools. Annual education investment went from less than US$50 billion in 2003 to more than US$817 billion in 2020. However, there remains inequality in education spending: in 2010, annual education expenditure per secondary school student totalled ¥20,023 in Beijing but only ¥3,204 in Guizhou, one of the poorest provinces in China. Free compulsory education in China consists of primary school and junior secondary school between the ages of 6 and 15. In 2021, the graduation enrollment ratio at the compulsory education level reached 95.4 percent, and around 91.4% of Chinese have received secondary education.
China's literacy rate has grown dramatically, from only 20% in 1949 and 65.5% in 1979 to 97% of the population over age 15 in 2020. In the same year, Beijing, Shanghai, Jiangsu, and Zhejiang, amongst the most affluent regions in China, were ranked the highest in the world in the Programme for International Student Assessment for all three categories of mathematics, science and reading.
China has over 3,000 universities, with over 44.3 million students enrolled in mainland China; 240 million Chinese citizens have received higher education, making China's higher education system the largest in the world. China has the world's second-highest number of top universities (the highest in the Asia and Oceania region). Currently, China trails only the United States in terms of representation on lists of the top 200 universities according to the Academic Ranking of World Universities (ARWU). China is home to two of the highest-ranking universities in Asia and among emerging economies (Tsinghua University and Peking University), according to the Times Higher Education World University Rankings. Two universities in mainland China rank in the world's top 15, with Peking University (12th) and Tsinghua University (14th), and three other universities rank in the world's top 50, namely Fudan, Zhejiang, and Shanghai Jiao Tong, according to the QS World University Rankings. These universities are members of the C9 League, an alliance of elite Chinese universities offering comprehensive and leading education.
Health
The National Health and Family Planning Commission, together with its counterparts in the local commissions, oversees the health needs of the Chinese population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. At that time, the Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign.
After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared along with the People's Communes. Healthcare in China became mostly privatized, and experienced a significant rise in quality. In 2009, the government began a 3-year large-scale healthcare provision initiative worth US$124 billion. By 2011, the campaign resulted in 95% of China's population having basic health insurance coverage. By 2022, China had established itself as a key producer and exporter of pharmaceuticals, with the country alone producing around 40 percent of active pharmaceutical ingredients in 2017.
Life expectancy at birth in China is 78 years, and the infant mortality rate is 5 per thousand (in 2021). Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. China's large population and densely populated cities have led to serious disease outbreaks in recent years, such as the 2003 outbreak of SARS, although this has since been largely contained. In 2010, air pollution caused 1.2 million premature deaths in China.
The COVID-19 pandemic was first identified in Wuhan in December 2019. Further studies are being carried out around the world on a possible origin for the virus. Beijing says it has been sharing Covid data in "a timely, open and transparent manner in accordance with the law". According to U.S. officials, the Chinese government has been concealing the extent of the outbreak before it became an international pandemic.
Religion
The government of the People's Republic of China and the Chinese Communist Party both officially espouse state atheism, and have conducted antireligious campaigns to this end. Religious affairs and issues in the country are overseen by the CCP's United Front Work Department. Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution.
Over the millennia, Chinese civilization has been influenced by various religious movements. The "three teachings", including Confucianism, Taoism, and Buddhism (Chinese Buddhism), have historically played a significant role in shaping Chinese culture, enriching a theological and spiritual framework which harks back to the early Shang and Zhou dynasties. Chinese popular or folk religion, which is framed by the three teachings and other traditions, consists of allegiance to the shen, a character that signifies the "energies of generation"; these may be deities of the environment, ancestral principles of human groups, concepts of civility, or culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults are those of Mazu (goddess of the seas), Huangdi (one of the two divine patriarchs of the Chinese race), Guandi (god of war and business), Caishen (god of prosperity and richness), Pangu and many others. China is home to many of the world's tallest religious statues, including the tallest of all, the Spring Temple Buddha in Henan.
Clear data on religious affiliation in China is difficult to gather due to varying definitions of "religion" and the unorganized, diffuse nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between the three teachings and local folk religious practice. A 2015 poll conducted by Gallup International found that 61% of Chinese people self-identified as "convinced atheist", though Chinese religions or some of their strands are definable as non-theistic and humanistic, since they do not hold that divine creativity is completely transcendent but rather that it is inherent in the world and in particular in the human being. According to a 2014 study, approximately 74% are either non-religious or practice Chinese folk belief, 16% are Buddhists, 2% are Christians, 1% are Muslims, and 8% adhere to other religions, including Taoism and folk salvationism. In addition to Han people's local religious practices, there are also various ethnic minority groups in China who maintain their traditional autochthonous religions. The various folk religions today comprise 2–3% of the population, while Confucianism as a religious self-identification is common within the intellectual class. Significant faiths specifically connected to certain ethnic groups include Tibetan Buddhism and the Islamic religion of the Hui, Uyghur, Kazakh, Kyrgyz and other peoples in Northwest China. The 2010 population census reported the total number of Muslims in the country as 23.14 million.
A 2021 poll from Ipsos and the Policy Institute at King's College London found that 35% of Chinese people said there was tension between different religious groups, which was the second lowest percentage of the 28 countries surveyed.
Culture and society
Since ancient times, Chinese culture has been heavily influenced by Confucianism. Chinese culture, in turn, has heavily influenced East Asia and Southeast Asia. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today.
The first leaders of the People's Republic of China were born into the traditional imperial order but were influenced by the May Fourth Movement and reformist ideals. They sought to change some traditional aspects of Chinese culture, such as rural land tenure, sexism, and the Confucian system of education, while preserving others, such as the family structure and culture of obedience to the state. Some observers see the period following the establishment of the PRC in 1949 as a continuation of traditional Chinese dynastic history, while others claim that the CCP's rule under Mao Zedong damaged the foundations of Chinese culture, especially through political movements such as the Cultural Revolution of the 1960s, where many aspects of traditional culture were destroyed, having been denounced as "regressive and harmful" or "vestiges of feudalism". Many important aspects of traditional Chinese morals and culture, such as Confucianism, art, literature, and performing arts like Peking opera, were altered to conform to government policies and propaganda at the time. Access to foreign media remains heavily restricted.
Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide.
Tourism
China received 65.7 million inbound international visitors in 2019, and in 2018 was the fourth-most-visited country in the world. It also experiences an enormous volume of domestic tourism; Chinese tourists made an estimated 6 billion trips within the country in 2019. China hosts the world's second-largest number of World Heritage Sites (56) after Italy, and is one of the most popular tourist destinations in the world (first in the Asia-Pacific).
Literature
Chinese literature is based on the literature of the Zhou dynasty. Concepts covered within the Chinese classic texts present a wide range of thoughts and subjects, including calendar, military, astrology, herbology, geography and many others. Some of the most important early texts include the I Ching and the Shujing within the Four Books and Five Classics, which served as the authoritative Confucian books for the state-sponsored curriculum in the dynastic era. Inheriting from the Classic of Poetry, classical Chinese poetry reached its height during the Tang dynasty, when Li Bai and Du Fu opened new paths for poetry through romanticism and realism respectively. Chinese historiography began with the Shiji; the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which, along with Chinese mythology and folklore, set a vast stage for Chinese fiction. Pushed by a burgeoning urban class in the Ming dynasty, Chinese classical fiction flourished, producing historical, urban, and gods-and-demons fiction, as represented by the Four Great Classical Novels: Water Margin, Romance of the Three Kingdoms, Journey to the West and Dream of the Red Chamber. Along with the wuxia fiction of Jin Yong and Liang Yusheng, it remains an enduring source of popular culture in the Chinese sphere of influence.
In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and the xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012.
Cuisine
Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety; the most influential traditions are known as the "Eight Major Cuisines": Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang. Chinese cuisine is also known for its breadth of cooking methods and ingredients, as well as the food therapy emphasized by traditional Chinese medicine. Generally, China's staple food is rice in the south and wheat-based breads and noodles in the north. The diet of the common people in pre-modern times was largely grain and simple vegetables, with meat reserved for special occasions. Bean products, such as tofu and soy milk, remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. While pork dominates the meat market, there is also vegetarian Buddhist cuisine and pork-free Chinese Islamic cuisine. Southern cuisine, due to the area's proximity to the ocean and milder climate, has a wide variety of seafood and vegetables; it differs in many respects from the wheat-based diets across dry northern China. Numerous offshoots of Chinese food, such as Hong Kong cuisine and American Chinese food, have emerged in the nations that play host to the Chinese diaspora.
Architecture
Chinese architecture is as old as Chinese civilization, and ancient China produced many architectural masters and masterpieces, including palaces, tombs, temples, gardens and houses. The first communities that can be identified culturally as Chinese were settled chiefly in the basin of the Yellow River. Chinese architecture embodies a style that has developed over millennia in China and has remained a perennial influence on the development of East Asian architecture. Since its emergence during the early ancient era, the structural principles of the architecture have remained largely unchanged; the main changes involved diverse decorative details. Starting with the Tang dynasty, Chinese architecture has had a major influence on the architectural styles of neighboring East Asian countries such as Japan, Korea, and Mongolia, and a minor influence on the architecture of Southeast and South Asia, including Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, Vietnam and the Philippines.
Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces.
Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages, such as the stilt houses in the south, the Yaodong buildings in the northwest, the yurt buildings of nomadic people, and the Siheyuan buildings in the north.
Music
Chinese music covers a highly diverse range, from traditional music to modern music, and dates back to pre-imperial times. Traditional Chinese musical instruments were traditionally grouped into eight categories known as bayin (八音). Traditional Chinese opera is a form of musical theatre in China that originated thousands of years ago and has regional styles such as Beijing opera and Cantonese opera. Chinese pop (C-Pop) includes mandopop and cantopop. Chinese rap, Chinese hip hop and Hong Kong hip hop have become popular in contemporary times.
Cinema
Cinema was first introduced to China in 1896, and the first Chinese film, Dingjun Mountain, was released in 1905. China has had the largest number of movie screens in the world since 2016 and became the largest cinema market in the world in 2020. The top three highest-grossing films in China were The Battle at Lake Changjin (2021), Wolf Warrior 2 (2017), and Hi, Mom (2021).
Fashion
Hanfu is the historical clothing of the Han people in China. The qipao or cheongsam is a popular Chinese female dress. The hanfu movement has been popular in contemporary times and seeks to revitalize Hanfu clothing.
Sports
China has one of the oldest sporting cultures in the world. There is evidence that archery (shèjiàn) was practiced during the Western Zhou dynasty. Swordplay (jiànshù) and cuju, a sport loosely related to association football, date back to China's early dynasties as well.
Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and tai chi widely practiced, and commercial gyms and private fitness clubs gaining popularity across the country. Basketball is currently the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association have a huge following among the Chinese populace, with native-born, NBA-bound players such as Yao Ming and Yi Jianlian held in high esteem as national household names. China's professional football league, now known as the Chinese Super League, was established in 1994; it is the largest football market in East Asia. Other popular sports in the country include martial arts, table tennis, badminton, swimming and snooker. Board games such as go (known as wéiqí in Chinese), xiangqi, mahjong, and more recently chess, are also played at a professional level. In addition, China is home to a huge number of cyclists, with an estimated 470 million bicycles. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing, are also popular.
China has participated in the Olympic Games since 1932, although it has only participated as the PRC since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 48 gold medals – the highest number of gold medals of any participating nation that year. China also won the most medals of any nation at the 2012 Summer Paralympics, with 231 overall, including 95 gold medals. In 2011, Shenzhen in Guangdong hosted the Summer Universiade. China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, becoming the first country to host both the regular and the Youth Olympics. Beijing and its nearby city Zhangjiakou of Hebei province collaboratively hosted the 2022 Winter Olympics, making Beijing the first city in the world to host both the Summer and the Winter Olympics.
See also
Outline of China
Notes
References
Further reading
Farah, Paolo (2006). "Five Years of China's WTO Membership: EU and US Perspectives on China's Compliance with Transparency Commitments and the Transitional Review Mechanism". Legal Issues of Economic Integration. Kluwer Law International. Volume 33, Number 3. pp. 263–304. Abstract.
Heilig, Gerhard K. (2006/2007). China Bibliography – Online. China-Profile.com.
Jacques, Martin (2009). When China Rules the World: The End of the Western World and the Birth of a New Global Order. Penguin Books. Rev. ed. (28 August 2012).
Jaffe, Amy Myers, "Green Giant: Renewable Energy and Chinese Power", Foreign Affairs, vol. 97, no. 2 (March / April 2018), pp. 83–93.
Johnson, Ian, "What Holds China Together?", The New York Review of Books, vol. LXVI, no. 14 (26 September 2019), pp. 14, 16, 18. "The Manchus... had [in 1644] conquered the last ethnic Chinese empire, the Ming [and established Imperial China's last dynasty, the Qing]... The Manchus expanded the empire's borders northward to include all of Mongolia, and westward to Tibet and Xinjiang." [p. 16.] "China's rulers have no faith that anything but force can keep this sprawling country intact." [p. 18.]
External links
Government
The Central People's Government of People's Republic of China
General information
China at a Glance from People's Daily
Country profile – China at BBC News
China. The World Factbook. Central Intelligence Agency.
China, People's Republic of from UCB Libraries GovPubs
Maps
Google Maps—China
California

California is a state in the Western United States. With over 38.9 million residents across a total area of approximately , it is the most populous U.S. state, the third-largest U.S. state by area, and the most populated subnational entity in North America. California borders Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south; it has a coastline along the Pacific Ocean to the west.
The Greater Los Angeles and San Francisco Bay areas in California are the nation's second and fifth-most populous urban regions, respectively. Greater Los Angeles has over 18.7 million residents and the San Francisco Bay Area has over 9.6 million residents. Los Angeles is the state's most populous city and the nation's second-most populous city. San Francisco is the second-most densely populated major city in the country. Los Angeles County is the country's most populous county, and San Bernardino County is the nation's largest county by area. Sacramento is the state's capital.
California's economy is the largest of any state within the United States, with a $3.6 trillion gross state product (GSP). It is the largest sub-national economy in the world. If California were a sovereign nation, it would rank as the world's fifth-largest economy, behind India and ahead of the United Kingdom, as well as the 37th most populous. The Greater Los Angeles area and the San Francisco area are the nation's second- and fourth-largest urban economies ($1.0 trillion and $0.6 trillion respectively). The San Francisco Bay Area Combined Statistical Area had the nation's highest gross domestic product per capita ($106,757) among large primary statistical areas in 2018, and is home to five of the world's ten largest companies by market capitalization and four of the world's ten richest people. Slightly over 84 percent of the state's residents 25 or older hold a high school degree, the lowest high school education rate of all 50 states.
Prior to European colonization, California was one of the most culturally and linguistically diverse areas in pre-Columbian North America, and the indigenous peoples of California constituted the highest Native American population density north of what is now Mexico. European exploration in the 16th and 17th centuries led to the colonization of California by the Spanish Empire. In 1804, it was included in Alta California province within the Viceroyalty of New Spain. The area became a part of Mexico in 1821, following its successful war for independence, but was ceded to the United States in 1848 after the Mexican–American War. The California Gold Rush started in 1848 and led to dramatic social and demographic changes, including the depopulation of indigenous peoples in the California genocide. The western portion of Alta California was then organized and admitted as the 31st state on September 9, 1850, as a free state, following the Compromise of 1850.
Notable contributions to popular culture, ranging from entertainment, sports, music, and fashion, have their origins in California. The state also has made substantial contributions in the fields of communication, information, innovation, education, environmentalism, entertainment, economics, politics, technology, and religion. California is the home of Hollywood, the oldest and one of the largest film industries in the world, profoundly influencing global entertainment. It is considered the origin of the American film industry, hippie counterculture, beach and car culture, the personal computer, the internet, fast food, diners, burger joints, skateboarding, and the fortune cookie, among other inventions. The San Francisco Bay Area and the Greater Los Angeles Area are widely seen as the centers of the global technology and U.S. film industries, respectively. California's economy is very diverse. California's agricultural industry has the highest output of any U.S. state, and is led by its dairy, almonds, and grapes. With the busiest ports in the country (Los Angeles and Long Beach), California plays a pivotal role in the global supply chain, hauling in about 40% of all goods imported to the United States.
The state's extremely diverse geography ranges from the Pacific Coast and metropolitan areas in the west to the Sierra Nevada mountains in the east, and from the redwood and Douglas fir forests in the northwest to the Mojave Desert in the southeast. Two-thirds of the nation's earthquake risk lies in California. The Central Valley, a fertile agricultural area, dominates the state's center. California is well known for its warm Mediterranean climate along the coast and monsoon seasonal weather inland. The large size of the state results in climates that vary from moist temperate rainforest in the north to arid desert in the interior, as well as snowy alpine in the mountains. Droughts and wildfires are an ongoing issue for the state.
Etymology
The Spaniards gave the name to the peninsula of Baja California and to Alta California, the latter region becoming the present-day state of California.
The name derived from the mythical island of California in the fictional story of Queen Calafia, as recorded in a 1510 work The Adventures of Esplandián by Castilian author Garci Rodríguez de Montalvo. This work was the fifth in a popular Spanish chivalric romance series that began with Amadís de Gaula. Queen Calafia's kingdom was said to be a remote land rich in gold and pearls, inhabited by beautiful Black women who wore gold armor and lived like Amazons, as well as griffins and other strange beasts. In the fictional paradise, the ruler Queen Calafia fought alongside Muslims and her name may have been chosen to echo the Muslim title caliph, used for Muslim leaders.
Official abbreviations of the state's name include CA, Cal., Calif., and US-CA.
History
Indigenous
California was one of the most culturally and linguistically diverse areas in pre-Columbian North America. Historians generally agree that there were at least 300,000 people living in California prior to European colonization. The indigenous peoples of California included more than 70 distinct ethnic groups, inhabiting environments ranging from mountains and deserts to islands and redwood forests.
Living in these diverse geographic areas, the indigenous peoples developed complex forms of ecosystem management, including forest gardening to ensure the regular availability of food and medicinal plants. This was a form of sustainable agriculture. To mitigate destructive large wildfires from ravaging the natural environment, indigenous peoples developed a practice of controlled burning. This practice was recognized for its benefits by the California government in 2022.
These groups were also diverse in their political organization, with bands, tribes, villages, and, on the resource-rich coasts, large chiefdoms, such as the Chumash, Pomo and Salinan. Trade, intermarriage, craft specialists, and military alliances fostered social and economic relationships between many groups. Although nations would sometimes war, most armed conflicts were between groups of men for vengeance. Acquiring territory was not usually the purpose of these small-scale battles.
Men and women generally had different roles in society. Women were often responsible for weaving, harvesting, processing, and preparing food, while men were responsible for hunting and other forms of physical labor. Most societies also had roles for people whom the Spanish referred to as joyas, whom they saw as "men who dressed as women". Joyas were responsible for death, burial, and mourning rituals, and they performed women's social roles. Indigenous societies had terms such as two-spirit to refer to them. The Chumash referred to them as 'aqi. The early Spanish settlers detested and sought to eliminate them.
Spanish period
The first Europeans to explore the coast of California were the members of a Spanish maritime expedition led by Portuguese captain Juan Rodríguez Cabrillo in 1542. Cabrillo was commissioned by Antonio de Mendoza, the Viceroy of New Spain, to lead an expedition up the Pacific coast in search of trade opportunities; they entered San Diego Bay on September 28, 1542, and reached at least as far north as San Miguel Island. Privateer and explorer Francis Drake explored and claimed an undefined portion of the California coast in 1579, landing north of the future city of San Francisco. Sebastián Vizcaíno explored and mapped the coast of California in 1602 for New Spain, putting ashore in Monterey. Despite these on-the-ground explorations of California in the 16th century, the idea of California as an island persisted. Such depictions appeared on many European maps well into the 18th century.
The Portolá expedition of 1769–70 was a pivotal event in the Spanish colonization of California, resulting in the establishment of numerous missions, presidios, and pueblos. The military and civil contingent of the expedition was led by Gaspar de Portolá, who traveled over land from Sonora into California, while the religious component was headed by Junípero Serra, who came by sea from Baja California. In 1769, Portolá and Serra established Mission San Diego de Alcalá and the Presidio of San Diego, the first religious and military settlements founded by the Spanish in California. By the end of the expedition in 1770, they would establish the Presidio of Monterey and Mission San Carlos Borromeo de Carmelo on Monterey Bay.
After the Portolà expedition, Spanish missionaries led by Father-President Serra set out to establish 21 Spanish missions of California along El Camino Real ("The Royal Road") and along the California coast, 16 sites of which having been chosen during the Portolá expedition. Numerous major cities in California grew out of missions, including San Francisco (Mission San Francisco de Asís), San Diego (Mission San Diego de Alcalá), Ventura (Mission San Buenaventura), or Santa Barbara (Mission Santa Barbara), among others.
Juan Bautista de Anza led a similarly important expedition throughout California in 1775–76, which would extend deeper into the interior and north of California. The Anza expedition selected numerous sites for missions, presidios, and pueblos, which subsequently would be established by settlers. Gabriel Moraga, a member of the expedition, would also christen many of California's prominent rivers with their names in 1775–1776, such as the Sacramento River and the San Joaquin River. After the expedition, Gabriel's son, José Joaquín Moraga, would found the pueblo of San Jose in 1777, making it the first civilian-established city in California.
During this same period, sailors from the Russian Empire explored along the northern coast of California. In 1812, the Russian-American Company established a trading post and small fortification at Fort Ross on the North Coast. Fort Ross was primarily used to supply Russia's Alaskan colonies with food supplies. The settlement did not meet much success, failing to attract settlers or establish long term trade viability, and was abandoned by 1841.
During the War of Mexican Independence, Alta California was largely unaffected and uninvolved in the revolution, though many Californios supported independence from Spain, which many believed had neglected California and limited its development. Spain's trade monopoly on California had limited local trade prospects. Following Mexican independence, California ports were freely able to trade with foreign merchants. Governor Pablo Vicente de Solá presided over the transition from Spanish colonial rule to independent Mexican rule.
Mexican period
In 1821, the Mexican War of Independence gave the Mexican Empire (which included California) independence from Spain. For the next 25 years, Alta California remained a remote, sparsely populated, northwestern administrative district of the newly independent country of Mexico, which shortly after independence became a republic.
The missions, which controlled most of the best land in the state, were secularized by 1834 and became the property of the Mexican government. The governor granted many square leagues of land to others with political influence. These huge ranchos or cattle ranches emerged as the dominant institutions of Mexican California. The ranchos developed under ownership by Californios (Hispanics native of California) who traded cowhides and tallow with Boston merchants. Beef did not become a commodity until the 1849 California Gold Rush.
From the 1820s, trappers and settlers from the United States and Canada began to arrive in Northern California. These new arrivals used the Siskiyou Trail, California Trail, Oregon Trail and Old Spanish Trail to cross the rugged mountains and harsh deserts in and surrounding California.
The early government of the newly independent Mexico was highly unstable, and in a reflection of this, from 1831 onwards, California also experienced a series of armed disputes, both internal and with the central Mexican government. During this tumultuous political period Juan Bautista Alvarado was able to secure the governorship during 1836–1842. The military action which first brought Alvarado to power had momentarily declared California to be an independent state, and had been aided by Anglo-American residents of California, including Isaac Graham. In 1840, one hundred of those residents who did not have passports were arrested, leading to the Graham Affair, which was resolved in part with the intercession of Royal Navy officials.
One of the largest ranchers in California was John Marsh. After failing to obtain justice against squatters on his land from the Mexican courts, he determined that California should become part of the United States. Marsh conducted a letter-writing campaign espousing the California climate, the soil, and other reasons to settle there, as well as the best route to follow, which became known as "Marsh's route". His letters were read, reread, passed around, and printed in newspapers throughout the country, and started the first wagon trains rolling to California. He invited immigrants to stay on his ranch until they could get settled, and assisted in their obtaining passports.
After ushering in the period of organized emigration to California, Marsh became involved in a military battle between the much-hated Mexican general, Manuel Micheltorena and the California governor he had replaced, Juan Bautista Alvarado. The armies of each met at the Battle of Providencia near Los Angeles. Marsh had been forced against his will to join Micheltorena's army. Ignoring his superiors, during the battle, he signaled the other side for a parley. There were many settlers from the United States fighting on both sides. He convinced each side that they had no reason to be fighting each other. As a result of Marsh's actions, they abandoned the fight, Micheltorena was defeated, and California-born Pio Pico was returned to the governorship. This paved the way to California's ultimate acquisition by the United States.
U.S. Conquest and the California Republic
In 1846, a group of American settlers in and around Sonoma rebelled against Mexican rule during the Bear Flag Revolt. Afterward, rebels raised the Bear Flag (featuring a bear, a star, a red stripe and the words "California Republic") at Sonoma. The Republic's only president was William B. Ide, who played a pivotal role during the Bear Flag Revolt. This revolt by American settlers served as a prelude to the later American military invasion of California and was closely coordinated with nearby American military commanders.
The California Republic was short-lived; the same year marked the outbreak of the Mexican–American War (1846–1848).
Commodore John D. Sloat of the United States Navy sailed into Monterey Bay in 1846 and began the U.S. military invasion of California, with Northern California capitulating in less than a month to the United States forces. In Southern California, Californios continued to resist American forces. Notable military engagements of the conquest include the Battle of San Pasqual and the Battle of Dominguez Rancho in Southern California, as well as the Battle of Olómpali and the Battle of Santa Clara in Northern California. After a series of defensive battles in the south, the Treaty of Cahuenga was signed by the Californios on January 13, 1847, securing a ceasefire and establishing de facto American control in California.
Early American period
Following the Treaty of Guadalupe Hidalgo (February 2, 1848) that ended the war, the westernmost portion of the annexed Mexican territory of Alta California soon became the American state of California, and the remainder of the old territory was then subdivided into the new American Territories of Arizona, Nevada, Colorado and Utah. The even more lightly populated and arid lower region of old Baja California remained as a part of Mexico. In 1846, the total settler population of the western part of the old Alta California had been estimated to be no more than 8,000, plus about 100,000 Native Americans, down from about 300,000 before Hispanic settlement in 1769.
In 1848, only one week before the official American annexation of the area, gold was discovered in California, an event which was to forever alter both the state's demographics and its finances. Soon afterward, a massive influx of immigration into the area resulted, as prospectors and miners arrived by the thousands. The population burgeoned with United States citizens, Europeans, Middle Easterners, Chinese and other immigrants during the great California Gold Rush. By the time of California's application for statehood in 1850, the settler population of California had multiplied to 100,000. By 1854, more than 300,000 settlers had come. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000.
The seat of government for California under Spanish and later Mexican rule had been located in Monterey from 1777 until 1845. Pio Pico, the last Mexican governor of Alta California, had briefly moved the capital to Los Angeles in 1845. The United States consulate had also been located in Monterey, under consul Thomas O. Larkin.
In 1849, a state Constitutional Convention was first held in Monterey. Among the first tasks of the convention was a decision on a location for the new state capital. The first full legislative sessions were held in San Jose (1850–1851). Subsequent locations included Vallejo (1852–1853), and nearby Benicia (1853–1854); these locations eventually proved to be inadequate as well. The capital has been located in Sacramento since 1854 with only a short break in 1862 when legislative sessions were held in San Francisco due to flooding in Sacramento.
Once the state's Constitutional Convention had finalized its state constitution, it applied to the U.S. Congress for admission to statehood. On September 9, 1850, as part of the Compromise of 1850, California became a free state, and September 9 became a state holiday.
During the American Civil War (1861–1865), California sent gold shipments eastward to Washington in support of the Union. However, due to the existence of a large contingent of pro-South sympathizers within the state, the state was not able to muster any full military regiments to send eastwards to officially serve in the Union war effort. Still, several smaller military units within the Union army, such as the "California 100 Company", were unofficially associated with the state of California due to a majority of their members being from California.
At the time of California's admission into the Union, travel between California and the rest of the continental United States had been a time-consuming and dangerous feat. Nineteen years later, and seven years after it was greenlighted by President Lincoln, the first transcontinental railroad was completed in 1869. California was then reachable from the eastern States in a week's time.
Much of the state was extremely well suited to fruit cultivation and agriculture in general. Vast expanses of wheat, other cereal crops, vegetable crops, cotton, and nut and fruit trees were grown (including oranges in Southern California), and the foundation was laid for the state's prodigious agricultural production in the Central Valley and elsewhere.
In the nineteenth century, a large number of migrants from China traveled to the state as part of the Gold Rush or to seek work. Even though the Chinese proved indispensable in building the transcontinental railroad from California to Utah, perceived job competition with the Chinese led to anti-Chinese riots in the state, and eventually the US ended migration from China partially as a response to pressure from California with the 1882 Chinese Exclusion Act.
California Genocide
Under earlier Spanish and Mexican rule, California's original native population had precipitously declined, above all, from Eurasian diseases to which the indigenous people of California had not yet developed a natural immunity. Under its new American administration, California's first governor Peter Hardeman Burnett instituted policies that have been described as a state-sanctioned policy of elimination toward California's indigenous people. Burnett announced in 1851 in his Second Annual Message to the Legislature: "That a war of extermination will continue to be waged between the races until the Indian race becomes extinct must be expected. While we cannot anticipate the result with but painful regret, the inevitable destiny of the race is beyond the power and wisdom of man to avert."
As in other American states, indigenous peoples were forcibly removed from their lands by American settlers, like miners, ranchers, and farmers. Although California had entered the American union as a free state, the "loitering or orphaned Indians," were de facto enslaved by their new Anglo-American masters under the 1850 Act for the Government and Protection of Indians. One of these de facto slave auctions was approved by the Los Angeles City Council and occurred for nearly twenty years. There were many massacres in which hundreds of indigenous people were killed by settlers for their land.
Between 1850 and 1860, the California state government paid around $1.5 million (some $250,000 of which was reimbursed by the federal government) to hire militias with the stated purpose of protecting settlers; however, these militias perpetrated numerous massacres of indigenous people. Indigenous people were also forcibly moved to reservations and rancherias, which were often small and isolated and without enough natural resources or funding from the government to adequately sustain the populations living on them. As a result, settler colonialism was a calamity for indigenous people. Several scholars and Native American activists, including Benjamin Madley and Ed Castillo, have described the actions of the California government as a genocide, as has the 40th governor of California, Gavin Newsom. Benjamin Madley estimates that from 1846 to 1873, between 9,492 and 16,092 indigenous people were killed, including between 1,680 and 3,741 killed by the U.S. Army.
1900–present
In the twentieth century, thousands of Japanese people migrated to the US and California specifically to attempt to purchase and own land in the state. However, the state in 1913 passed the Alien Land Act, excluding Asian immigrants from owning land. During World War II, Japanese Americans in California were interned in concentration camps such as at Tule Lake and Manzanar. In 2020, California officially apologized for this internment.
Migration to California accelerated during the early 20th century with the completion of major transcontinental highways like the Lincoln Highway and Route 66. In the period from 1900 to 1965, the population grew from fewer than one million to the greatest in the Union. In 1940, the Census Bureau reported California's population as 6.0% Hispanic, 2.4% Asian, and 89.5% non-Hispanic white.
To meet the population's needs, major engineering feats like the California and Los Angeles Aqueducts; the Oroville and Shasta Dams; and the Bay and Golden Gate Bridges were built across the state. The state government also adopted the California Master Plan for Higher Education in 1960 to develop a highly efficient system of public education.
Meanwhile, attracted to the mild Mediterranean climate, cheap land, and the state's wide variety of geography, filmmakers established the studio system in Hollywood in the 1920s. California manufactured 8.7 percent of total United States military armaments produced during World War II, ranking third (behind New York and Michigan) among the 48 states. California, however, easily ranked first in production of military ships during the war (transport and cargo merchant ships such as Liberty ships and Victory ships, as well as warships) at drydock facilities in San Diego, Los Angeles, and the San Francisco Bay Area. After World War II, California's economy greatly expanded due to strong aerospace and defense industries, whose size decreased following the end of the Cold War. Stanford University and its Dean of Engineering Frederick Terman began encouraging faculty and graduates to stay in California instead of leaving the state, and to develop a high-tech region in the area now known as Silicon Valley. As a result of these efforts, California is regarded as a world center of the entertainment and music industries, of technology, engineering, and the aerospace industry, and as the United States center of agricultural production. Just before the dot-com bust, California had the fifth-largest economy in the world among nations.
In the mid and late twentieth century, a number of race-related incidents occurred in the state. Tensions between police and African Americans, combined with unemployment and poverty in inner cities, led to violent riots, such as the 1965 Watts riots and 1992 Rodney King riots. California was also the hub of the Black Panther Party, a group known for arming African Americans to defend against racial injustice and for organizing free breakfast programs for schoolchildren. Additionally, Mexican, Filipino, and other migrant farm workers rallied in the state around Cesar Chavez for better pay in the 1960s and 1970s.
During the 20th century, two great disasters happened in California. The 1906 San Francisco earthquake and 1928 St. Francis Dam flood remain the deadliest in U.S. history.
Although air pollution problems have been reduced, health problems associated with pollution have continued. The brown haze known as "smog" has been substantially abated after the passage of federal and state restrictions on automobile exhaust.
An energy crisis in 2001 led to rolling blackouts, soaring power rates, and the importation of electricity from neighboring states. Southern California Edison and Pacific Gas and Electric Company came under heavy criticism.
Housing prices in urban areas continued to increase; a modest home which in the 1960s cost $25,000 would cost half a million dollars or more in urban areas by 2005. More people commuted longer hours to afford a home in more rural areas while earning larger salaries in the urban areas. Speculators bought houses they never intended to live in, expecting to make a huge profit in a matter of months, then rolling it over by buying more properties. Mortgage companies were compliant, as everyone assumed the prices would keep rising. The bubble burst in 2007–8 as housing prices began to crash and the boom years ended. Hundreds of billions in property values vanished and foreclosures soared as many financial institutions and investors were badly hurt.
In the twenty-first century, droughts and frequent wildfires attributed to climate change have occurred in the state. From 2011 to 2017, a persistent drought was the worst in its recorded history. The 2018 wildfire season was the state's deadliest and most destructive, most notably Camp Fire.
One of the first confirmed COVID-19 cases in the United States occurred in California and was confirmed on January 26, 2020; all of the early confirmed cases were persons who had recently travelled to China, as testing was restricted to this group. On January 29, 2020, as disease containment protocols were still being developed, the U.S. Department of State evacuated 195 persons from Wuhan, China aboard a chartered flight to March Air Reserve Base in Riverside County. On February 5, 2020, the U.S. evacuated 345 more citizens from Hubei Province to two military bases in California, Travis Air Force Base in Solano County and Marine Corps Air Station Miramar, San Diego, where they were quarantined for 14 days. A state of emergency was declared in the state on March 4, 2020, and as of February 24, 2021, remained in effect. A mandatory statewide stay-at-home order was issued on March 19, 2020, in response to the growing outbreak, and was ended on January 25, 2021, allowing citizens to return to normal life. On April 6, 2021, the state announced plans to fully reopen the economy by June 15, 2021.
In 2019, the 40th governor of California, Gavin Newsom formally apologized to the indigenous peoples of California for the California genocide: "Genocide. No other way to describe it, and that's the way it needs to be described in the history books." Newsom further acknowledged that "the actions of the state 150 years ago have ongoing ramifications even today." Cultural and language revitalization efforts among indigenous Californians have progressed among several tribes as of 2022. Some land returns to indigenous stewardship have occurred throughout California. In 2022, the largest dam removal and river restoration project in US history was announced for the Klamath River as a win for California tribes.
Geography
Covering an area of , California is the third-largest state in the United States in area, after Alaska and Texas. California is one of the most geographically diverse states in the union and is often geographically bisected into two regions, Southern California, comprising the ten southernmost counties, and Northern California, comprising the 48 northernmost counties. It is bordered by Oregon to the north, Nevada to the east and northeast, Arizona to the southeast, the Pacific Ocean to the west and shares an international border with the Mexican state of Baja California to the south (with which it makes up part of The Californias region of North America, alongside Baja California Sur).
In the middle of the state lies the California Central Valley, bounded by the Sierra Nevada in the east, the coastal mountain ranges in the west, the Cascade Range to the north and by the Tehachapi Mountains in the south. The Central Valley is California's productive agricultural heartland.
Divided in two by the Sacramento-San Joaquin River Delta, the northern portion, the Sacramento Valley serves as the watershed of the Sacramento River, while the southern portion, the San Joaquin Valley is the watershed for the San Joaquin River. Both valleys derive their names from the rivers that flow through them. With dredging, the Sacramento and the San Joaquin Rivers have remained deep enough for several inland cities to be seaports.
The Sacramento-San Joaquin River Delta is a critical water supply hub for the state. Water is diverted from the delta through an extensive network of pumps and canals that traverse nearly the length of the state, to the Central Valley, the State Water Project, and other uses. Water from the Delta provides drinking water for nearly 23 million people, almost two-thirds of the state's population, as well as water for farmers on the west side of the San Joaquin Valley.
Suisun Bay lies at the confluence of the Sacramento and San Joaquin Rivers. The water is drained by the Carquinez Strait, which flows into San Pablo Bay, a northern extension of San Francisco Bay, which then connects to the Pacific Ocean via the Golden Gate strait.
The Channel Islands are located off the Southern coast, while the Farallon Islands lie west of San Francisco.
The Sierra Nevada (Spanish for "snowy range") includes the highest peak in the contiguous 48 states, Mount Whitney, at . The range embraces Yosemite Valley, famous for its glacially carved domes, and Sequoia National Park, home to the giant sequoia trees, the largest living organisms on Earth, and the deep freshwater lake, Lake Tahoe, the largest lake in the state by volume.
To the east of the Sierra Nevada are Owens Valley and Mono Lake, an essential migratory bird habitat. In the western part of the state is Clear Lake, the largest freshwater lake by area entirely in California. Although Lake Tahoe is larger, it is divided by the California/Nevada border. The Sierra Nevada falls to Arctic temperatures in winter and has several dozen small glaciers, including Palisade Glacier, the southernmost glacier in the United States.
The Tulare Lake was the largest freshwater lake west of the Mississippi River. A remnant of Pleistocene-era Lake Corcoran, Tulare Lake dried up by the early 20th century after its tributary rivers were diverted for agricultural irrigation and municipal water uses.
About 45 percent of the state's total surface area is covered by forests, and California's diversity of pine species is unmatched by any other state. California contains more forestland than any other state except Alaska. Many of the trees in the California White Mountains are the oldest in the world; an individual bristlecone pine is over 5,000 years old.
In the south is a large inland salt lake, the Salton Sea. The south-central desert is called the Mojave; to the northeast of the Mojave lies Death Valley, which contains the lowest and hottest place in North America, the Badwater Basin at . The horizontal distance from the bottom of Death Valley to the top of Mount Whitney is less than . Indeed, almost all of southeastern California is arid, hot desert, with routine extreme high temperatures during the summer. The southeastern border of California with Arizona is entirely formed by the Colorado River, from which the southern part of the state gets about half of its water.
A majority of California's cities are located in either the San Francisco Bay Area or the Sacramento metropolitan area in Northern California; or the Los Angeles area, the Inland Empire, or the San Diego metropolitan area in Southern California. The Los Angeles Area, the Bay Area, and the San Diego metropolitan area are among several major metropolitan areas along the California coast.
As part of the Ring of Fire, California is subject to tsunamis, floods, droughts, Santa Ana winds, wildfires, and landslides on steep terrain; California also has several volcanoes. It has many earthquakes due to several faults running through the state, the largest being the San Andreas Fault. About 37,000 earthquakes are recorded each year; most are too small to be felt, but two-thirds of the human risk from earthquakes lies in California.
Climate
Most of the state has a Mediterranean climate. The cool California Current offshore often creates summer fog near the coast. Farther inland, there are colder winters and hotter summers. The maritime moderation results in the shoreline summertime temperatures of Los Angeles and San Francisco being the coolest of all major metropolitan areas of the United States and uniquely cool compared to areas on the same latitude in the interior and on the east coast of the North American continent. Even the San Diego shoreline bordering Mexico is cooler in summer than most areas in the contiguous United States. Just a few miles inland, summer temperature extremes are significantly higher, with downtown Los Angeles being several degrees warmer than at the coast. The same microclimate phenomenon is seen in the climate of the Bay Area, where areas sheltered from the ocean experience significantly hotter summers and colder winters in contrast with nearby areas closer to the ocean.
Northern parts of the state have more rain than the south. California's mountain ranges also influence the climate: some of the rainiest parts of the state are west-facing mountain slopes. Coastal northwestern California has a temperate climate, and the Central Valley has a Mediterranean climate but with greater temperature extremes than the coast. The high mountains, including the Sierra Nevada, have an alpine climate with snow in winter and mild to moderate heat in summer.
California's mountains produce rain shadows on the eastern side, creating extensive deserts. The higher elevation deserts of eastern California have hot summers and cold winters, while the low deserts east of the Southern California mountains have hot summers and nearly frostless mild winters. Death Valley, a desert with large expanses below sea level, is considered the hottest location in the world; the highest temperature in the world, , was recorded there on July 10, 1913. The lowest temperature in California was on January 20, 1937, in Boca.
The table below lists average temperatures for January and August in a selection of places throughout the state; some highly populated and some not. This includes the relatively cool summers of the Humboldt Bay region around Eureka, the extreme heat of Death Valley, and the mountain climate of Mammoth in the Sierra Nevada.
The wide range of climates leads to a high demand for water. Over time, droughts have been increasing due to climate change and overextraction, becoming less seasonal and more year-round, further straining California's electricity supply and water security and having an impact on California business, industry, and agriculture.
In 2022, a new state program was created in collaboration with Indigenous peoples of California to revive the practice of controlled burns as a way of clearing excessive forest debris and making landscapes more resilient to wildfires. Native American use of fire in ecosystem management was outlawed in 1911 but has since been recognized as a valuable practice.
Ecology
California is one of the ecologically richest and most diverse parts of the world, and includes some of the most endangered ecological communities. California is part of the Nearctic realm and spans a number of terrestrial ecoregions.
California's large number of endemic species includes relict species, which have died out elsewhere, such as the Catalina ironwood (Lyonothamnus floribundus). Many other endemics, such as the California lilac (Ceanothus), originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions. Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat.
Flora and fauna
California boasts several superlatives in its collection of flora: the largest trees, the tallest trees, and the oldest trees. California's native grasses are perennial plants, and there are close to a hundred succulent species native to the state. After European contact, the native grasses were generally replaced by invasive species of European annual grasses, and in modern times California's hills turn a characteristic golden-brown in summer.
Because California has the greatest diversity of climate and terrain, the state has six life zones: the lower Sonoran zone (desert); the upper Sonoran zone (foothill regions and some coastal lands); the transition zone (coastal areas and moist northeastern counties); and the Canadian, Hudsonian, and Arctic zones, which comprise the state's highest elevations.
Plant life in the dry climate of the lower Sonoran zone contains a diversity of native cactus, mesquite, and paloverde. The Joshua tree is found in the Mojave Desert. Flowering plants include the dwarf desert poppy and a variety of asters. Fremont cottonwood and valley oak thrive in the Central Valley. The upper Sonoran zone includes the chaparral belt, characterized by forests of small shrubs, stunted trees, and herbaceous plants. Nemophila, mint, Phacelia, Viola, and the California poppy (Eschscholzia californica, the state flower) also flourish in this zone, along with the lupine, more species of which occur here than anywhere else in the world.
The transition zone includes most of California's forests, with the redwood (Sequoia sempervirens) and the "big tree" or giant sequoia (Sequoiadendron giganteum), among the oldest living things on earth (some are said to have lived at least 4,000 years). Tanbark oak, California laurel, sugar pine, madrona, broad-leaved maple, and Douglas-fir also grow here. Forest floors are covered with swordfern, alumroot, barrenwort, and trillium, and there are thickets of huckleberry, azalea, elder, and wild currant. Characteristic wild flowers include varieties of mariposa, tulip, and tiger and leopard lilies.
The high elevations of the Canadian zone allow the Jeffrey pine, red fir, and lodgepole pine to thrive. Brushy areas are abundant with dwarf manzanita and ceanothus; the unique Sierra puffball is also found here. Right below the timberline, in the Hudsonian zone, the whitebark, foxtail, and silver pines grow. Above the timberline lies the Arctic zone, a treeless region whose flora includes a number of wildflowers, such as Sierra primrose, yellow columbine, alpine buttercup, and alpine shooting star.
Palm trees are a well-known feature of California, particularly in Southern California and Los Angeles; many species have been imported, though Washingtonia filifera (commonly known as the California fan palm) is native to the state, mainly growing in the Colorado Desert oases. Other common plants that have been introduced to the state include the eucalyptus, acacia, pepper tree, geranium, and Scotch broom. The species that are federally classified as endangered are the Contra Costa wallflower, Antioch Dunes evening primrose, Solano grass, San Clemente Island larkspur, salt marsh bird's beak, McDonald's rock-cress, and Santa Barbara Island liveforever. In all, 85 plant species were listed as threatened or endangered.
In the deserts of the lower Sonoran zone, the mammals include the jackrabbit, kangaroo rat, squirrel, and opossum. Common birds include the owl, roadrunner, cactus wren, and various species of hawk. The area's reptilian life includes the sidewinder viper, desert tortoise, and horned toad. The upper Sonoran zone boasts mammals such as the antelope, brown-footed woodrat, and ring-tailed cat. Birds unique to this zone are the California thrasher, bushtit, and California condor.
In the transition zone, there are Columbian black-tailed deer, black bears, gray foxes, cougars, bobcats, and Roosevelt elk. Reptiles such as garter snakes and rattlesnakes inhabit the zone, and amphibians such as the water puppy and redwood salamander are common. Birds such as the kingfisher, chickadee, towhee, and hummingbird thrive here as well.
Mammals of the Canadian zone include the mountain weasel, snowshoe hare, and several species of chipmunks. Conspicuous birds include the blue-fronted jay, mountain chickadee, hermit thrush, American dipper, and Townsend's solitaire. As one ascends into the Hudsonian zone, birds become scarcer. While the gray-crowned rosy finch is the only bird native to the high Arctic region, other bird species, such as Anna's hummingbird and Clark's nutcracker, are also found there. Principal mammals found in this region include the Sierra coney, white-tailed jackrabbit, and bighorn sheep. The bighorn sheep is listed as endangered by the U.S. Fish and Wildlife Service. The fauna found throughout several zones are the mule deer, coyote, mountain lion, northern flicker, and several species of hawk and sparrow.
Aquatic life in California thrives, from the state's mountain lakes and streams to the rocky Pacific coastline. Numerous trout species are found, among them rainbow, golden, and cutthroat. Migratory species of salmon are common as well. Deep-sea life forms include sea bass, yellowfin tuna, barracuda, and several types of whale. Native to the cliffs of northern California are seals, sea lions, and many types of shorebirds, including migratory species.
At one federal count, 118 California animals were on the endangered list and 181 plants were listed as endangered or threatened. Endangered animals include the San Joaquin kit fox, Point Arena mountain beaver, Pacific pocket mouse, salt marsh harvest mouse, Morro Bay kangaroo rat (and five other species of kangaroo rat), Amargosa vole, California least tern, California condor, loggerhead shrike, San Clemente sage sparrow, San Francisco garter snake, five species of salamander, three species of chub, and two species of pupfish. Eleven butterflies are also endangered, and two more are federally listed as threatened. Among threatened animals are the coastal California gnatcatcher, Paiute cutthroat trout, southern sea otter, and northern spotted owl. California also contains a number of National Wildlife Refuges. Another federal count listed 123 California animals and 178 species of California plants as either endangered or threatened.
Rivers
The most prominent river system within California is formed by the Sacramento River and San Joaquin River, which are fed mostly by snowmelt from the west slope of the Sierra Nevada, and respectively drain the north and south halves of the Central Valley. The two rivers join in the Sacramento–San Joaquin River Delta, flowing into the Pacific Ocean through San Francisco Bay. Many major tributaries feed into the Sacramento–San Joaquin system, including the Pit River, Feather River and Tuolumne River.
The Klamath and Trinity Rivers drain a large area in far northwestern California. The Eel River and Salinas River each drain portions of the California coast, north and south of San Francisco Bay, respectively. The Mojave River is the primary watercourse in the Mojave Desert, and the Santa Ana River drains much of the Transverse Ranges as it bisects Southern California. The Colorado River forms the state's southeast border with Arizona.
Most of California's major rivers are dammed as part of two massive water projects: the Central Valley Project, providing water for agriculture in the Central Valley, and the California State Water Project diverting water from Northern to Southern California. The state's coasts, rivers, and other bodies of water are regulated by the California Coastal Commission.
Regions
California is traditionally separated into Northern California and Southern California, divided by a border which runs across the state, separating the northern 48 counties from the southern 10 counties. Despite the persistence of the north–south divide, California is more precisely divided into many regions, several of which stretch across it.
Cities and towns
The state has 482 incorporated cities and towns, of which 460 are cities and 22 are towns. Under California law, the terms "city" and "town" are explicitly interchangeable; the name of an incorporated municipality in the state can either be "City of (Name)" or "Town of (Name)".
Sacramento became California's first incorporated city on February 27, 1850. San Jose, San Diego, and Benicia tied for California's second incorporated city, each receiving incorporation on March 27, 1850. Jurupa Valley became the state's most recent and 482nd incorporated municipality, on July 1, 2011.
The majority of these cities and towns are within one of five metropolitan areas: the Los Angeles Metropolitan Area, the San Francisco Bay Area, the Riverside-San Bernardino Area, the San Diego metropolitan area, or the Sacramento metropolitan area.
Demographics
Population
Nearly one out of every eight Americans lives in California. The United States Census Bureau reported that the population of California was 39,538,223 on April 1, 2020, a 6.13% increase since the 2010 census. The estimated state population in 2022 was 39.22 million. For over a century (1900–2020), California experienced steady population growth, adding an average of more than 300,000 people per year from 1940 onward. California's rate of growth began to slow by the 1990s, although it continued to experience population growth in the first two decades of the 21st century. The state experienced population declines in 2020 and 2021, attributable to declining birth rates, COVID-19 pandemic deaths, and less internal migration from other states to California.
The Greater Los Angeles Area is the second-largest metropolitan area in the United States, while Los Angeles is the second-largest city in the U.S. San Francisco is the most densely populated city in California and one of the most densely populated cities in the U.S. Los Angeles County has held the title of most populous U.S. county for decades, and it alone is more populous than 42 U.S. states. Including Los Angeles, four of the top 20 most populous cities in the U.S. are in California: Los Angeles (2nd), San Diego (8th), San Jose (10th), and San Francisco (17th). The center of population of California is located four miles west-southwest of the city of Shafter, Kern County.
As of 2019, California ranked second among states by life expectancy, with a life expectancy of 80.9 years.
Starting in 2010, for the first time since the California Gold Rush, California-born residents made up the majority of the state's population. Along with the rest of the United States, California's immigration pattern also shifted over the course of the late 2000s to early 2010s. Immigration from Latin American countries dropped significantly, with most immigrants now coming from Asia. In total for 2011, there were 277,304 immigrants; 57 percent came from Asian countries versus 22 percent from Latin American countries. Net immigration from Mexico, previously the most common country of origin for new immigrants, has dropped to zero or below, as more Mexican nationals are departing for their home country than are immigrating.
The state's population of undocumented immigrants has been shrinking in recent years, due to increased enforcement and decreased job opportunities for lower-skilled workers. The number of migrants arrested attempting to cross the Mexican border in the Southwest decreased from a high of 1.1 million in 2005 to 367,000 in 2011. Despite these recent trends, undocumented immigrants constituted an estimated 7.3 percent of the state's population, the third-highest percentage of any state in the country, totaling nearly 2.6 million. They tended to be concentrated in Los Angeles, Monterey, San Benito, Imperial, and Napa counties, the latter four of which have significant agricultural industries that depend on manual labor. More than half of the state's undocumented immigrants originate from Mexico. The state of California and some California cities, including Los Angeles, Oakland, and San Francisco, have adopted sanctuary policies.
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 171,521 homeless people in California.
Race and ethnicity
According to the United States Census Bureau, in 2018 the population self-identified as (alone or in combination): 72.1% White (including Hispanic Whites), 36.8% non-Hispanic White, 15.3% Asian, 6.5% Black or African American, 1.6% Native American and Alaska Native, 0.5% Native Hawaiian or Pacific Islander, and 3.9% two or more races.
By ethnicity, in 2018 the population was 60.7% non-Hispanic (of any race) and 39.3% Hispanic or Latino (of any race). Hispanics are the largest single ethnic group in California. Non-Hispanic whites constituted 36.8% of the state's population. Californios, the Hispanic residents native to California who make up the Spanish-speaking community that has existed in California since 1542, are of varying Mexican American/Chicano, Criollo Spaniard, and Mestizo origin.
Some 75.1% of California's population younger than age 1 were minorities, meaning they had at least one parent who was not non-Hispanic white (white Hispanics are counted as minorities).
In terms of total numbers, California has the largest population of White Americans in the United States, an estimated 22,200,000 residents. The state has the 5th largest population of African Americans in the United States, an estimated 2,250,000 residents. California's Asian American population is estimated at 4.4 million, constituting a third of the nation's total. California's Native American population of 285,000 is the most of any state.
According to estimates from 2011, California has the largest minority population in the United States by numbers, making up 60% of the state population. Over the past 25 years, the population of non-Hispanic whites has declined, while Hispanic and Asian populations have grown. Between 1970 and 2011, non-Hispanic whites declined from 80% of the state's population to 40%, while Hispanics grew from 32% in 2000 to 38% in 2011. It is currently projected that Hispanics will rise to 49% of the population by 2060, primarily due to domestic births rather than immigration. With the decline of immigration from Latin America, Asian Americans now constitute the fastest-growing racial/ethnic group in California; this growth is primarily driven by immigration from China, India, and the Philippines.
Most of California's immigrant population was born in Mexico (3.9 million), the Philippines (825,200), China (768,400), India (556,500), and Vietnam (502,600).
California has the largest multiracial population and the highest rate of interracial marriage in the United States.
Languages
English serves as California's de jure and de facto official language. According to the 2021 American Community Survey conducted by the United States Census Bureau, 56.08% (20,763,638) of California residents age 5 and older spoke only English at home, while 43.92% spoke another language at home. 60.35% of people who speak a language other than English at home are able to speak English "well" or "very well", with this figure varying significantly across the different linguistic groups. Like most U.S. states (32 out of 50), California enshrines English as its official language in law, and has done so since the passage of Proposition 63 by California voters in 1986. Various government agencies do, and are often required to, furnish documents in the various languages needed to reach their intended audiences.
Spanish is the most commonly spoken language in California after English, spoken by 28.18% (10,434,308) of the population (in 2021). The Spanish language has been spoken in California since 1542 and is deeply intertwined with California's cultural landscape and history. Spanish was the official administrative language of California through the Spanish and Mexican eras, until 1848. Following the U.S. conquest of California and the Treaty of Guadalupe Hidalgo, the U.S. government guaranteed the rights of Spanish-speaking Californians. The first Constitution of California was written in both languages at the Monterey Constitutional Convention of 1849; it protected the rights of Spanish speakers to use their language in government proceedings and mandated that all government documents be published in both English and Spanish.
Despite the initial recognition of Spanish by early American governments in California, the revised 1879 constitution stripped the rights of Spanish speakers and the official status of Spanish. The growth of the English-only movement by the mid-20th century led to the passage of 1986 California Proposition 63, which enshrined English as the only official language in California and ended Spanish language instruction in schools. 2016 California Proposition 58 reversed the prohibition on bilingual education, though there are still many barriers to the proliferation of Spanish bilingual education, including a shortage of teachers and lack of funding. The government of California has since made efforts to promote Spanish language access and bilingual education, as have private educational institutions in California. Many businesses in California promote the usage of Spanish by their employees, to better serve both California's Hispanic population and the larger Spanish-speaking world.
California has historically been one of the most linguistically diverse areas in the world, with more than 70 indigenous languages derived from 64 root languages in six language families. A survey conducted between 2007 and 2009 identified 23 different indigenous languages among California farmworkers. All of California's indigenous languages are endangered, although there are now efforts toward language revitalization. California has the highest concentration nationwide of Chinese, Vietnamese and Punjabi speakers.
As a result of the state's increasing diversity and migration from other areas across the country and around the globe, linguists have, since the late 20th century, noticed a noteworthy set of emerging characteristics of spoken American English in California. This variety, known as California English, has a vowel shift and several other phonological processes that differ from varieties of American English used in other regions of the United States.
Religion
The largest religious denominations by number of adherents as a percentage of California's population in 2014 were the Catholic Church with 28 percent, Evangelical Protestants with 20 percent, and Mainline Protestants with 10 percent. Together, all kinds of Protestants accounted for 32 percent. Those unaffiliated with any religion represented 27 percent of the population. The breakdown of other religions is 1% Muslim, 2% Hindu and 2% Buddhist. This is a change from 2008, when the population identified their religion with the Catholic Church with 31 percent; Evangelical Protestants with 18 percent; and Mainline Protestants with 14 percent. In 2008, those unaffiliated with any religion represented 21 percent of the population. The breakdown of other religions in 2008 was 0.5% Muslim, 1% Hindu and 2% Buddhist. The American Jewish Year Book placed the total Jewish population of California at about 1,194,190 in 2006. According to the Association of Religion Data Archives (ARDA) the largest denominations by adherents in 2010 were the Catholic Church with 10,233,334; The Church of Jesus Christ of Latter-day Saints with 763,818; and the Southern Baptist Convention with 489,953.
The first priests to come to California were Catholic missionaries from Spain. Catholics founded 21 missions along the California coast, as well as the cities of Los Angeles and San Francisco. California continues to have a large Catholic population due to the large numbers of Mexicans and Central Americans living within its borders. California has twelve Catholic dioceses, including two archdioceses, the Archdiocese of Los Angeles and the Archdiocese of San Francisco; the former is the largest archdiocese in the United States.
A Pew Research Center survey revealed that California is somewhat less religious than the rest of the states: 62 percent of Californians say they are "absolutely certain" of their belief in God, while in the nation 71 percent say so. The survey also revealed 48 percent of Californians say religion is "very important", compared to 56 percent nationally.
Culture
The culture of California is a Western culture and most clearly has its modern roots in the culture of the United States, but also, historically, many Hispanic Californio and Mexican influences. As a border and coastal state, California culture has been greatly influenced by several large immigrant populations, especially those from Latin America and Asia.
California has long been a subject of interest in the public mind and has often been promoted by its boosters as a kind of paradise. In the early 20th century, fueled by the efforts of state and local boosters, many Americans saw the Golden State as an ideal resort destination, sunny and dry all year round with easy access to the ocean and mountains. In the 1960s, popular music groups such as the Beach Boys promoted the image of Californians as laid-back, tanned beach-goers.
The California Gold Rush of the 1850s is still seen as a symbol of California's economic style, which tends to generate technology, social, entertainment, and economic fads and booms and related busts.
Media and entertainment
Hollywood and the rest of the Los Angeles area form a major global center for entertainment, with the U.S. film industry's "Big Five" major film studios (Columbia, Disney, Paramount, Universal, and Warner Bros.), as well as many minor film studios, based in or around the area. Many animation studios are also headquartered in the state.
The four major American commercial broadcast television networks (ABC, CBS, NBC, and Fox), as well as other networks, all have production facilities and offices in the state. All four major commercial broadcast networks, plus the two major Spanish-language networks (Telemundo and Univision), each have at least three owned-and-operated TV stations in California, including at least one in Los Angeles and at least one in San Francisco.
One of the oldest radio stations in the United States still in existence, KCBS (AM) in the San Francisco Bay Area, was founded in 1909. Universal Music Group, one of the "Big Four" record labels, is based in Santa Monica, while Warner Records is based in Los Angeles. Many independent record labels, such as Mind of a Genius Records, are also headquartered in the state. California is also the birthplace of several international music genres, including the Bakersfield sound, Bay Area thrash metal, alternative rock, g-funk, nu metal, glam metal, thrash metal, psychedelic rock, stoner rock, punk rock, hardcore punk, metalcore, pop punk, surf music, third wave ska, west coast hip hop, west coast jazz, jazz rap, and many other genres. Other genres such as pop rock, indie rock, hard rock, hip hop, pop, rock, rockabilly, country, heavy metal, grunge, new wave and disco were popularized in the state. In addition, many British bands, such as Led Zeppelin, Deep Purple, Black Sabbath, and the Rolling Stones settled in the state after becoming internationally famous.
As the home of Silicon Valley, the Bay Area is the headquarters of several prominent internet media, social media, and other technology companies. Three of the "Big Five" technology companies (Apple, Meta, and Google) are based in the area, as are services such as Netflix, Pandora Radio, Twitter, Yahoo!, and YouTube. Other prominent companies headquartered there include HP Inc. and Intel. Microsoft and Amazon also have offices in the area.
California, particularly Southern California, is considered the birthplace of modern car culture.
Several fast food, fast casual, and casual dining chains were also founded in California, including some that have since expanded internationally, such as California Pizza Kitchen, Denny's, IHOP, McDonald's, Panda Express, and Taco Bell.
Sports
California has nineteen major professional sports league franchises, far more than any other state. The San Francisco Bay Area has six major league teams spread across its three major cities: San Francisco, San Jose, and Oakland, while the Greater Los Angeles Area is home to ten major league franchises. San Diego and Sacramento each have one major league team. The NFL Super Bowl has been hosted in California 12 times at five different stadiums: Los Angeles Memorial Coliseum, the Rose Bowl, Stanford Stadium, Levi's Stadium, and San Diego's Qualcomm Stadium. A thirteenth, Super Bowl LVI, was held at SoFi Stadium in Inglewood on February 13, 2022.
California has long had many respected collegiate sports programs. California is home to the oldest college bowl game, the annual Rose Bowl, among others.
The NFL has three teams in the state: the Los Angeles Rams, Los Angeles Chargers, and San Francisco 49ers.
MLB has five teams in the state: the San Francisco Giants, Oakland Athletics, Los Angeles Dodgers, Los Angeles Angels, and San Diego Padres.
The NBA has four teams in the state: the Golden State Warriors, Los Angeles Clippers, Los Angeles Lakers, and Sacramento Kings. Additionally, the WNBA also has one team in the state: the Los Angeles Sparks.
The NHL has three teams in the state: the Anaheim Ducks, Los Angeles Kings, and San Jose Sharks.
MLS has three teams in the state: the Los Angeles Galaxy, San Jose Earthquakes, and Los Angeles Football Club.
MLR has one team in the state: the San Diego Legion.
California is the only U.S. state to have hosted both the Summer and Winter Olympics. The 1932 and 1984 summer games were held in Los Angeles. Squaw Valley Ski Resort (now Palisades Tahoe) in the Lake Tahoe region hosted the 1960 Winter Olympics. Los Angeles will host the 2028 Summer Olympics, marking the fourth time that California will have hosted the Olympic Games. Multiple games during the 1994 FIFA World Cup took place in California, with the Rose Bowl hosting eight matches (including the final), while Stanford Stadium hosted six matches.
In addition to the Olympic games, California also hosts the California State Games.
Many sports, such as skateboarding and snowboarding, were developed in California, while others, like surfing, volleyball, beach soccer, and skiing, were popularized in the state.
Other sports popular in the state include golf, rodeo, tennis, mountain climbing, marathon running, horse racing, bowling, mixed martial arts, boxing, and motorsports, especially NASCAR and Formula One.
Education
California has the most school students in the country, with over 6.2 million in the 2005–06 school year; this gives California more students in school than 36 states have in total population, along with one of the highest projected enrollments in the nation.
Public secondary education consists of high schools that teach elective courses in trades, languages, and liberal arts with tracks for gifted, college-bound and industrial arts students. California's public educational system is supported by a unique constitutional amendment that requires a minimum annual funding level for grades K–12 and community colleges that grows with the economy and student enrollment figures.
In 2016, California's K–12 public school per-pupil spending was ranked 22nd in the nation ($11,500 per student vs. $11,800 for the U.S. average).
For 2012, California's K–12 public schools ranked 48th in the number of employees per student, at 0.102 (the U.S. average was 0.137), while paying the 7th most per employee, $49,000 (the U.S. average was $39,000).
A 2007 study concluded that California's public school system was "broken" in that it suffered from overregulation.
Higher education
California public postsecondary education is organized into three separate systems:
The state's public research university system is the University of California (UC). As of fall 2011, the University of California had a combined student body of 234,464 students. There are ten UC campuses; nine are general campuses offering both undergraduate and graduate programs which culminate in the award of bachelor's degrees, master's degrees, and doctorates; there is one specialized campus, UC San Francisco, which is entirely dedicated to graduate education in health care, and is home to the UCSF Medical Center, the highest-ranked hospital in California. The system was originally intended to accept the top one-eighth of California high school students, but several of the campuses have become even more selective. The UC system historically held exclusive authority to award the doctorate, but this has since changed and CSU now has limited statutory authorization to award a handful of types of doctoral degrees independently of UC.
The California State University (CSU) system has almost 430,000 students. The CSU (which takes the definite article in its abbreviated form, while UC does not) was originally intended to accept the top one-third of California high school students, but several of the campuses have become much more selective. The CSU was originally authorized to award only bachelor's and master's degrees, and could award the doctorate only as part of joint programs with UC or private universities. Since then, CSU has been granted the authority to independently award several doctoral degrees (in specific academic fields that do not intrude upon UC's traditional jurisdiction).
The California Community Colleges system provides lower-division coursework culminating in the associate degree, as well as basic skills and workforce training culminating in various kinds of certificates. (Fifteen California community colleges now award four-year bachelor's degrees in disciplines which are in high demand in their geographical area.) It is the largest network of higher education in the U.S., composed of 112 colleges serving a student population of over 2.6 million.
California is also home to notable private universities such as Stanford University, the California Institute of Technology (Caltech), the University of Southern California, the Claremont Colleges, Santa Clara University, Loyola Marymount University, the University of San Diego, the University of San Francisco, Chapman University, Pepperdine University, Occidental College, and University of the Pacific, among numerous other private colleges and universities, including many religious and special-purpose institutions. California has a particularly high density of arts colleges, including the California College of the Arts, California Institute of the Arts, San Francisco Art Institute, Art Center College of Design, and Academy of Art University, among others.
Economy
California's economy ranks among the largest in the world. The gross state product (GSP) was $3.6 trillion ($92,190 per capita), the largest in the United States. California is responsible for one-seventh of the nation's gross domestic product (GDP). California's nominal GDP is larger than that of all but four countries (the United States, China, Japan, and Germany). In terms of purchasing power parity (PPP), it is larger than that of all but eight countries (the United States, China, India, Japan, Germany, Russia, Brazil, and Indonesia). California's economy is larger than that of Africa or Australia and is almost as large as that of South America. The state recorded total non-farm employment of 16,677,800 among 966,224 employer establishments.
As the largest and second-largest U.S. ports respectively, the Port of Los Angeles and the Port of Long Beach in Southern California collectively play a pivotal role in the global supply chain, together hauling in about 40% of all imports to the United States by TEU volume. The Port of Oakland and Port of Hueneme are the 10th and 26th largest seaports in the U.S., respectively, by number of TEUs handled.
The five largest sectors of employment in California are trade, transportation, and utilities; government; professional and business services; education and health services; and leisure and hospitality. In output, the five largest sectors are financial services, followed by trade, transportation, and utilities; education and health services; government; and manufacturing. California has an unemployment rate of 3.9%.
California's economy is dependent on trade, and internationally related commerce accounts for about one-quarter of the state's economy. In 2008, California exported $144 billion worth of goods, up from $134 billion in 2007 and $127 billion in 2006.
Computers and electronic products are California's top export, accounting for 42 percent of all the state's exports in 2008.
Agriculture
Agriculture is an important sector in California's economy. According to the USDA in 2011, the three largest California agricultural products by value were milk and cream, shelled almonds, and grapes. Farming-related sales more than quadrupled over the past three decades, from $7.3 billion in 1974 to nearly $31 billion in 2004. This increase has occurred despite a 15 percent decline in acreage devoted to farming during the period and chronic instability in the water supply. Factors contributing to the growth in sales per acre include more intensive use of active farmlands and technological improvements in crop production. In 2008, California's 81,500 farms and ranches generated $36.2 billion in products revenue. In 2011, that number grew to $43.5 billion. The agriculture sector accounts for two percent of the state's GDP and employs around three percent of its total workforce.
Income
Per capita GDP in 2007 was $38,956, ranking eleventh in the nation. Per capita income varies widely by geographic region and profession. The Central Valley is the most impoverished, with migrant farm workers making less than minimum wage. According to a 2005 report by the Congressional Research Service, the San Joaquin Valley was characterized as one of the most economically depressed regions in the United States, on par with the region of Appalachia.
Using the supplemental poverty measure, California has a poverty rate of 23.5%, the highest of any state in the country. However, using the official measure the poverty rate was only 13.3% as of 2017. Many coastal cities include some of the wealthiest per-capita areas in the United States. The high-technology sectors in Northern California, specifically Silicon Valley, in Santa Clara and San Mateo counties, have emerged from the economic downturn caused by the dot-com bust.
In 2019, there were 1,042,027 millionaire households in the state, more than any other state in the nation. In 2010, California residents were ranked first among the states with the best average credit score of 754.
State finances
State spending increased from $56 billion in 1998 to $127 billion in 2011. California has the third-highest per capita spending on welfare among the states, as well as the highest total spending on welfare, at $6.67 billion. In January 2011, California's total debt was at least $265 billion. On June 27, 2013, Governor Jerry Brown signed a balanced budget (no deficit) for the state, its first in decades; however, the state's debt remains at $132 billion.
With the passage of Proposition 30 in 2012 and Proposition 55 in 2016, California now levies a 13.3% maximum marginal income tax rate with ten tax brackets, ranging from 1% at the bottom tax bracket of $0 annual individual income to 13.3% for annual individual income over $1,000,000 (though the top brackets are only temporary until Proposition 55 expires at the end of 2030). While Proposition 30 also enacted a minimum state sales tax of 7.5%, this sales tax increase was not extended by Proposition 55 and reverted to a previous minimum state sales tax rate of 7.25% in 2017. Local governments can and do levy additional sales taxes in addition to this minimum rate.
All real property is taxable annually; the ad valorem tax is based on the property's fair market value at the time of purchase or the value of new construction. Property tax increases are capped at 2% annually or the rate of inflation (whichever is lower), per Proposition 13.
Infrastructure
Energy
Because it is the most populous state in the United States, California is one of the country's largest users of energy. The state has extensive hydroelectric energy generation facilities; however, moving water is the single largest energy use in the state. Due to high energy rates, conservation mandates, mild weather in the largest population centers, and a strong environmental movement, its per capita energy use is one of the smallest of any state in the United States. Because of its high electricity demand, California imports more electricity than any other state, primarily hydroelectric power from states in the Pacific Northwest (via Path 15 and Path 66) and coal- and natural gas-fired production from the desert Southwest via Path 46.
The state's crude oil and natural gas deposits are located in the Central Valley and along the coast, including the large Midway-Sunset Oil Field. Natural gas-fired power plants typically account for more than one-half of state electricity generation.
As a result of the state's strong environmental movement, California has some of the most aggressive renewable energy goals in the United States. Senate Bill SB 1020 (the Clean Energy, Jobs and Affordability Act of 2022) commits the state to running its operations on clean, renewable energy resources by 2035, and SB 1203 also requires the state to achieve net-zero operations for all agencies. Currently, several solar power plants such as the Solar Energy Generating Systems facility are located in the Mojave Desert. California's wind farms include Altamont Pass, San Gorgonio Pass, and Tehachapi Pass. The Tehachapi area is also where the Tehachapi Energy Storage Project is located. Several dams across the state provide hydro-electric power. It would be possible to convert the total supply to 100% renewable energy, including heating, cooling and mobility, by 2050.
California has one major nuclear power plant (Diablo Canyon) in operation; the San Onofre nuclear plant was shut down in 2013. More than 1,700 tons of radioactive waste are stored at San Onofre, which sits on the coast where there is a record of past tsunamis. Since the late 1970s, voters have banned the approval of new nuclear power plants because of concerns over radioactive waste disposal. In addition, several cities such as Oakland, Berkeley, and Davis have declared themselves nuclear-free zones.
Transportation
Highways
California's vast terrain is connected by an extensive system of controlled-access highways ('freeways'), limited-access roads ('expressways'), and highways. California is known for its car culture, giving California's cities a reputation for severe traffic congestion. Construction and maintenance of state roads and statewide transportation planning are primarily the responsibility of the California Department of Transportation, nicknamed "Caltrans". The rapidly growing population of the state is straining all of its transportation networks, and California has some of the worst roads in the United States. The Reason Foundation's 19th Annual Report on the Performance of State Highway Systems ranked California's highways the third-worst of any state, behind only Alaska and Rhode Island.
The state has been a pioneer in road construction. One of the state's more visible landmarks, the Golden Gate Bridge, had the longest suspension bridge main span in the world, at 4,200 feet (1,280 m), between 1937 (when it opened) and 1964. With its orange paint and panoramic views of the bay, this highway bridge is a popular tourist attraction and also accommodates pedestrians and bicyclists. The San Francisco–Oakland Bay Bridge (often abbreviated the "Bay Bridge"), completed in 1936, carries about 280,000 vehicles per day on two decks. Its two sections meet at Yerba Buena Island through the world's largest-diameter transportation bore tunnel. The Arroyo Seco Parkway, connecting Los Angeles and Pasadena, opened in 1940 as the first freeway in the Western United States. It was later extended south to the Four Level Interchange in downtown Los Angeles, regarded as the first stack interchange ever built.
The California Highway Patrol is the largest statewide police agency in the United States by employment, with more than 10,000 employees. They are responsible for providing any police-sanctioned service to anyone on California's state-maintained highways and on state property.
By the end of 2021, 30,610,058 people in California held a California Department of Motor Vehicles-issued driver's license or state identification card, and there were 36,229,205 registered vehicles, including 25,643,076 automobiles, 853,368 motorcycles, 8,981,787 trucks and trailers, and 121,716 miscellaneous vehicles (including historical vehicles and farm equipment).
Air travel
Los Angeles International Airport (LAX), the 4th busiest airport in the world in 2018, and San Francisco International Airport (SFO), the 25th busiest airport in the world in 2018, are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state.
Railroads
Inter-city rail travel is provided by Amtrak California; its three routes, the Capitol Corridor, Pacific Surfliner, and San Joaquin, are funded by Caltrans. These services are the busiest intercity rail lines in the United States outside the Northeast Corridor, and ridership is continuing to set records. The routes are becoming an increasingly popular alternative to flying, especially between Los Angeles and San Francisco. Integrated subway and light rail networks are found in Los Angeles (Metro Rail) and San Francisco (MUNI Metro). Light rail systems are also found in San Jose (VTA), San Diego (San Diego Trolley), Sacramento (RT Light Rail), and Northern San Diego County (Sprinter). Furthermore, commuter rail networks serve the San Francisco Bay Area (ACE, BART, Caltrain, SMART), Greater Los Angeles (Metrolink), and San Diego County (Coaster).
The California High-Speed Rail Authority was authorized in 1996 by the state legislature to plan a California High-Speed Rail system to put before the voters. The plan it devised, 2008 California Proposition 1A, connecting all the major population centers in the state, was approved by the voters at the November 2008 general election. Construction of the first phase began in 2015, and the first operating segment is planned to enter service by the end of 2030. Planning and work on the rest of the system continue, with funding for its completion an ongoing issue. California's 2023 integrated passenger rail master plan includes a high-speed rail system.
Buses
Nearly all counties operate bus lines, and many cities operate their own city bus lines as well. Intercity bus travel is provided by Greyhound, Megabus, and Amtrak Thruway Motorcoach.
Water
California's interconnected water system is the world's largest, managing vast quantities of water each year and centered on six main systems of aqueducts and infrastructure projects. Water use and conservation in California is a politically divisive issue, as the state experiences periodic droughts and has to balance the demands of its large agricultural and urban sectors, especially in the arid southern portion of the state. The state's widespread redistribution of water also invites the frequent scorn of environmentalists.
The California Water Wars, a conflict between Los Angeles and the Owens Valley over water rights, is one of the most well-known examples of the struggle to secure adequate water supplies. Former California Governor Arnold Schwarzenegger said: "We've been in crisis for quite some time because we're now 38 million people and not anymore 18 million people like we were in the late 60s. So it developed into a battle between environmentalists and farmers and between the south and the north and between rural and urban. And everyone has been fighting for the last four decades about water."
Government and politics
State government
The capital city of California is Sacramento.
The state is organized into three branches of government—the executive branch consisting of the governor and the other independently elected constitutional officers; the legislative branch consisting of the Assembly and Senate; and the judicial branch consisting of the Supreme Court of California and lower courts. The state also allows ballot propositions: direct participation of the electorate by initiative, referendum, recall, and ratification. Before the passage of Proposition 14 in 2010, California allowed each political party to choose whether to have a closed primary or a primary where only party members and independents vote. After June 8, 2010, when Proposition 14 was approved, excepting only the United States president and county central committee offices, all candidates in the primary elections are listed on the ballot with their preferred party affiliation, but they are not the official nominee of that party. At the primary election, the two candidates with the top votes will advance to the general election regardless of party affiliation. If at a special primary election, one candidate receives more than 50% of all the votes cast, they are elected to fill the vacancy and no special general election will be held.
Executive branch
The California executive branch consists of the governor and seven other elected constitutional officers: lieutenant governor, attorney general, secretary of state, state controller, state treasurer, insurance commissioner, and state superintendent of public instruction. They serve four-year terms and may be re-elected only once.
The many California state agencies that are under the governor's cabinet are grouped together to form cabinet-level entities that are referred to by government officials as "superagencies". Those departments that are directly under the other independently elected officers work separately from these superagencies.
Legislative branch
The California State Legislature consists of a 40-member Senate and 80-member Assembly. Senators serve four-year terms and Assembly members two. Members of the Assembly are subject to term limits of six terms, and members of the Senate are subject to term limits of three terms.
Judicial branch
California's legal system is explicitly based upon English common law but carries many features from Spanish civil law, such as community property. California's prison population grew from 25,000 in 1980 to over 170,000 in 2007. Capital punishment is a legal form of punishment and the state has the largest "Death Row" population in the country (though Oklahoma and Texas are far more active in carrying out executions). California has performed 13 executions since 1976, with the last being in 2006.
California's judiciary system is the largest in the United States with a total of 1,600 judges (the federal system has only about 840). At the apex is the seven-member Supreme Court of California, while the California Courts of Appeal serve as the primary appellate courts and the California Superior Courts serve as the primary trial courts. Justices of the Supreme Court and Courts of Appeal are appointed by the governor, but are subject to retention by the electorate every 12 years.
The administration of the state's court system is controlled by the Judicial Council, composed of the chief justice of the California Supreme Court, 14 judicial officers, four representatives from the State Bar of California, and one member from each house of the state legislature.
In fiscal year 2020–2021, the state judiciary's 2,000 judicial officers and 18,000 judicial branch employees processed approximately 4.4 million cases.
Local government
California has an extensive system of local government that manages public functions throughout the state. Like most states, California is divided into counties, of which there are 58 (including San Francisco) covering the entire state. Most urbanized areas are incorporated as cities. School districts, which are independent of cities and counties, handle public education. Many other functions, such as fire protection and water supply, especially in unincorporated areas, are handled by special districts.
Counties
California is divided into 58 counties. Per Article 11, Section 1, of the Constitution of California, they are the legal subdivisions of the state. The county government provides countywide services such as law enforcement, jails, elections and voter registration, vital records, property assessment and records, tax collection, public health, health care, social services, libraries, flood control, fire protection, animal control, agricultural regulations, building inspections, ambulance services, and education departments in charge of maintaining statewide standards. In addition, the county serves as the local government for all unincorporated areas. Each county is governed by an elected board of supervisors.
City and town governments
Incorporated cities and towns in California are either charter or general-law municipalities. General-law municipalities owe their existence to state law and are consequently governed by it; charter municipalities are governed by their own city or town charters. Municipalities incorporated in the 19th century tend to be charter municipalities. All ten of the state's most populous cities are charter cities. Most small cities have a council–manager form of government, where the elected city council appoints a city manager to supervise the operations of the city. Some larger cities have a directly elected mayor who oversees the city government. In many council-manager cities, the city council selects one of its members as a mayor, sometimes rotating through the council membership—but this type of mayoral position is primarily ceremonial. The Government of San Francisco is the only consolidated city-county in California, where both the city and county governments have been merged into one unified jurisdiction.
School districts and special districts
About 1,102 school districts, independent of cities and counties, handle California's public education. California school districts may be organized as elementary districts, high school districts, unified school districts combining elementary and high school grades, or community college districts.
There are about 3,400 special districts in California. A special district, defined by California Government Code § 16271(d) as "any agency of the state for the local performance of governmental or proprietary functions within limited boundaries", provides a limited range of services within a defined geographic area. The geographic area of a special district can spread across multiple cities or counties, or could consist of only a portion of one. Most of California's special districts are single-purpose districts, and provide one service.
Federal representation
The state of California sends 52 members to the House of Representatives, the nation's largest congressional state delegation. Consequently, California also has the largest number of electoral votes in national presidential elections, with 54. Kevin McCarthy, the representative of California's 20th district, is a former speaker of the House of Representatives.
California is represented in the U.S. Senate by Alex Padilla, a native and former secretary of state of California; its class 1 Senate seat is currently vacant following the death of Dianne Feinstein. Former U.S. senator Kamala Harris, a California native, former district attorney of San Francisco, and former attorney general of California, resigned on January 18, 2021, to assume her role as the current Vice President of the United States. In the 1992 U.S. Senate election, California became the first state to elect a Senate delegation entirely composed of women, due to the victories of Feinstein and Barbara Boxer. To fill the vacancy created by Harris's election as vice president, Governor Newsom appointed then-Secretary of State Alex Padilla to serve the remainder of her term, which ended in 2022; Padilla vowed to run for the full term in that election cycle. Padilla was sworn in on January 20, 2021, the same day as the inauguration of Joe Biden and Harris.
Armed forces
In California, the U.S. Department of Defense had a total of 117,806 active duty servicemembers, of whom 88,370 were Sailors or Marines, 18,339 were Airmen, and 11,097 were Soldiers, along with 61,365 Department of Defense civilian employees. Additionally, there were a total of 57,792 Reservists and Guardsmen in California.
In 2010, Los Angeles County was the largest origin of military recruits in the United States by county, with 1,437 individuals enlisting in the military. However, Californians were relatively underrepresented in the military in proportion to the state's population.
In 2000, California had 2,569,340 veterans of United States military service: 504,010 served in World War II, 301,034 in the Korean War, 754,682 during the Vietnam War, and 278,003 during 1990–2000 (including the Persian Gulf War). A more recent count found 1,942,775 veterans living in California, of whom 1,457,875 served during a period of armed conflict and just over four thousand served before World War II (the largest population of this group of any state).
California's military forces consist of the Army and Air National Guard, the naval and state military reserve (militia), and the California Cadet Corps.
On August 5, 1950, a nuclear-capable United States Air Force Boeing B-29 Superfortress bomber carrying a nuclear bomb crashed shortly after takeoff from Fairfield-Suisun Air Force Base. Brigadier General Robert F. Travis, command pilot of the bomber, was among the dead.
Ideology
California has an idiosyncratic political culture compared to the rest of the country, and is sometimes regarded as a trendsetter. In socio-cultural mores and national politics, Californians are perceived as more liberal than other Americans, especially those who live in the inland states. In the 2016 United States presidential election, California had the third highest percentage of Democratic votes behind the District of Columbia and Hawaii. In the 2020 United States presidential election, it had the 6th highest behind the District of Columbia, Vermont, Massachusetts, Maryland, and Hawaii. According to the Cook Political Report, California contains five of the 15 most Democratic congressional districts in the United States.
Among the political idiosyncrasies, California was the second state to recall its state governor (the first being North Dakota in 1921), the second state to legalize abortion, and the only state to ban marriage for gay couples twice by popular vote (including Proposition 8 in 2008). Voters also passed Proposition 71 in 2004 to fund stem cell research, making California the second state to legalize stem cell research after New Jersey, and Proposition 14 in 2010 to completely change the state's primary election process. California has also experienced disputes over water rights, and a tax revolt culminating with the passage of Proposition 13 in 1978, which limited state property taxes. California voters have rejected affirmative action on multiple occasions, most recently in November 2020.
The state's trend towards the Democratic Party and away from the Republican Party can be seen in state elections. From 1899 to 1939, California had Republican governors. Since 1990, California has generally elected Democratic candidates to federal, state, and local offices, including current governor Gavin Newsom. The state has occasionally elected Republican governors, though many of them, such as Arnold Schwarzenegger, tend to be considered moderate Republicans and more centrist than the national party.
Several political movements have advocated for California independence. The California National Party and the California Freedom Coalition both advocate for California independence along the lines of progressivism and civic nationalism. The Yes California movement attempted to organize an independence referendum via ballot initiative for 2019, which was then postponed.
The Democrats also now hold a supermajority in both houses of the state legislature. There are 62 Democrats and 18 Republicans in the Assembly, and 32 Democrats and 8 Republicans in the Senate.
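As a quick arithmetic illustration of what "supermajority" means here, the short Python sketch below checks the quoted seat counts against a two-thirds threshold; the chamber sizes (80 Assembly seats, 40 Senate seats) are assumptions supplied for the example rather than figures stated above.

import math

# Seat counts quoted above; total chamber sizes (80 and 40) are assumed for illustration.
chambers = {
    "Assembly": {"democrats": 62, "total_seats": 80},
    "Senate": {"democrats": 32, "total_seats": 40},
}

for name, c in chambers.items():
    threshold = math.ceil(c["total_seats"] * 2 / 3)  # seats needed for a two-thirds vote
    print(f"{name}: {c['democrats']}/{c['total_seats']} seats, "
          f"two-thirds threshold {threshold}, supermajority: {c['democrats'] >= threshold}")

Under those assumptions, the thresholds are 54 of 80 and 27 of 40, and both reported Democratic caucuses clear them.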
The trend towards the Democratic Party is most obvious in presidential elections. From 1952 through 1988, California was a Republican-leaning state, with the party carrying the state's electoral votes in nine of ten elections, with 1964 as the exception. Southern California Republicans Richard Nixon and Ronald Reagan were both elected twice as the 37th and 40th U.S. Presidents, respectively. However, Democrats have won all of California's electoral votes for the last eight elections, starting in 1992.
In the United States House, the Democrats held a 34–19 edge in the California delegation of the 110th United States Congress in 2007. As a result of gerrymandering, the districts in California were usually dominated by one or the other party, and few districts were considered competitive. In 2008, Californians passed Proposition 11, creating a 14-member independent citizens commission to redraw state legislative districts, and Proposition 20 in 2010 extended the commission's authority to congressional districts. After the 2012 elections, when the new system took effect, Democrats gained four seats and held a 38–15 majority in the delegation. Following the 2018 midterm House elections, Democrats won 46 of the 53 congressional seats in California, leaving Republicans with seven.
In general, Democratic strength is centered in the populous coastal regions of the Los Angeles metropolitan area and the San Francisco Bay Area. Republican strength is still greatest in eastern parts of the state. Orange County had remained largely Republican until the 2016 and 2018 elections, in which a majority of the county's votes were cast for Democratic candidates. One study ranked Berkeley, Oakland, Inglewood and San Francisco in the top 20 most liberal American cities; and Bakersfield, Orange, Escondido, Garden Grove, and Simi Valley in the top 20 most conservative cities.
In October 2022, out of the 26,876,800 people eligible to vote, 21,940,274 people were registered to vote. Of the people registered, the three largest registered groups were Democrats (10,283,258), Republicans (5,232,094), and No Party Preference (4,943,696). Los Angeles County had the largest number of registered Democrats (2,996,565) and Republicans (958,851) of any county in the state.
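For illustration, the registration totals quoted above translate into approximate shares of the registered electorate; the minimal Python sketch below uses only the figures in this section.

# Party shares among registered California voters, October 2022, from the totals above.
registered_total = 21_940_274
groups = {
    "Democratic": 10_283_258,
    "Republican": 5_232_094,
    "No Party Preference": 4_943_696,
}

for group, count in groups.items():
    print(f"{group}: {100 * count / registered_total:.1f}% of registered voters")

These work out to roughly 47 percent Democratic, 24 percent Republican, and 23 percent No Party Preference, with the remainder registered with smaller parties or in other categories.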
California retains the death penalty, though it has not been used since 2006. There is currently a gubernatorial hold on executions. Authorized methods of execution include the gas chamber.
Twinned regions
California has region twinning arrangements with:
Catalonia, autonomous community of Spain
Alberta, province of Canada
Jeju Province of South Korea
Guangdong, province of China
See also
Index of California-related articles
Outline of California
List of people from California
Further reading
Matthews, Glenna. The Golden State in the Civil War: Thomas Starr King, the Republican Party, and the Birth of Modern California. New York: Cambridge University Press, 2012.
External links
State of California
California State Guide, from the Library of Congress
data.ca.gov: open data portal from California state agencies
California State Facts from USDA
California Drought: Farm and Food Impacts from USDA, Economic Research Service
1973 documentary featuring aerial views of the California coastline from Mt. Shasta to Los Angeles
Early City Views (Los Angeles)
Columbia River

The Columbia River (Sahaptin: Nch’i-Wàna or Nchi wana) is the largest river in the Pacific Northwest region of North America. The river forms in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the U.S. state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven states of the United States and one Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia has the 36th greatest discharge of any river in the world.
The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples.
The first documented European discovery of the Columbia River occurred when Bruno de Heceta sighted the river's mouth in 1775. On May 11, 1792, a private American ship, Columbia Rediviva, under Captain Robert Gray from Boston became the first non-indigenous vessel to enter the river. Later in 1792, William Robert Broughton of the British Royal Navy commanding HMS Chatham as part of the Vancouver Expedition, navigated past the Oregon Coast Range and 100 miles upriver to what is now Vancouver, Washington. In the following decades, fur-trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic, but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links.
Since the late 19th century, public and private sectors have extensively developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total U.S. hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the United States. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration.
Course
The Columbia begins its journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake above sea level and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC. For its first , the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in BC as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about north of the United States–Canada border.
The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C-shaped segment of the river is also known as the "Big Bend". During the Missoula Floods 10,000 to 15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake.
The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. Entirely within the reservation is Hanford Reach, the only U.S. stretch of the river that is completely free-flowing, unimpeded by dams, and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri-Cities population center. The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final of its journey.
The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. No other rivers except for the Klamath and Pit River completely breach the Cascades; the other rivers that flow through the range also originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream, the Pit cuts a canyon through the southern reaches of the Cascades. In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link. The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world. Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the "Graveyard of Ships".
The Columbia drains an area of about . Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, and ultimately all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for and drains , the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins.
The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water.
Discharge
With an average flow at the mouth of about , the Columbia is the largest river by discharge flowing into the Pacific from the Americas and is the fourth-largest by volume in the U.S. The average flow where the river crosses the international border between Canada and the United States is from a drainage basin of . This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was on April 16, 1968, and was caused by the initial closure of the John Day Dam, upstream. The Dalles is about from the mouth; the river at this point drains about or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, its head of tide.
Geology
When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene epochs, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. Carrying sediments from erosion and erupting volcanoes, it built a thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift 5 to 4 million years ago. Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge.
The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago.
The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at at Wallula Gap and over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley "plow fields of fertile Montana soil and clays from Washington's Palouse".
Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. The most recent and significant is known as the Bonneville Slide, which formed a massive earthen dam, filling of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide.
In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by .
Indigenous peoples
Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington. The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains.
Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa.
The Chinook tribe, which is not federally recognized and lives near the lower Columbia River, has its own name for the river in the Upper Chinook (Kiksht) language, and it is Nch’i-Wàna or Nchi wana to the Sahaptin (Ichishkíin Sɨ́nwit)-speaking peoples of its middle course in present-day Washington. The Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada, also have their own name for the river. All three terms essentially mean "the big river".
Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river.
Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing.
Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1806, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries.
Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case United States v. Washington, commonly called the Boldt Decision.
Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this stretch may have at times had a population as great as 10,000. The site drew traders from as far away as the Great Plains.
The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites.
In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at . The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent.
Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior.
As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids. Still, in the 1930s, there were natives who lived along the river and fished year round, moving along with the fish's migration patterns throughout the seasons. The Yakama were slower to organize, forming a formal government in 1944. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries.
In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood.
New waves of explorers
Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans—possibly as early as 219 BCE. Historian Derek Hayes claims that "It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European." It is unknown whether they landed near the Columbia. Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain.
In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it Ensenada de Asunción (Assumption Cove). Later Spanish maps, based on his sighting, showed a river, labeled Río de San Roque (The Saint Roch River), or an entrance, called Entrada de Hezeta, named for Bruno de Hezeta, who sailed the region. Following Hezeta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth.
What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter.
On May 12, 1792, Gray returned south and crossed the Columbia Bar, becoming the first known explorer of European descent to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named Columbia Rediviva; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations.
In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States.
Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–1805), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived establishment that was occupied for less than three months.
Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, BC. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first Euro-descended person to travel the entire length of the river.
In 1825, the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains, north of California, and south of Russian-claimed Alaska. Chief Factor John McLoughlin, a physician who had been in the fur trade since 1804, was appointed superintendent of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. This part of the Oregon Trail, the treacherous stretch from The Dalles to below the Cascades, could not be traversed by horses or wagons (only watercraft, at great risk). This prompted the 1846 construction of the Barlow Road.
In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called "joint occupation" was renewed indefinitely, it seemed probable that the lower Columbia River would in time become the border between the two nations. For years the Hudson's Bay Company successfully maintained control of the Columbia River and American attempts to gain a foothold were fended off. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American presence in the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of "joint occupation" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. As part of the treaty, the British retained all areas north of the line while the United States acquired the south. The Columbia River became much of the border between the U.S. territories of Oregon and Washington. Oregon became a U.S. state in 1859, while Washington later entered into the Union in 1889.
By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region.
Navigation
American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean.
The use of steamboats along the river, beginning with the British Beaver in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to the Dalles-Celilo Falls; from Celilo to Priests Rapids; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river.
Opening the passage to Lewiston
As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho.
The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. Pacific Graveyard, a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties.
In 1891, the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from to . The Columbian called for the channel to be deepened to as early as 1905, but that depth was not attained until 1976.
Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily between reservoirs. A navigation channel reaching Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain.
The 1980 eruption of Mount St. Helens caused mudslides in the area, which reduced the Columbia's depth by for a stretch, disrupting Portland's economy.
Deeper shipping channel
Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns.
In 1999, Congress authorized deepening the channel between Portland and Astoria from , which would make it possible for large container and grain ships to reach Portland and Vancouver. The project met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project included measures to mitigate environmental damage; for instance, the US Army Corps of Engineers was required to restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled of hydraulic oil into the Columbia, drawing further criticism from environmental organizations.
Work on the project began in 2005 and concluded in 2010. The project's cost was estimated at $150 million. The federal government paid 65 percent, Oregon and Washington paid $27 million each, and six local ports also contributed to the cost.
Dams
In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war.
River development occurred within the structure of the 1909 International Boundary Waters Treaty between the United States and Canada. The United States Congress passed the Rivers and Harbors Act of 1925, which directed the U.S. Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports.
In the late 1920s, political forces in the Northwestern United States generally favored the private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later his law partner Julius Meier, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others.
In 1948, floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, BC. The flooding prompted the U.S. Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids.
In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in United States downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973.
Today the main stem of the Columbia River has fourteen dams, of which three are in Canada and eleven in the United States. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage, and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power.
The larger U.S. dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern.
The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. Previously active fishing sites, such as Celilo Falls in the eastern Columbia River Gorge, have exhibited a sharp decline in fishing along the Columbia in the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system.
Irrigation
The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. By 1935, the intended height of the dam was increased from a range between to , a height that would extend the lake impounded by the dam to the Canada–United States border; the project had grown from a local New Deal relief measure to a major national project.
The project's initial purpose was irrigation, but the onset of World War II created a high electricity demand, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes.
Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid‑19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers.
Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally.
Hydroelectricity
The Columbia's heavy flow and large elevation drop over a relatively short distance give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than . The Columbia alone possesses one-third of the United States's hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation.
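The relationship behind these figures is the standard hydropower formula, P = η ρ g Q H (efficiency times water density times gravitational acceleration times flow times head). The Python sketch below works one hypothetical example; the flow, head, and efficiency values are illustrative placeholders, not measurements of any actual Columbia River dam.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3s, head_m, efficiency=0.9):
    """Electrical power in megawatts from flow (m^3/s), head (m), and turbine efficiency."""
    return efficiency * RHO_WATER * G * flow_m3s * head_m / 1e6

# Hypothetical site: 4,000 m^3/s of flow falling through a 100 m head.
print(f"{hydro_power_mw(4000, 100):,.0f} MW")  # roughly 3,500 MW

A single site with numbers like these would produce a few gigawatts, which is why a basin combining high discharge with a large total elevation drop can support tens of gigawatts of installed capacity.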
The largest of the 150 hydroelectric projects, the Grand Coulee Dam and Chief Joseph Dam, are also the largest in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world.
Inexpensive hydropower supported the location of a large aluminum industry in the region because its reduction from bauxite requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with a drought that reduced the generation capacity of the river, damaged the industry, and by 2001 Columbia River aluminum producers had idled 80 percent of their production capacity. By 2003, the entire United States produced only 15 percent of the world's aluminum, and many smelters along the Columbia had gone dormant or out of business.
Power remains relatively inexpensive along the Columbia, and since the mid-2000s several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. Increasingly, hydro-power operations are required to meet standards under the U.S. Endangered Species Act and other agreements to manage operations to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia.
In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region Guthrie wrote 26 songs, which have become an important part of the cultural history of the region.
Ecology and environment
Fish migration
The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and freshwater tributaries of the river. Sockeye salmon, Coho and Chinook ("king") salmon, and steelhead, all of the genus Oncorhynchus, are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives.
Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon, with an average weight of were caught (in a recent season) and mainly canned for export to England. A can weighing could be sold for 8d or 9d. By 1908, there was widespread concern about the decline of salmon and sturgeon. In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans, and refined by later settlers) altogether.
Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from a wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeons have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam.
Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the squawfish) thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a "bounty" program was established to reward anglers for catching pikeminnow.
In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban.
Also in 1994, United States Secretary of the Interior Bruce Babbitt proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the destruction of the Marmot Dam on the Sandy River was the first dam removal in the system. Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River, and the Milltown Dam on the Clark Fork in Montana.
Pollution
In southeastern Washington, a stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts.
The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford site is the focus of one of the world's largest environmental cleanup efforts, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m3) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. About 1 million US gallons (3,785 m3) of highly radioactive waste is traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule.
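As a unit check on the contamination figures above, the short Python sketch below converts US gallons to cubic meters; the only assumption is the standard conversion factor of roughly 3.785 liters per US gallon.

LITERS_PER_US_GALLON = 3.78541

def us_gallons_to_m3(gallons):
    return gallons * LITERS_PER_US_GALLON / 1000.0  # 1 m^3 = 1,000 liters

print(f"{us_gallons_to_m3(270e9):.2e} m^3")  # about 1.02e9, i.e. roughly 1 billion m^3
print(f"{us_gallons_to_m3(1e6):,.0f} m^3")   # about 3,785 m^3, matching the figure quoted above

Both results agree with the parenthetical conversions given in the paragraph above.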
In addition to concerns about nuclear waste, numerous other pollutants are found in the river. These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCB).
Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to accomplish critical ecosystem restoration efforts. Several cleanup efforts are underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt.
Timber industry activity further contaminates river water, for example in the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a piece of federal legislation from 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia.
On July 1, 2003, Christopher Swain became the first person to swim the Columbia River's entire length, to raise public awareness about the river's environmental health.
Nutrient cycle
Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.
Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific, except for nitrogen, which is delivered into the estuary by ocean upwelling sources.
Watershed
Most of the Columbia's drainage basin (which is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term Columbia Basin is used to refer not only to the entire drainage basin but also to subsets of the river's watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as well as parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian National Parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed. Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx.
The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions: Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake Rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism.
In 2016, over eight million people lived within the Columbia's drainage basin. Of this total about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030. The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed. The Canadian part of the Okanagan subbasin is also growing rapidly.
Climate varies greatly within the watershed. Elevation ranges from sea level at the river mouth to high mountain peaks, and temperatures vary with elevation; the highest peak in the watershed is Mount Rainier, at 14,411 feet (4,392 m). High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with continental climate patterns, precipitation maximums occur in early summer. Annual precipitation is heaviest in the Cascades and lightest in the interior, where much of the watershed is semiarid.
Several major North American drainage basins and many minor ones border the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico. To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns, to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico.
Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separates the Columbia watershed from the Mackenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River.
To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in the Wind River Range of Wyoming. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed. To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington.
Major tributaries
The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four has a large average discharge and drains an extensive basin.
The Snake is by far the largest tributary; its watershed is larger than the state of Idaho. Its discharge at the rivers' confluence is roughly a third of the Columbia's, but compared to the Columbia upstream of the confluence, the Snake is longer (113%) and has a larger drainage basin (104%).
The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also comparable in size to the Columbia at their confluence: the Pend Oreille-Clark-Flathead is nearly as long as the Columbia above the two rivers' confluence (about 86%), its basin is about three-fourths as large (76%), and its discharge is a little over a third as great (37%).
{| class="wikitable collapsible sortable"
|-
!Tributary
!colspan=2|Average discharge
!colspan=2|Drainage basin
|-
!
!ft³/s
!m³/s
!mi²
!km²
|-
|Snake River
|
|<ref> Sum of Subregion 1704, Upper Snake, Subregion 1705, Middle Snake, and Subregion 1706, Lower Snake.</ref>
|-
|Willamette River
|
|
|-
|Kootenay River (Kootenai)
|
|
|-
|Pend Oreille River
|
|
|-
|Cowlitz River
|
|
|-
|Spokane River
|
|
|-
|Lewis River
|
|
|-
|Deschutes River
|
|
|-
|Yakima River
|
|
|-
|Wenatchee River
|
|
|-
|Okanogan River
|
|
|-
|Kettle River
|
|
|-
|Sandy River
|
|
|-
|John Day River
|
|
|}
See also
Columbia Park (Kennewick, Washington), a recreational area
Columbia River Estuary
Columbia River Maritime Museum, Astoria, Oregon
Empire Builder, an Amtrak rail line that follows the river from Portland to Pasco, Washington
Estella Mine, an abandoned mine with a view of the Columbia River Valley
Historic Columbia River Highway, a scenic highway on the Oregon side
List of crossings of the Columbia River
List of dams in the Columbia River watershed
List of longest rivers of Canada
List of longest rivers of the United States (by main stem)
List of longest streams of Oregon
Lists of ecoregions in North America and Oregon
Lists of rivers of British Columbia, Oregon, and Washington
Okanagan Trail, a historic trail that followed the Columbia and Okanagan rivers
Robert Gray's Columbia River expedition
Notes
References
Sources
Further reading
White, Richard. The Organic Machine: The Remaking of the Columbia River (Hill and Wang, 1996)
External links
BC Hydro
Bibliography on Water Resources and International Law Peace Palace Library
Columbia River US Environmental Protection Agency
Columbia River Gorge National Scenic Area from the US Forest Service
Columbia River Inter-Tribal Fish Commission
University of Washington Libraries Digital Collections – Tollman and Canaris Photographs Photographs document the salmon fishing industry on the southern Washington coast and in the lower Columbia River around the year 1897 and offer insights about commercial salmon fishing and the techniques used at the beginning of the 20th century.
Virtual World: Columbia River, National Geographic, via Internet Archive
Borders of Oregon
Borders of Washington (state)
Drainage basins of the Pacific Ocean
International rivers of North America
Rivers of Benton County, Washington
Rivers of British Columbia
Rivers of Chelan County, Washington
Rivers of Clark County, Washington
Rivers of Clatsop County, Oregon
Rivers of Cowlitz County, Washington
Rivers of Franklin County, Washington
Rivers of Hood River County, Oregon
Rivers of Multnomah County, Oregon
Rivers of Oregon
Rivers of Wasco County, Oregon
Rivers of Washington (state)
Rivers of Douglas County, Washington
Rivers with fish ladders |
5412 | https://en.wikipedia.org/wiki/Contra%20dance | Contra dance | Contra dance (also contradance, contra-dance and other variant spellings) is a form of folk dancing made up of long lines of couples. It has mixed origins from English country dance, Scottish country dance, and French dance styles in the 17th century. Sometimes described as New England folk dance or Appalachian folk dance, contra dances can be found around the world, but are most common in the United States (periodically held in nearly every state), Canada, and other Anglophone countries.
A contra dance event is a social dance that one can attend without a partner. The dancers form couples, and the couples form sets of two couples in long lines starting from the stage and going down the length of the dance hall. Throughout the course of a dance, couples progress up and down these lines, dancing with each other couple in the line. The dance is led by a caller who teaches the sequence of moves, called "figures," in the dance before the music starts. In a single dance, a caller may include anywhere from six to twelve figures, which are repeated as couples progress up and down the lines. Each time through the dance takes 64 beats, after which the pattern is repeated. The essence of the dance is in following the pattern with one's set and line; since there is no required footwork, many people find contra dance easier to learn than other forms of social dancing.
Almost all contra dances are danced to live music. The music played includes, but is not limited to, Irish, Scottish, old-time, bluegrass and French-Canadian folk tunes. The fiddle is considered the core instrument, though other stringed instruments can be used, such as the guitar, banjo, bass and mandolin, as well as the piano, accordion, flute, clarinet and more. Techno contra dances are done to techno music, typically accompanied by DJ lighting. Music in a dance can consist of a single tune or a medley of tunes, and key changes during the course of a dance are common.
Many callers and bands perform for local contra dances, and some are hired to play for dances around the U.S. and Canada. Many dancers travel regionally (or even nationally) to contra dance weekends and week-long contra dance camps, where they can expect to find other dedicated and skilled dancers, callers, and bands.
History
Contra dance has European origins, along with centuries of cultural influences from many different sources.
At the end of the 17th century, English country dances were taken up by French dance masters. The French called these dances contredanses (a sound-alike rendering of "country dance"), as indicated in a 1706 dance book called Recueil de Contredances. As time progressed, these dances returned to England and were spread and reinterpreted in the United States, and eventually the French form of the name came to be associated with the American folk dances, which were alternatively called "country dances" or, in some parts of New England such as New Hampshire, "contradances".
Contra dances were fashionable in the United States and were among the most popular social dances across class lines in the late 18th century, though they were usually referred to as "country dances" until the 1780s, when the term "contra dance" became more common. In the mid-19th century, group dances started to decline in popularity in favor of quadrilles, lancers, and couple dances such as the waltz and polka.
By the late 19th century, contras were mostly confined to rural settings. This began to change with the square dance revival of the 1920s, pioneered by Henry Ford, founder of the Ford Motor Company, in part as a reaction against modern jazz influences in the United States. In the 1920s, Ford asked his friend Benjamin Lovett, a dance coordinator in Massachusetts, to come to Michigan to begin a dance program. Initially, Lovett could not, as he was under contract at a local inn; consequently, Ford bought the property rights to the inn. Lovett and Ford initiated a dance program in Dearborn, Michigan, that included several folk dances, including contras. In 1926, Ford also published a book titled Good Morning: After a Sleep of Twenty-Five Years, Old-Fashioned Dancing Is Being Revived, detailing steps for some contra dances.
In the 1930s and 1940s, the popularity of jazz, swing, and big band music caused contra dance to decline in several parts of the US; the tradition carried on primarily in towns in the northeastern portions of North America, such as Ohio, the Maritime provinces of Canada, and particularly New England. Ralph Page almost single-handedly maintained the New England tradition until it was revitalized in the 1950s and 1960s, particularly by Ted Sannella and Dudley Laufman.
The New England contra dance tradition was also maintained in Vermont by the Ed Larkin Old Time Contra Dancers, formed by Edwin Loyal Larkin in 1934. The group Larkin founded is still performing, teaching the dances, and holding monthly open house dances in Tunbridge, Vermont.
By then, early dance camps, retreats, and weekends had emerged, such as Pinewoods Camp, in Plymouth, Massachusetts, which became primarily a music and dance camp in 1933, and NEFFA, the New England Folk Festival, also in Massachusetts, which began in 1944. Pittsburgh Contra Dance celebrated its 100th anniversary in 2015. These and others continue to be popular and some offer other dances and activities besides contra dancing.
In the 1970s, Sannella and other callers introduced dance moves from English Country Dance, such as heys and gypsies, to the contra dances. New dances, such as Shadrack's Delight by Tony Parkes, featured symmetrical dancing by all couples. (Previously, the actives and inactives – see Progression – had significantly different roles). Double progression dances, popularized by Herbie Gaudreau, added to the aerobic nature of the dances, and one caller, Gene Hubert, wrote a quadruple progression dance, Contra Madness. Becket formation was introduced, with partners starting the dance next to each other in the line instead of opposite each other.
The Brattleboro Dawn Dance started in 1976, and continues to run semiannually.
In the early 1980s, Tod Whittemore started the first Saturday dance in the Peterborough Town House, which remains one of the more popular regional dances. The Peterborough dance influenced Bob McQuillen, who became a notable musician in New England. As musicians and callers moved to other locations, they founded contra dances in Michigan, Washington, Oregon, California, Texas, and elsewhere.
Events
Contra dances take place in more than 200 cities and towns across the U.S., as well as in other countries.
Contra dance events are open to all, regardless of experience, unless explicitly labeled otherwise. It is common to see dancers with a wide range of ages, from children to the elderly. Most dancers are white and middle or upper-middle class. Contra dances are family-friendly, and alcohol consumption is not part of the culture. Many events offer beginner-level instructions prior to the dance. A typical evening of contra dance is three hours long, including an intermission. The event consists of a number of individual contra dances, each lasting about 15 minutes, and typically a band intermission with some waltzes, schottisches, polkas, or Swedish hambos. In some places, square dances are thrown into the mix, sometimes at the discretion of the caller. Music for the evening is typically performed by a live band, playing jigs and reels from Ireland, Scotland, Canada, or the USA. The tunes may range from traditional tunes originating a century ago to modern compositions including electric guitar, synth keyboard, and driving percussion – so long as the music fits the timing for contra dance patterns. Sometimes, a rock tune will be woven in.
Generally, a leader, known as a caller, will teach each individual dance just before the music for that dance begins. During this introductory walk-through, participants learn the dance by walking through the steps and formations, following the caller's instructions. The caller gives the instructions orally, and sometimes augments them with demonstrations of steps by experienced dancers in the group. The walk-through usually proceeds in the order of the moves as they will be done with the music; in some dances, the caller may vary the order of moves during the dance, a fact that is usually explained as part of the caller's instructions.
After the walk-through, the music begins and the dancers repeat the sequence some number of times before the dance ends, often 10 to 15 minutes, depending on the length of the contra lines. Calls are normally given at least the first few times through, and often for the last. At the end of each dance, the dancers thank their partners. The contra dance tradition in North America is to change partners for every dance, while in the United Kingdom people typically dance with the same partner the entire evening. Someone attending an evening of contra dances in North America does not need to bring a partner. In the short break between individual dances, the dancers invite each other to dance. Booking ahead, by asking a partner or partners ahead of time for each individual dance, is common at some venues but has been discouraged by some.
Most contra dances do not have an expected dress code. No special outfits are worn, but comfortable and loose-fitting clothing that does not restrict movement is usually recommended. Women usually wear skirts or dresses as they are cooler than wearing trousers; some men also dance in kilts or skirts. Low-heeled, broken-in, soft-soled, non-marking shoes, such as dance shoes, sneakers, or sandals, are recommended and, in some places, required. As dancing can be aerobic, dancers are sometimes encouraged to bring a change of clothes.
As in any social dance, cooperation is vital to contra dancing. Since over the course of any single dance, individuals interact with not just their partners but everyone else in the set, contra dancing might be considered a group activity. As will necessarily be the case when beginners are welcomed in by more practiced dancers, mistakes are made; most dancers are willing to help beginners in learning the steps. However, because the friendly, social nature of the dances can be misinterpreted or even abused, some groups have created anti-harassment policies.
Form
Formations
Contra dances are arranged in long lines of couples. A pair of lines is called a set. Sets are generally arranged so they run the length of the hall, with the top of the set being the end closest to the band and caller, and the bottom of the set being the end farthest from the caller.
Couples consist of two people, traditionally one male and one female, though same-sex pairs are increasingly common. Traditionally the dancers are referred to as the lady and gent, though various other terms have been used: some dances have used men and women, rejecting ladies and gents as elitist; others have used gender-neutral role terms including bares and bands, jets and rubies, and larks and ravens or robins. Couples interact primarily with an adjacent couple for each round of the dance. Each sub-group of two interacting couples is known to choreographers as a minor set and to dancers as a foursome or hands four. Couples in the same minor set are neighbors. Minor sets originate at the head of the set, starting with the topmost dancers as the ones (the active couple or actives); the other couple are twos (or inactives). The ones are said to be above their neighboring twos; twos are below. If there is an uneven number of couples dancing, the bottom-most couple will wait out the first time through the dance.
There are four common ways of arranging couples in the minor sets: proper, improper, Becket, and triple formations. Traditionally, most dances were in the proper formation, with all the gents in one line and all the ladies in the other. Until the end of the nineteenth century, minor sets were most commonly triples. In the twentieth century, duple-minor dances became more common, and since the mid-twentieth century improper dances, in which gents and ladies alternate on each side of the set, have become the most common formation. Triple dances have also lost popularity in modern contras, while Becket formation, in which dancers stand next to their partners, facing another couple, is a modern innovation.
Progression
A fundamental aspect of contra dancing is that, during a single dance, each dancer has one partner, but interacts with many different people. During a single dance, the same pattern is repeated over and over (one time through lasts roughly 30 seconds), but each time, a pair of dancers will dance with new neighbors (moving on to new neighbors is called progressing). Dancers do not need to memorize these patterns in advance, since the dance leader, or caller, will generally explain the pattern for this dance before the music begins, and give people a chance to walk through the pattern so dancers can learn the moves. The walk through also helps dancers understand how the dance pattern leads them toward new people each time. Once the music starts, the caller continues to describe each move until the dancers are comfortable with that dance pattern. The dance progression is built into the contra dance pattern as continuous motion with the music, and does not interrupt the dancing. While all dancers in the room are part of the same dance pattern, half of the couples in the room are moving toward the band at any moment and half are moving away, so when everybody steps forward, they find new people to dance with. Once a couple reaches the end of the set, they switch direction, dancing back along the set the other way.
A single dance runs around ten minutes, long enough to progress at least 15–20 times. If the sets are short to medium length, the caller often tries to run the dance until each couple has danced with every other couple both as a one and a two and returned to where they started. A typical room of contra dancers may include about 120 people, but this varies from 30 people in smaller towns to over 300 people in cities like Washington, D.C., Los Angeles, or New York. With longer sets (more than 60 people), one dance typically does not allow dancing with every dancer in the group.
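The progression can be illustrated with a short sketch in Python (an illustrative simplification, not drawn from any dance source; the couple names and the simple turn-around rule at the ends of the set are assumptions of the example): couples in the "one" role move down the set, couples in the "two" role move up, and a couple stranded at an end waits out a turn before heading back the other way in the opposite role.

# Illustrative sketch only: how couples progress along a contra dance set.
# Couples are listed from the top of the set (nearest the band) to the bottom;
# role 1 couples move down the set, role 2 couples move up.
def one_time_through(line):
    new = list(line)
    i = 0
    while i + 1 < len(new):
        if new[i][1] == 1 and new[i + 1][1] == 2:
            # This minor set dances; the ones end up below the twos.
            new[i], new[i + 1] = new[i + 1], new[i]
            i += 2
        else:
            i += 1  # a couple with no one to dance with waits out
    # Couples stranded at an end turn around for the next time through.
    new[0] = (new[0][0], 1)    # the top couple will head back down as a one
    new[-1] = (new[-1][0], 2)  # the bottom couple will head back up as a two
    return new

line = [("A", 1), ("B", 2), ("C", 1), ("D", 2)]
for _ in range(4):
    line = one_time_through(line)
    print(line)  # each successive arrangement of the set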
Choreography
Contra dance choreography specifies the dance formation, the figures, and the sequence of those figures in a dance. Contra dance figures (with a few exceptions) do not have defined footwork; within the limits of the music and the comfort of their fellow dancers, individuals move according to their own taste.
Most contra dances consist of a sequence of about 6 to 12 individual figures, prompted by the caller in time to the music as the figures are danced. As the sequence repeats, the caller may cut down his or her prompting, and eventually drop out, leaving the dancers to each other and the music.
A figure is a pattern of movement that typically takes eight counts, although figures with four or 16 counts are also common. Each dance is a collection of figures assembled to allow the dancers to progress along the set (see "Progression", above).
A count (as used above) is one half of a musical measure, such as one quarter note in 2/4 time or three eighth notes in 6/8 time. A count may also be called a step, as contra dance is a walking form, and each count of a dance typically matches a single physical step in a figure.
Typical contra dance choreography comprises four parts, each 16 counts (8 measures) long. The parts are called A1, A2, B1 and B2. This nomenclature stems from the music: most contra dance tunes (as written) have two parts (A and B), each 8 measures long, and each fitting one part of the dance. The A and B parts are each played twice in a row, hence A1, A2, B1, B2. While the same music is generally played in, for example, parts A1 and A2, distinct choreography is followed in those parts. Thus, a contra dance is typically 64 counts and goes with a 32-measure tune. Tunes of this form are called "square"; tunes that deviate from this form are called "crooked".
Sample contra dances:
Traditional – the actives do most of the movement
Chorus jig (proper duple minor)
A1 (16) Actives down the outside and back. (The inactives stand still or substitute a swing).
A2 (16) Actives down the center, turn individually, come back, and cast off. (The inactives stand still until near the end of the phrase, take a step up the hall, and then participate in the cast).
B1 (16) Actives turn contra corners. (The inactives participate in half the turns.)
B2 (16) Actives meet in the middle for a balance and swing, end swing facing up. (The inactives stand still.)
Note: inactives will often clog in place or otherwise participate in the dance, even though the figures do not call for them to move.
Modern – the dance is symmetrical for actives and inactives
"Hay in the Barn" by Chart Guthrie (improper duple minor)
A1 (16) Neighbors balance and swing
A2 (8) Ladies chain across, (8) half hey, ladies pass right shoulders to start.
B1 (16) Partners balance and swing.
B2 (8) Ladies chain across, (8) half hey, ladies pass right shoulders to start.
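To make the timing concrete, the modern dance listed above can be encoded as a small data structure and checked against the convention described under Choreography: four parts of 16 counts each, 64 counts in all, fitting a 32-measure tune. The sketch below is illustrative only; the figure names are taken from the listing above, while the dictionary layout and the check itself are assumptions of the example rather than anything prescribed by contra dance practice.

# Illustrative sketch: encode "Hay in the Barn" as data and verify that each
# part (A1, A2, B1, B2) totals 16 counts and the whole dance totals 64 counts.
hay_in_the_barn = {
    "A1": [("neighbors balance and swing", 16)],
    "A2": [("ladies chain across", 8), ("half hey, ladies pass right shoulders", 8)],
    "B1": [("partners balance and swing", 16)],
    "B2": [("ladies chain across", 8), ("half hey, ladies pass right shoulders", 8)],
}

for part, figures in hay_in_the_barn.items():
    counts = sum(c for _, c in figures)
    assert counts == 16, f"{part} has {counts} counts instead of 16"

total = sum(c for figures in hay_in_the_barn.values() for _, c in figures)
print(total, "counts =", total // 2, "measures")  # 64 counts = 32 measures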
Many modern contra dances have these characteristics:
longways for as many as will
first couples improper, or Becket formation
flowing choreography
no one stationary for more than 16 beats (e.g. first couple balance and swing, finish facing down to make lines of four)
containing at least one swing and normally both a partner swing and a neighbor swing
the vast majority of the moves from a set of well-known moves that the dancers know already
composed mostly of moves that keep all dancers connected
generally danced to 32-bar jigs or reels played at between 110 and 130 bpm
danced with a smooth walk with many spins and twirls
An event which consists primarily (or solely) of dances in this style is sometimes referred to as a "modern urban contra dance".
Music
The most common contra dance repertoire is rooted in the Anglo-Celtic tradition as it developed in North America. Irish, Scottish, French Canadian, and old-time tunes are common, and klezmer tunes have also been used. The old-time repertoire includes very few of the jigs common in the others.
Tunes used for a contra dance are nearly always "square" 64-beat tunes, in which one time through the tune is each of two 16-beat parts played twice (this is notated AABB). However, any 64-beat tune will do; for instance, three 8-beat parts could be played AABB AACC, or two 8-beat parts and one 16-beat part could be played AABB CC. Tunes not 64 beats long are called "crooked" and are almost never used for contra dancing, although a few crooked dances have been written as novelties. Contra tunes are played at a narrow range of tempos, between 108 and 132 bpm.
Fiddles are considered to be the primary melody instrument in contra dancing, though other stringed instruments can also be used, such as the mandolin or banjo, in addition to a few wind instruments, for example, the accordion. The piano, guitar, and double bass are frequently found in the rhythm section of a contra dance band. Occasionally, percussion instruments are also used in contra dancing, such as the Irish bodhrán or, less frequently, the dumbek or washboard. Some bands have incorporated the Québécois practice of tapping feet on a board while playing an instrument (often the fiddle).
Until the 1970s it was traditional to play a single tune for the duration of a contra dance (about 5 to 10 minutes). Since then, contra dance musicians have typically played tunes in sets of two or three related (and sometimes contrasting) tunes, though single-tune dances are again becoming popular with some northeastern bands. In the Celtic repertoires it is common to change keys with each tune. A set might start with a tune in G, switch to a tune in D, and end with a tune in Bm. Here, D is related to G as its dominant (5th), while D and Bm share a key signature of two sharps. In the old-time tradition the musicians will either play the same tune for the whole dance, or switch to tunes in the same key. This is because the tunings of the five-string banjo are key-specific. An old-time band might play a set of tunes in D, then use the time between dances to retune for a set of tunes in A. (Fiddlers also may take this opportunity to retune; tune- or key-specific fiddle tunings are uncommon in American Anglo-Celtic traditions other than old-time.)
In the Celtic repertoires it is most common for bands to play sets of reels and sets of jigs. However, since the underlying beat structure of jigs and reels is the same (two "counts" per bar) bands will occasionally mix jigs and reels in a set.
Some of the most popular contra dance bands in recent years are Great Bear, Perpetual E-Motion, Buddy System, Crowfoot, Elixir, the Mean Lids, Nor'easter, Nova, Pete's Posse, the Stringrays, the Syncopaths, and Wild Asparagus.
Techno contras
In recent years, younger contra dancers have begun establishing "crossover contra" or "techno contra" – contra dancing to techno, hip-hop, and other modern forms of music. While challenging for DJs and callers, the fusion of contra patterns with moves from hip-hop, tango, and other forms of dance has made this form of contra dance a rising trend since 2008. Techno differs from other contra dancing in that it is usually done to recorded music, although there are some bands that play live for techno dances. Techno has become especially prevalent in Asheville, North Carolina, but regular techno contra dance series are spreading up the East Coast to locales such as Charlottesville, Virginia; Washington, D.C.; Amherst, Massachusetts; Greenfield, Massachusetts; and various North Carolina dance communities, with one-time or annual events cropping up in locations farther west, including California, Portland, Oregon, and Washington state. They also sometimes appear as late night events during contra dance weekends. In response to the demand for techno contra, a number of contra dance callers have developed repertoires of recorded songs to play that go well with particular contra dances; these callers are known as DJs. A kind of techno/traditional contra fusion has arisen, with at least one band, Buddy System, playing live music melded with synth sounds for techno contra dances.
See also
Ceili dance
Country Dance and Song Society
Dutch crossing
International folk dance
Quadrille
Citations
General and cited references
See chapter VI, "Frolics for Fun: Dances, Weddings and Dinner Parties," pages 109–124.
(Reprint: first published in 1956 by American Squares as a part of the American Squares Dance Series)
See chapter entitled "Country Dancing," pages 57–120. (The first edition was published in 1939.)
External links
Contra dance associations
Country Dance and Song Society (CDSS) preserves a variety of Anglo-American folk traditions in North America, including folk music, folk song, English country dance, contra dance and morris dance.
Anglo-American Dance Service Based in Belgium, promoting contra dance and English dance in Western Europe.
Descriptions & definitions
Gary Shapiro's What Is Contra Dance?
Hamilton Contra Dances A Contra Dance Primer
Hamilton Contra Dances Contraculture: An introduction to contradancing
Sharon Barrett Kennedy's "Now, What in the World is Contra Dancing?"
Different traditions and cultures in contra dance
Colin Hume's Advice to Americans in England
Research resources
University of New Hampshire Special Collections: New Hampshire Library of Traditional Music and Dance
Finding contra dances
CDSS Dance Map – interactive, crowd sourced map of contra and folk dances around the world
Contra Dance Links – comprehensive, up-to-date lists of local dances, weekend dances, musicians, callers, etc.
The Dance Gypsy – locations of contra dances, and many other folk dances, around the world
Try Contra – Find contra dances using ZIP Code search.
National Contra Grid – Look up dances by day-of-week & City.
ContraDance.org – Description, Links, videos, and local schedule.
In the United Kingdom
UK Contra Clubs
Are You Dancing – calendar of social dance events in the UK, including contras
English Folk Dance and Song Society dance calendar – calendar of folk dance events in the UK, including contras
In France
Paris Contra Dance
Video
Contra dance in Oswego, New York, with music by the Great Bear Trio. 2013.
Two American country dance films on DVD: "Country Corners" (1976), and "Full of Life A-Dancin'" (1978).
Contra dance in Tacoma, Washington, with music by Crowfoot. 2009.
Welcome to the Contra Dance – dancers discuss their experiences contra dancing, set over photographs of contras
The New Contra Dance Commercial (2 minute look at contra in a few dance halls, see playlist)
Why We Contra Dance (dancers discuss why they enjoy contra dance, with video of dancing)
Dancing Community (dancers from Louisville talk about their contra dancing experiences, with video of dancing)
Contra Dancing and New Dancers (new contra dancers in Atlanta, Georgia, discuss their experience)
A History of Contra (documentary of contra dancing, spanning 150+ years of dance culture)
Contra dance in Chattanooga, Tennessee with music by Buddy System and calling by Seth Tepfer, 2019
The Contra Dance (Doug Plummer's 3 minute slide + video set, with Ed Howe's fiddle music from May 2019)
Contra dance in Glen Echo, Maryland with music by Elixir and calling by Nils Fredland, Contrastock 4, 2014.
Contra dance in Pinellas, Florida with music by ContraForce and calling by Charlotte Crittenden, 2017
Example Contra Dance Lesson (caller Cis Hinkle explains the basics, with contra vocabulary)
Contra Nils Walkthrough and Dance
Articles containing video clips
Country dance
Folk dance
Social dance |
5416 | https://en.wikipedia.org/wiki/Capitalism | Capitalism | Capitalism is an economic system based on the private ownership of the means of production and their operation for profit. Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor. In a market economy, decision-making and investments are determined by owners of wealth, property, or ability to maneuver capital or production ability in capital and financial markets—whereas prices and the distribution of goods and services are mainly determined by competition in goods and services markets.
Economists, historians, political economists, and sociologists have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free-market capitalism, anarcho-capitalism, state capitalism, and welfare capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition, and state-sanctioned social policies. The degree of competition in markets and the role of intervention and regulation, as well as the scope of state ownership, vary across different models of capitalism. The extent to which different markets are free and the rules defining private property are matters of politics and policy. Most of the existing capitalist economies are mixed economies that combine elements of free markets with state intervention and in some cases economic planning.
Capitalism in its modern form emerged from agrarianism in 16th century England and mercantilist practices by European countries in the 16th to 18th centuries. The Industrial Revolution of the 18th century established capitalism as a dominant mode of production, characterized by factory work and a complex division of labor. Through the process of globalization, capitalism spread across the world in the 19th and 20th centuries, especially before World War I and after the end of the Cold War. During the 19th century, capitalism was largely unregulated by the state, but became more regulated in the post-World War II period through Keynesianism, followed by a return of more unregulated capitalism starting in the 1980s through neoliberalism.
Market economies have existed under many forms of government and in many different times, places and cultures. Modern industrial capitalist societies developed in Western Europe in a process that led to the Industrial Revolution. Economic growth is a characteristic tendency of capitalist economies.
Etymology
The term "capitalist", meaning an owner of capital, appears earlier than the term "capitalism" and dates to the mid-17th century. "Capitalism" is derived from capital, which evolved from , a late Latin word based on , meaning "head"—which is also the origin of "chattel" and "cattle" in the sense of movable property (only much later to refer only to livestock). emerged in the 12th to 13th centuries to refer to funds, stock of merchandise, sum of money or money carrying interest. By 1283, it was used in the sense of the capital assets of a trading firm and was often interchanged with other words—wealth, money, funds, goods, assets, property and so on.
The Hollantse Mercurius uses "capitalists" in 1633 and 1654 to refer to owners of capital. In French, Étienne Clavier referred to capitalistes in 1788, four years before its first recorded English usage by Arthur Young in his work Travels in France (1792). In his Principles of Political Economy and Taxation (1817), David Ricardo referred to "the capitalist" many times. English poet Samuel Taylor Coleridge used "capitalist" in his work Table Talk (1823). Pierre-Joseph Proudhon used the term in his first work, What is Property? (1840), to refer to the owners of capital. Benjamin Disraeli used the term in his 1845 work Sybil.
The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor"). Karl Marx frequently referred to "capital" and to the "capitalist mode of production" in Das Kapital (1867). Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently. Because the word was coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism. Bernard Harcourt agrees that the term is a misnomer, adding that it misleadingly suggests that there is such a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own.
In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital". Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863.
History
Capitalism, in its modern form, can be traced to the emergence of agrarian capitalism and mercantilism in the early Renaissance, in city-states like Florence. Capital has existed incipiently on a small scale for centuries in the form of merchant, renting and lending activities and occasionally as small-scale industry with some wage labor. Simple commodity exchange and consequently simple commodity production, which is the initial basis for the growth of capital from trade, have a very long history. During the Islamic Golden Age, Arabs promulgated capitalist economic policies such as free trade and banking. Their use of Indo-Arabic numerals facilitated bookkeeping. These innovations migrated to Europe through trade partners in cities such as Venice and Pisa. Italian mathematicians traveled the Mediterranean talking to Arab traders and returned to popularize the use of Indo-Arabic numerals in Europe.
Agrarianism
The economic foundations of the feudal agricultural system began to shift substantially in 16th-century England as the manorial system broke down and land became concentrated in the hands of fewer landlords with increasingly large estates. Instead of a serf-based system of labor, workers were increasingly employed as part of a broader and expanding money-based economy. The system put pressure on both landlords and tenants to increase the productivity of agriculture to make profit; the weakened coercive power of the aristocracy to extract peasant surpluses encouraged landlords to try better methods, and the tenants also had incentive to improve their methods in order to flourish in a competitive labor market. Terms of rent for land were becoming subject to economic market forces rather than to the previous stagnant system of custom and feudal obligation.
Mercantilism
The economic doctrine prevailing from the 16th to the 18th centuries is commonly called mercantilism. This period, the Age of Discovery, was associated with the geographic exploration of foreign lands by merchant traders, especially from England and the Low Countries. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist methods. Most scholars consider the era of merchant capitalism and mercantilism as the origin of modern capitalism, although Karl Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he called the "fictitious commodities", i.e. land, labor and money. Accordingly, he argued that "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date".
England began a large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). A systematic and coherent explanation of balance of trade was made public through Thomas Mun's argument England's Treasure by Forraign Trade, or the Balance of our Forraign Trade is The Rule of Our Treasure. It was written in the 1620s and published in 1664.
European merchants, backed by state controls, subsidies and monopolies, made most of their profits by buying and selling goods. In the words of Francis Bacon, the purpose of mercantilism was "the opening and well-balancing of trade; the cherishing of manufacturers; the banishing of idleness; the repressing of waste and excess by sumptuary laws; the improvement and husbanding of the soil; the regulation of prices...".
After the period of proto-industrialization, the British East India Company and the Dutch East India Company, following massive contributions from Mughal Bengal, inaugurated an expansive era of commerce and trade. These companies were characterized by their colonial and expansionary powers given to them by nation-states. During this era, merchants, who had traded under the previous stage of mercantilism, invested capital in the East India Companies and other colonies, seeking a return on investment.
Industrial Revolution
In the mid-18th century a group of economic theorists, led by David Hume (1711–1776) and Adam Smith (1723–1790), challenged fundamental mercantilist doctrines—such as the belief that the world's wealth remained constant and that a state could only increase its wealth at the expense of another state.
During the Industrial Revolution, industrialists replaced merchants as a dominant factor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds and journeymen. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within work processes and the routine of work tasks, and eventually established the domination of the capitalist mode of production.
Industrial Britain eventually abandoned the protectionist policy formerly prescribed by mercantilism. In the 19th century, Richard Cobden (1804–1865) and John Bright (1811–1889), who based their beliefs on the Manchester School, initiated a movement to lower tariffs. In the 1840s Britain adopted a less protectionist policy, with the 1846 repeal of the Corn Laws and the 1849 repeal of the Navigation Acts. Britain reduced tariffs and quotas, in line with David Ricardo's advocacy of free trade.
Modernity
Broader processes of globalization carried capitalism across the world. By the beginning of the nineteenth century, a series of loosely connected market systems had come together as a relatively integrated global system, in turn intensifying processes of economic and other globalization. Late in the 20th century, capitalism overcame a challenge by centrally planned economies and is now the encompassing system worldwide, with the mixed economy as its dominant form in the industrialized Western world.
Industrialization allowed cheap production of household items using economies of scale, while rapid population growth created sustained demand for commodities. The imperialism of the 18th century decisively shaped globalization in this period.
After the First and Second Opium Wars (1839–1860) and the completion of the British conquest of India, vast populations of Asia became ready consumers of European exports. Also in this period, Europeans colonized areas of sub-Saharan Africa and the Pacific islands. The conquest of new parts of the globe, notably sub-Saharan Africa, by Europeans yielded valuable natural resources such as rubber, diamonds and coal and helped fuel trade and investment between the European imperial powers, their colonies and the United States.
From the 1870s to the early 1920s, the global financial system was mainly tied to the gold standard. The United Kingdom first formally adopted this standard in 1821. Soon to follow were Canada in 1853, Newfoundland in 1865, the United States and Germany (de jure) in 1873. New technologies, such as the telegraph, the transatlantic cable, the radiotelephone, the steamship and railways allowed goods and information to move around the world to an unprecedented degree.
In the United States, the term "capitalist" primarily referred to powerful businessmen until the 1920s due to widespread societal skepticism and criticism of capitalism and its most ardent supporters.
Contemporary capitalist societies developed in the West from 1950 to the present and this type of system continues to expand throughout different regions of the world—relevant examples started in the United States after the 1950s, France after the 1960s, Spain after the 1970s, Poland after 2015, and others. At this stage capitalist markets are considered developed and are characterized by developed private and public markets for equity and debt, a high standard of living (as characterized by the World Bank and the IMF), large institutional investors and a well-funded banking system. A significant managerial class has emerged and decides on a significant proportion of investments and other decisions. A different future than that envisioned by Marx has started to emerge—explored and described by Anthony Crosland in the United Kingdom in his 1956 book The Future of Socialism and by John Kenneth Galbraith in North America in his 1958 book The Affluent Society, 90 years after Marx's research on the state of capitalism in 1867.
The postwar boom ended in the late 1960s and early 1970s and the economic situation grew worse with the rise of stagflation. Monetarism, a modification of Keynesianism that is more compatible with laissez-faire analyses, gained increasing prominence in the capitalist world, especially under the years in office of Ronald Reagan in the United States (1981–1989) and of Margaret Thatcher in the United Kingdom (1979–1990). Public and political interest began shifting away from the so-called collectivist concerns of Keynes's managed capitalism to a focus on individual choice, called "remarketized capitalism".
The end of the Cold War and the dissolution of the Soviet Union allowed for capitalism to become a truly global system in a way not seen since before World War I. The development of the neoliberal global economy would have been impossible without the fall of communism.
Harvard Kennedy School economist Dani Rodrik distinguishes between three historical variants of capitalism:
Capitalism 1.0 during the 19th century entailed largely unregulated markets with a minimal role for the state (aside from national defense, and protecting property rights)
Capitalism 2.0 during the post-World War II years entailed Keynesianism, a substantial role for the state in regulating markets, and strong welfare states
Capitalism 2.1 entailed a combination of unregulated markets, globalization, and various national obligations by states
Relationship to democracy
The relationship between democracy and capitalism is a contentious area in theory and in popular political movements. The extension of adult-male suffrage in 19th-century Britain occurred along with the development of industrial capitalism, and representative democracy became widespread at the same time as capitalism, leading capitalists to posit a causal or mutual relationship between them. However, according to some authors, in the 20th century capitalism also accompanied a variety of political formations quite distinct from liberal democracies, including fascist regimes, absolute monarchies and single-party states. Democratic peace theory asserts that democracies seldom fight other democracies, but critics of that theory suggest that this may be because of political similarity or stability rather than because they are "democratic" or "capitalist". Moderate critics argue that though economic growth under capitalism has led to democracy in the past, it may not do so in the future, as authoritarian regimes have been able to manage economic growth using some of capitalism's competitive principles without making concessions to greater political freedom.
Political scientists Torben Iversen and David Soskice see democracy and capitalism as mutually supportive. Robert Dahl argued in On Democracy that capitalism was beneficial for democracy because economic growth and a large middle class were good for democracy. He also argued that a market economy provided a substitute for government control of the economy, which reduces the risks of tyranny and authoritarianism.
In his book The Road to Serfdom (1944), Friedrich Hayek (1899–1992) asserted that the free-market understanding of economic freedom as present in capitalism is a requisite of political freedom. He argued that the market mechanism is the only way of deciding what to produce and how to distribute the items without using coercion. Milton Friedman and Ronald Reagan also promoted this view. Friedman claimed that centralized economic operations are always accompanied by political repression. In his view, transactions in a market economy are voluntary and that the wide diversity that voluntary activity permits is a fundamental threat to repressive political leaders and greatly diminishes their power to coerce. Some of Friedman's views were shared by John Maynard Keynes, who believed that capitalism was vital for freedom to survive and thrive. Freedom House, an American think-tank that conducts international research on, and advocates for, democracy, political freedom and human rights, has argued that "there is a high and statistically significant correlation between the level of political freedom as measured by Freedom House and economic freedom as measured by the Wall Street Journal/Heritage Foundation survey".
In Capital in the Twenty-First Century (2013), Thomas Piketty of the Paris School of Economics asserted that inequality is the inevitable consequence of economic growth in a capitalist economy and the resulting concentration of wealth can destabilize democratic societies and undermine the ideals of social justice upon which they are built.
States with capitalistic economic systems have thrived under political regimes deemed to be authoritarian or oppressive. Singapore has a successful open market economy as a result of its competitive, business-friendly climate and robust rule of law. Nonetheless, it often comes under fire for its style of government which, though democratic and consistently one of the least corrupt, operates largely under one-party rule. Furthermore, it does not vigorously defend freedom of expression, as evidenced by its government-regulated press and its penchant for upholding laws protecting ethnic and religious harmony, judicial dignity and personal reputation. The private (capitalist) sector in the People's Republic of China has grown exponentially and thrived since its inception, despite the country's authoritarian government. Augusto Pinochet's rule in Chile led to economic growth and high levels of inequality by using authoritarian means to create a safe environment for investment and capitalism. Similarly, Suharto's authoritarian reign and extirpation of the Communist Party of Indonesia allowed for the expansion of capitalism in Indonesia.
The term "capitalism" in its modern sense is often attributed to Karl Marx. In his Das Kapital, Marx analyzed the "capitalist mode of production" using a method of understanding today known as Marxism. However, Marx himself rarely used the term "capitalism" while it was used twice in the more political interpretations of his work, primarily authored by his collaborator Friedrich Engels. In the 20th century, defenders of the capitalist system often replaced the term "capitalism" with phrases such as free enterprise and private enterprise and replaced "capitalist" with rentier and investor in reaction to the negative connotations associated with capitalism.
Characteristics
In general, capitalism as an economic system and mode of production can be summarized by the following:
Capital accumulation: production for profit and accumulation as the implicit purpose of all or most of production, constriction or elimination of production formerly carried out on a common social or private household basis.
Commodity production: production for exchange on a market; to maximize exchange-value instead of use-value.
Private ownership of the means of production.
High levels of wage labor.
The investment of money to make a profit.
The use of the price mechanism to allocate resources between competing uses.
Economically efficient use of the factors of production and raw materials due to maximization of value added in the production process.
Freedom of capitalists to act in their self-interest in managing their business and investments.
The supply of capital by "the single owner of a firm, or by shareholders in the case of a joint-stock company."
Market
In free market and laissez-faire forms of capitalism, markets are used most extensively with minimal or no regulation over the pricing mechanism. In mixed economies, which are almost universal today, markets continue to play a dominant role, but they are regulated to some extent by the state in order to correct market failures, promote social welfare, conserve natural resources, fund defense and public safety or other rationale. In state capitalist systems, markets are relied upon the least, with the state relying heavily on state-owned enterprises or indirect economic planning to accumulate capital.
Competition arises when more than one producer is trying to sell the same or similar products to the same buyers. Adherents of capitalist theory believe that competition leads to innovation and more affordable prices. Monopolies or cartels can develop, especially if there is no competition. A monopoly occurs when a firm has exclusivity over a market. Hence, the firm can engage in rent-seeking behaviors such as limiting output and raising prices because it has no fear of competition.
Governments have implemented legislation for the purpose of preventing the creation of monopolies and cartels. In 1890, the Sherman Antitrust Act became the first legislation passed by the United States Congress to limit monopolies.
Wage labor
Wage labor, usually referred to as paid work, paid employment, or paid labor, refers to the socioeconomic relationship between a worker and an employer in which the worker sells their labor power under a formal or informal employment contract. These transactions usually occur in a labor market where wages or salaries are market-determined.
In exchange for the money paid as wages (usually for short-term work contracts) or salaries (in permanent employment contracts), the work product generally becomes the undifferentiated property of the employer. A wage laborer is a person whose primary means of income is from the selling of their labor in this way.
Profit motive
The profit motive, in the theory of capitalism, is the desire to earn income in the form of profit. Stated differently, the reason for a business's existence is to turn a profit. The profit motive functions according to rational choice theory, or the theory that individuals tend to pursue what is in their own best interests. Accordingly, businesses seek to benefit themselves and/or their shareholders by maximizing profit.
In capitalist theory, the profit motive is said to ensure that resources are being allocated efficiently. For instance, Henry Hazlitt, an economic journalist aligned with the Austrian School, explains: "If there is no profit in making an article, it is a sign that the labor and capital devoted to its production are misdirected: the value of the resources that must be used up in making the article is greater than the value of the article itself".
Socialist theorists note that, unlike mercantilists, capitalists accumulate their profits while expecting their profit rates to remain the same. This causes problems as earnings in the rest of society do not increase in the same proportion ("What is capitalism?", Australian Socialist, https://search.informit.org/doi/10.3316/informit.818838886883514).
Private property
The relationship between the state, its formal mechanisms, and capitalist societies has been debated in many fields of social and political theory, with active discussion since the 19th century. Hernando de Soto is a contemporary Peruvian economist who has argued that an important characteristic of capitalism is the functioning state protection of property rights in a formal property system where ownership and transactions are clearly recorded.
According to de Soto, this is the process by which physical assets are transformed into capital, which in turn may be used in many more ways and much more efficiently in the market economy. A number of Marxian economists have argued that the Enclosure Acts in England and similar legislation elsewhere were an integral part of capitalist primitive accumulation and that specific legal frameworks of private land ownership have been integral to the development of capitalism.
Private property rights are not absolute, as in many countries the state has the power to seize private property, typically for public use, under the powers of eminent domain.
Market competition
In capitalist economics, market competition is the rivalry among sellers trying to achieve such goals as increasing profits, market share and sales volume by varying the elements of the marketing mix: price, product, distribution and promotion. Merriam-Webster defines competition in business as "the effort of two or more parties acting independently to secure the business of a third party by offering the most favourable terms". It was described by Adam Smith in The Wealth of Nations (1776) and later economists as allocating productive resources to their most highly valued uses and encouraging efficiency. Smith and other classical economists before Antoine Augustine Cournot were referring to price and non-price rivalry among producers to sell their goods on best terms by bidding of buyers, not necessarily to a large number of sellers nor to a market in final equilibrium. Competition is widespread throughout the market process. It is a condition where "buyers tend to compete with other buyers, and sellers tend to compete with other sellers". In offering goods for exchange, buyers competitively bid to purchase specific quantities of specific goods which are available, or might be available if sellers were to choose to offer such goods. Similarly, sellers bid against other sellers in offering goods on the market, competing for the attention and exchange resources of buyers. Competition results from scarcity, as it is not possible to satisfy all conceivable human wants, and occurs as people try to meet the criteria being used to determine allocation.
In the works of Adam Smith, the idea of capitalism is made possible through competition, which creates growth. Although the term "capitalism" had not yet entered mainstream economics in Smith's time, competition is vital to the construction of his ideal society and one of its foundational blocks. Smith believed that a prosperous society is one where "everyone should be free to enter and leave the market and change trades as often as he pleases." He believed that the freedom to act in one's self-interest is essential for the success of a capitalist society. A common fear is that if all participants focus on their own goals, society's well-being will be neglected. Smith maintains that despite the concerns of intellectuals, "global trends will hardly be altered if they refrain from pursuing their personal ends." He insisted that the actions of a few participants cannot alter the course of society, and that focusing on personal progress will result in growth of the whole.
Competition between participants, "who are all endeavoring to justle one another out of employment," thus "obliges every man to endeavor to execute his work," and in Smith's account this rivalry drives growth.
Economic growth
Economic growth is a characteristic tendency of capitalist economies.
As a mode of production
The capitalist mode of production refers to the systems of organising production and distribution within capitalist societies. Private money-making in various forms (renting, banking, merchant trade, production for profit and so on) preceded the development of the capitalist mode of production as such.
The term capitalist mode of production is defined by private ownership of the means of production, extraction of surplus value by the owning class for the purpose of capital accumulation, wage-based labor and, at least as far as commodities are concerned, being market-based.
Capitalism in the form of money-making activity has existed in the shape of merchants and money-lenders who acted as intermediaries between consumers and producers engaging in simple commodity production (hence the reference to "merchant capitalism") since the beginnings of civilisation. What is specific about the "capitalist mode of production" is that most of the inputs and outputs of production are supplied through the market (i.e. they are commodities) and essentially all production is in this mode. By contrast, in flourishing feudalism most or all of the factors of production, including labor, are owned by the feudal ruling class outright, and the products may also be consumed without a market of any kind: it is production for use within the feudal social unit and for limited trade. This has the important consequence that, under capitalism, the whole organisation of the production process is reshaped and re-organised to conform with economic rationality as bounded by capitalism, which is expressed in price relationships between inputs and outputs (wages, non-labor factor costs, sales and profits) rather than in the larger rational context faced by society overall; that is, the whole process is organised and re-shaped in order to conform to "commercial logic". Essentially, capital accumulation comes to define economic rationality in capitalist production.
A society, region or nation is capitalist if the predominant source of incomes and products being distributed is capitalist activity; even so, this does not necessarily mean that the capitalist mode of production is dominant in that society.
Mixed economies rely on the state to provide some goods or services, while the free market supplies the rest.
Role of government
Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses financial tools such as the interest rate to control such factors as inflation and unemployment.
Supply and demand
In capitalist economic structures, supply and demand is an economic model of price determination in a market. It postulates that in a perfectly competitive market, the unit price for a particular good will vary until it settles at a point where the quantity demanded by consumers (at the current price) will equal the quantity supplied by producers (at the current price), resulting in an economic equilibrium for price and quantity.
The "basic laws" of supply and demand, as described by David Besanko and Ronald Braeutigam, are the following four:
If demand increases (demand curve shifts to the right) and supply remains unchanged, then a shortage occurs, leading to a higher equilibrium price.
If demand decreases (demand curve shifts to the left) and supply remains unchanged, then a surplus occurs, leading to a lower equilibrium price.
If demand remains unchanged and supply increases (supply curve shifts to the right), then a surplus occurs, leading to a lower equilibrium price.
If demand remains unchanged and supply decreases (supply curve shifts to the left), then a shortage occurs, leading to a higher equilibrium price.
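These four rules can be checked with a small numerical sketch. The example below assumes hypothetical linear curves (demand Qd = a - b*P, supply Qs = c + d*P) with invented coefficients; it is purely illustrative and not drawn from Besanko and Braeutigam.

    # Illustrative only: hypothetical linear curves with invented coefficients.
    # Demand: Qd = a - b*P (downward sloping); Supply: Qs = c + d*P (upward sloping).
    # Setting Qd = Qs gives the equilibrium price P* = (a - c) / (b + d).

    def equilibrium_price(a, b, c, d):
        """Market-clearing price for linear demand (a - b*P) and supply (c + d*P)."""
        return (a - c) / (b + d)

    a, b, c, d = 100.0, 2.0, 10.0, 1.0           # baseline curves (invented numbers)
    p0 = equilibrium_price(a, b, c, d)           # baseline equilibrium price

    p1 = equilibrium_price(a + 20, b, c, d)      # law 1: demand shifts right -> price rises
    p2 = equilibrium_price(a - 20, b, c, d)      # law 2: demand shifts left  -> price falls
    p3 = equilibrium_price(a, b, c + 20, d)      # law 3: supply shifts right -> price falls
    p4 = equilibrium_price(a, b, c - 20, d)      # law 4: supply shifts left  -> price rises

    print(p0, p1 > p0, p2 < p0, p3 < p0, p4 > p0)   # 30.0 True True True True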
Supply schedule
A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied.
Demand schedule
A demand schedule, depicted graphically as the demand curve, represents the amount of some goods that buyers are willing and able to purchase at various prices, assuming all determinants of demand other than the price of the good in question, such as income, tastes and preferences, the price of substitute goods and the price of complementary goods, remain the same. According to the law of demand, the demand curve is almost always represented as downward sloping, meaning that as price decreases, consumers will buy more of the good.
Just as supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves.
Equilibrium
In the context of supply and demand, economic equilibrium refers to a state where economic forces such as supply and demand are balanced and in the absence of external influences the (equilibrium) values of economic variables will not change. For example, in the standard text-book model of perfect competition equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. Market equilibrium, in this case, refers to a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers. This price is often called the competitive price or market clearing price and will tend not to change unless demand or supply changes.
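As a toy illustration of how a supply schedule and a demand schedule jointly determine the market-clearing price, the sketch below uses invented price-quantity tables; the numbers are assumptions made for the example only.

    # Illustrative only: the price-quantity pairs are invented for the example.
    supply_schedule = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50}   # price -> quantity supplied
    demand_schedule = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10}   # price -> quantity demanded

    # The market-clearing price is the price at which the two schedules agree.
    clearing_prices = [p for p in supply_schedule
                       if supply_schedule[p] == demand_schedule.get(p)]
    print(clearing_prices)   # [3]: at a price of 3, quantity supplied = quantity demanded = 30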
Partial equilibrium
Partial equilibrium, as the name suggests, takes into consideration only a part of the market to attain equilibrium. Jain, attributing the idea to George Stigler, proposes: "A partial equilibrium is one which is based on only a restricted range of data, a standard example is price of a single product, the prices of all other products being held fixed during the analysis".
History
According to Hamid S. Hosseini, the "power of supply and demand" was discussed to some extent by several early Muslim scholars, such as fourteenth century Mamluk scholar Ibn Taymiyyah, who wrote: "If desire for goods increases while its availability decreases, its price rises. On the other hand, if availability of the good increases and the desire for it decreases, the price comes down".
John Locke's 1691 work Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money includes an early and clear description of supply and demand and their relationship. In this description, demand is rent: "The price of any commodity rises or falls by the proportion of the number of buyer and sellers" and "that which regulates the price... [of goods] is nothing else but their quantity in proportion to their rent".
David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation "On the Influence of Demand and Supply on Price".
In Principles of Political Economy and Taxation, Ricardo more rigorously laid down the idea of the assumptions that were used to build his ideas of supply and demand.
In his 1870 essay "On the Graphical Representation of Supply and Demand", Fleeming Jenkin in the course of "introduc[ing] the diagrammatic method into the English economic literature" published the first drawing of supply and demand curves therein, including comparative statics from a shift of supply or demand and application to the labor market. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics.
Types
There are many variants of capitalism in existence that differ according to country and region. They vary in their institutional makeup and by their economic policies. The common features among all the different forms of capitalism are that they are predominantly based on the private ownership of the means of production and the production of goods and services for profit; the market-based allocation of resources; and the accumulation of capital.
They include advanced capitalism, corporate capitalism, finance capitalism, free-market capitalism, mercantilism, social capitalism, state capitalism and welfare capitalism. Other theoretical variants of capitalism include anarcho-capitalism, community capitalism, humanistic capitalism, neo-capitalism, state monopoly capitalism, and technocapitalism.
Advanced
Advanced capitalism is the situation that pertains to a society in which the capitalist model has been integrated and developed deeply and extensively for a prolonged period. Various writers identify Antonio Gramsci as an influential early theorist of advanced capitalism, even if he did not use the term himself. In his writings, Gramsci sought to explain how capitalism had adapted to avoid the revolutionary overthrow that had seemed inevitable in the 19th century. At the heart of his explanation was the decline of raw coercion as a tool of class power, replaced by use of civil society institutions to manipulate public ideology in the capitalists' favour (Renate Holub, Antonio Gramsci: Beyond Marxism and Postmodernism, 2005).
Jürgen Habermas has been a major contributor to the analysis of advanced-capitalistic societies. Habermas observed four general features that characterise advanced capitalism:
Concentration of industrial activity in a few large firms.
Constant reliance on the state to stabilise the economic system.
A formally democratic government that legitimises the activities of the state and dissipates opposition to the system.
The use of nominal wage increases to pacify the most restless segments of the work force.
Corporate
Corporate capitalism is a free or mixed-market capitalist economy characterized by the dominance of hierarchical, bureaucratic corporations.
Finance
Finance capitalism is the subordination of processes of production to the accumulation of money profits in a financial system. In their critique of capitalism, Marxism and Leninism both emphasise the role of finance capital as the determining and ruling-class interest in capitalist society, particularly in the latter stages.
Rudolf Hilferding is credited with first bringing the term finance capitalism into prominence through Finance Capital, his 1910 study of the links between German trusts, banks and monopolies—a study subsumed by Vladimir Lenin into Imperialism, the Highest Stage of Capitalism (1917), his analysis of the imperialist relations of the great world powers. Lenin concluded that the banks at that time operated as "the chief nerve centres of the whole capitalist system of national economy". For the Comintern (founded in 1919), the phrase "dictatorship of finance capitalism" became a regular one.
Fernand Braudel would later point to two earlier periods when finance capitalism had emerged in human history—with the Genoese in the 16th century and with the Dutch in the 17th and 18th centuries—although at those points it developed from commercial capitalism. Giovanni Arrighi extended Braudel's analysis to suggest that a predominance of finance capitalism is a recurring, long-term phenomenon, whenever a previous phase of commercial/industrial capitalist expansion reaches a plateau.
Free market
A capitalist free-market economy is an economic system where prices for goods and services are set entirely by the forces of supply and demand and are expected, by its adherents, to reach their point of equilibrium without intervention by government policy. It typically entails support for highly competitive markets and private ownership of the means of production. Laissez-faire capitalism is a more extensive form of this free-market economy, but one in which the role of the state is limited to protecting property rights. In anarcho-capitalist theory, property rights are protected by private firms and market-generated law. According to anarcho-capitalists, this entails property rights without statutory law through market-generated tort, contract and property law, and self-sustaining private industry.
Fernand Braudel argued that free market exchange and capitalism are to some degree opposed; free market exchange involves transparent public transactions and a large number of equal competitors, while capitalism involves a small number of participants using their capital to control the market via private transactions, control of information, and limitation of competition.
Mercantile
Mercantilism is a nationalist form of early capitalism that came into existence approximately in the late 16th century. It is characterized by the intertwining of national business interests with state-interest and imperialism. Consequently, the state apparatus is used to advance national business interests abroad. An example of this is colonists living in America who were only allowed to trade with and purchase goods from their respective mother countries (e.g., Britain, France and Portugal). Mercantilism was driven by the belief that the wealth of a nation is increased through a positive balance of trade with other nations—it corresponds to the phase of capitalist development sometimes called the primitive accumulation of capital.
Social
A social market economy is a free-market or mixed-market capitalist system, sometimes classified as a coordinated market economy, where government intervention in price formation is kept to a minimum, but the state provides significant services in areas such as social security, health care, unemployment benefits and the recognition of labor rights through national collective bargaining arrangements.
This model is prominent in Western and Northern European countries as well as Japan, albeit in slightly different configurations. The vast majority of enterprises are privately owned in this economic model.
Rhine capitalism is the contemporary adaptation of the social market model that exists in continental Western Europe today.
State
State capitalism is a capitalist market economy dominated by state-owned enterprises, where the state enterprises are organized as commercial, profit-seeking businesses. The designation has been used broadly throughout the 20th century to designate a number of different economic forms, ranging from state-ownership in market economies to the command economies of the former Eastern Bloc. According to Aldo Musacchio, a professor at Harvard Business School, state capitalism is a system in which governments, whether democratic or autocratic, exercise a widespread influence on the economy either through direct ownership or various subsidies. Musacchio notes a number of differences between today's state capitalism and its predecessors. In his opinion, gone are the days when governments appointed bureaucrats to run companies: the world's largest state-owned enterprises are now traded on the public markets and kept in good health by large institutional investors. Contemporary state capitalism is associated with the East Asian model of capitalism, dirigisme and the economy of Norway. Alternatively, Merriam-Webster defines state capitalism as "an economic system in which private capitalism is modified by a varying degree of government ownership and control".
In Socialism: Utopian and Scientific, Friedrich Engels argued that state-owned enterprises would characterize the final stage of capitalism, consisting of ownership and management of large-scale production and communication by the bourgeois state. In his writings, Vladimir Lenin characterized the economy of Soviet Russia as state capitalist, believing state capitalism to be an early step toward the development of socialism (V.I. Lenin, "To the Russian Colony in North America", Lenin Collected Works, vol. 42, Progress Publishers, Moscow, 1971, pp. 425c–27a).
Some economists and left-wing academics including Richard D. Wolff and Noam Chomsky, as well as many Marxist philosophers and revolutionaries such as Raya Dunayevskaya and C.L.R. James, argue that the economies of the former Soviet Union and Eastern Bloc represented a form of state capitalism because their internal organization within enterprises and the system of wage labor remained intact (Noam Chomsky, "The Soviet Union Versus Socialism", Our Generation, 1986, retrieved 9 July 2015).
The term is not used by Austrian School economists to describe state ownership of the means of production. The economist Ludwig von Mises argued that the designation of state capitalism was a new label for the old labels of state socialism and planned economy and differed only in non-essentials from these earlier designations.
Welfare
Welfare capitalism is capitalism that includes social welfare policies. Today, welfare capitalism is most often associated with the models of capitalism found in Central mainland Europe and Northern Europe, such as the Nordic model, the social market economy and Rhine capitalism. In some cases, welfare capitalism exists within a mixed economy, but welfare states can and do exist independently of policies common to mixed economies such as state interventionism and extensive regulation.
A mixed economy is a largely market-based capitalist economy consisting of both private and public ownership of the means of production and economic interventionism through macroeconomic policies intended to correct market failures, reduce unemployment and keep inflation low. The degree of intervention in markets varies among different countries. Some mixed economies such as France under dirigisme also featured a degree of indirect economic planning over a largely capitalist-based economy.
Most modern capitalist economies are defined as mixed economies to some degree; however, the French economist Thomas Piketty states that capitalist economies might shift to a much more laissez-faire approach in the near future.
Eco-capitalism
Eco-capitalism, also known as "environmental capitalism" or (sometimes) "green capitalism", is the view that capital exists in nature as "natural capital" (ecosystems that have ecological yield) on which all wealth depends. Therefore, governments should use market-based policy-instruments (such as a carbon tax) to resolve environmental problems.
The term "Blue Greens" is often applied to those who espouse eco-capitalism. Eco-capitalism can be thought of as the right-wing equivalent to Red Greens.
Sustainable capitalism
Sustainable capitalism is a conceptual form of capitalism based upon sustainable practices that seek to preserve humanity and the planet while reducing externalities, and it bears a resemblance to conventional capitalist economic policy. A capitalistic economy must expand to survive and find new markets to support this expansion. Capitalist systems are often destructive to the environment as well as to certain individuals without access to proper representation. Sustainability implies quite the opposite: not only a continuation, but a replenishing of resources. Sustainability is often thought of as related to environmentalism, but sustainable capitalism applies sustainable principles to economic governance and the social aspects of capitalism as well.
The importance of sustainable capitalism has been more recently recognized, but the concept is not new. Changes to the current economic model would have heavy social, environmental and economic implications and require the efforts of individuals, as well as the compliance of local, state and federal governments. Controversy surrounds the concept, as it requires an increase in sustainable practices and a marked decrease in current consumptive behaviors.
The concept was outlined in Al Gore and David Blood's manifesto for Generation Investment Management, which describes a long-term political, economic and social structure that would mitigate current threats to the planet and society. According to their manifesto, sustainable capitalism would integrate environmental, social and governance (ESG) aspects into risk assessment in an attempt to limit externalities. Most of the ideas they list relate to economic and social changes, but strikingly few are explicitly related to any environmental policy change.
Capital accumulation
The accumulation of capital is the process of "making money" or growing an initial sum of money through investment in production. Capitalism is based on the accumulation of capital, whereby financial capital is invested in order to make a profit and then reinvested into further production in a continuous process of accumulation. In Marxian economic theory, this dynamic is called the law of value. Capital accumulation forms the basis of capitalism, where economic activity is structured around the accumulation of capital, defined as investment in order to realize a financial profit. In this context, "capital" is defined as money or a financial asset invested for the purpose of making more money (whether in the form of profit, rent, interest, royalties, capital gain or some other kind of return).
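As a purely arithmetical illustration of this reinvestment cycle, the sketch below assumes an initial sum and a constant rate of profit (both invented for the example) and compounds the returns over a few periods.

    # Illustrative only: an assumed starting sum and a constant 10% rate of profit,
    # with all profit reinvested rather than consumed.
    capital = 1000.0
    rate_of_profit = 0.10

    for period in range(1, 6):
        profit = capital * rate_of_profit    # return realised in this period
        capital += profit                    # reinvested into further production
        print(period, round(capital, 2))
    # After five periods the initial 1000.0 has grown to 1610.51 (1000 * 1.1**5).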
In mainstream economics, accounting and Marxian economics, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation. In modern macroeconomics and econometrics, the phrase "capital formation" is often used in preference to "accumulation", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to "accumulation". The term "accumulation" is occasionally used in national accounts.
Wage labor
Wage labor refers to the sale of labor under a formal or informal employment contract to an employer. These transactions usually occur in a labor market where wages are market determined. In Marxist economics, the owners of the means of production and suppliers of capital are generally called capitalists. The description of the role of the capitalist has shifted, first referring to a useless intermediary between producers, then to an employer of producers, and finally to the owners of the means of production. Labor includes all physical and mental human resources, including entrepreneurial capacity and management skills, which are required to produce products and services. Production is the act of making goods or services by applying labor power (Richard H. Robbins, Global Problems and the Culture of Capitalism, Boston: Allyn & Bacon, 2007).
Criticism
Criticism of capitalism comes from various political and philosophical approaches, including anarchist, socialist, religious and nationalist viewpoints. Of those who oppose it or want to modify it, some believe that capitalism should be removed through revolution while others believe that it should be changed slowly through political reforms.
Prominent critiques of capitalism allege that it is inherently exploitative, alienating, unstable, unsustainable, and economically inefficient—and that it creates massive economic inequality, commodifies people, degrades the environment, is anti-democratic, and leads to an erosion of human rights because of its incentivization of imperialist expansion and war.
Other critics argue that such inequities are not due to the ethically neutral construct of the economic system commonly known as capitalism, but to the ethics of those who shape and execute the system. For example, some contend that Milton Friedman's (human) ethic of 'maximizing shareholder value' creates a harmful form of capitalism, while a Millard Fuller or John Bogle (human) ethic of 'enough' creates a sustainable form. Equitable ethics and unified ethical decision-making are theorized to create a less damaging form of capitalism.
See also
Anti-capitalism
Advanced capitalism
Ancient economic thought
Bailout Capitalism
Capitalism (disambiguation)
Christian views on poverty and wealth
Communism
Corporatocracy
Crony capitalism
Economic sociology
Free market
Global financial crisis in September 2008
Humanistic economics
Invisible hand
Late capitalism
Le Livre noir du capitalisme
Market socialism
Perspectives on capitalism by school of thought
Post-capitalism
Post-Fordism
Racial capitalism
Rent-seeking
State monopoly capitalism
Surveillance capitalism
Perestroika
References
Notes
Bibliography
Krahn, Harvey J., and Graham S. Lowe (1993). Work, Industry, and Canadian Society. Second ed. Scarborough, Ont.: Nelson Canada. xii, 430 p.
Further reading
Alperovitz, Gar (2011). America Beyond Capitalism: Reclaiming Our Wealth, Our Liberty, and Our Democracy, 2nd Edition. Democracy Collaborative Press.
Ascher, Ivan. Portfolio Society: On the Capitalist Mode of Prediction. Zone Books, 2016.
Baptist, Edward E. The Half Has Never Been Told: Slavery and the Making of American Capitalism. New York: Basic Books, 2014.
Braudel, Fernand. Civilization and Capitalism.
Callinicos, Alex. "Wage Labour and State Capitalism – A reply to Peter Binns and Mike Haynes", International Socialism, second series, 12, Spring 1979.
Farl, Erich. "The Genealogy of State Capitalism". In: International London, vol. 2, no. 1, 1973.
Gough, Ian. State Expenditure in Advanced Capitalism. New Left Review.
Habermas, J. (1973). Legitimation Crisis (English translation by T. McCarthy). Boston: Beacon.
Hyman, Louis and Edward E. Baptist (2014). American Capitalism: A Reader. Simon & Schuster. .
Jameson, Fredric (1991). Postmodernism, or, the Cultural Logic of Late Capitalism.
Kotler, Philip (2015). Confronting Capitalism: Real Solutions for a Troubled Economic System. AMACOM.
Mandel, Ernest (1999). Late Capitalism.
Marcel van der Linden, Western Marxism and the Soviet Union. New York, Brill Publishers, 2007.
Mayfield, Anthony. "Economics", in his On the Brink: Resource Depletion, Debt Collapse, and Super-technology ([Vancouver, B.C., Canada]: On the Brink Publishing, 2013), pp. 50–104.
Panitch, Leo, and Sam Gindin (2012). The Making of Global Capitalism: the Political Economy of American Empire. London: Verso.
Polanyi, Karl (2001). The Great Transformation: The Political and Economic Origins of Our Time. Beacon Press; 2nd ed.
Richards, Jay W. (2009). Money, Greed, and God: Why Capitalism is the Solution and Not the Problem. New York: HarperOne.
Roberts, Paul Craig (2013). The Failure of Laissez-faire Capitalism: towards a New Economics for a Full World. Atlanta, Ga.: Clarity Press.
Robinson, William I. Global Capitalism and the Crisis of Humanity. Cambridge University Press, 2014.
Hoevet, Ocean. "Capital as a Social Relation" (New Palgrave article)
Sombart, Werner (1916) Der moderne Kapitalismus. Historisch-systematische Darstellung des gesamteuropäischen Wirtschaftslebens von seinen Anfängen bis zur Gegenwart. Final edn. 1916, repr. 1969, paperback edn. (3 vols. in 6): 1987 Munich: dtv. (Also in Spanish; no English translation yet.)
Tarnoff, Ben, "Better, Faster, Stronger" (review of John Tinnell, The Philosopher of Palo Alto: Mark Weisner, Xerox PARC, and the Original Internet of Things, University of Chicago Press, 347 pp.; and Malcolm Harris, Palo Alto: A History of California, Capitalism, and the World, Little, Brown, 708 pp.), The New York Review of Books, vol. LXX, no. 14 (21 September 2023), pp. 38–40. "[Palo Alto is] a place where the [United States'] contradictions are sharpened to their finest points, above all the defining and enduring contradictions between democratic principle and antidemocratic practice. There is nothing as American as celebrating equality while subverting it. Or as Californian." (p. 40.)
External links
Capitalism at Encyclopædia Britannica Online.
Selected Titles on Capitalism and Its Discontents. Harvard University Press.
5421 | https://en.wikipedia.org/wiki/Cardiology | Cardiology | Cardiology is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery.
Specializations
All cardiologists in the branch of medicine study the disorders of the heart, but the study of adult and child heart disorders each require different training pathways. Therefore, an adult cardiologist (often simply called "cardiologist") is inadequately trained to take care of children, and pediatric cardiologists are not trained to treat adult heart disease. Surgical aspects are not included in cardiology and are in the domain of cardiothoracic surgery. For example, coronary artery bypass surgery (CABG), cardiopulmonary bypass and valve replacement are surgical procedures performed by surgeons, not cardiologists. However, some minimally invasive procedures such as cardiac catheterization and pacemaker implantation are performed by cardiologists who have additional training in non-surgical interventions (interventional cardiology and electrophysiology respectively).
Adult cardiology
Cardiology is a specialty of internal medicine. To be a cardiologist in the United States, a three-year residency in internal medicine is followed by a three-year fellowship in cardiology. It is possible to specialize further in a sub-specialty. Recognized sub-specialties in the U.S. by the Accreditation Council for Graduate Medical Education are cardiac electrophysiology, echocardiography, interventional cardiology, and nuclear cardiology. Recognized subspecialties in the U.S. by the American Osteopathic Association Bureau of Osteopathic Specialists include clinical cardiac electrophysiology and interventional cardiology. In India, a three-year residency in General Medicine or Pediatrics after M.B.B.S. and then three years of residency in cardiology are needed to be a D.M./Diplomate of National Board (DNB) in Cardiology.
Per Doximity, adult cardiologists earn an average of $436,849 per year in the U.S.
Cardiac electrophysiology
Cardiac electrophysiology is the science of elucidating, diagnosing, and treating the electrical activities of the heart. The term is usually used to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation (PES). These studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures increasingly include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures. Other therapeutic modalities employed in this field include antiarrhythmic drug therapy and implantation of pacemakers and automatic implantable cardioverter-defibrillators (AICD).
The cardiac electrophysiology study typically measures the response of the injured or cardiomyopathic myocardium to PES on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of electrophysiology-study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or AICD.
Clinical cardiac electrophysiology
Clinical cardiac electrophysiology is a branch of the medical specialty of cardiology and is concerned with the study and treatment of rhythm disorders of the heart. Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart. Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances (arrhythmias). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia.
The training required to become an electrophysiologist is long and requires eight years after medical school (within the U.S.): three years of internal medicine residency, three years of cardiology fellowship, and two years of clinical cardiac electrophysiology fellowship.
Cardiogeriatrics
Cardiogeriatrics, or geriatric cardiology, is the branch of cardiology and geriatric medicine that deals with the cardiovascular disorders in elderly people.
Cardiac disorders such as coronary heart disease, including myocardial infarction, heart failure, cardiomyopathy, and arrhythmias such as atrial fibrillation, are common and are a major cause of mortality in elderly people. Vascular disorders such as atherosclerosis and peripheral arterial disease cause significant morbidity and mortality in aged people.
Imaging
Cardiac imaging includes echocardiography (echo), cardiac magnetic resonance imaging (CMR), and computed tomography of the heart.
Those who specialize in cardiac imaging may undergo more training in all imaging modes or focus on a single imaging modality.
Echocardiography (or "echo") uses standard two-dimensional, three-dimensional, and Doppler ultrasound to create images of the heart.
Those who specialize in echo may spend a significant amount of their clinical time reading echoes and performing transesophageal echo, in particular using the latter during procedures such as insertion of a left atrial appendage occlusion device.
Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis.
Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries.
Interventional cardiology
Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest.
The main advantages of the interventional cardiology or radiology approach are the avoidance of the scars, pain, and long post-operative recovery associated with open surgery. Additionally, the interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. This procedure can also be done proactively, when areas of the vascular system become occluded from atherosclerosis. The cardiologist threads a catheter sheath through the vascular system to access the heart. This sheath carries a balloon and a tiny wire mesh tube wrapped around it, and if the cardiologist finds a blockage or stenosis, they can inflate the balloon at the occlusion site to flatten or compress the plaque against the vascular wall. Once that is complete, a stent is placed as a type of scaffold to hold the vasculature open permanently.
Cardiomyopathy/heart failure
Cardiologists who narrow their focus from general cardiology to the cardiomyopathies often also specialize in heart transplantation and pulmonary hypertension. Cardiomyopathy is a disease of the heart muscle, in which the heart muscle can become inflamed and thickened.
Cardiooncology
A recent specialization of cardiology is that of cardiooncology.
This area specializes in the cardiac management of those with cancer and, in particular, those with plans for chemotherapy or who have experienced cardiac complications of chemotherapy.
Preventive cardiology and cardiac rehabilitation
In recent times, the focus has gradually shifted to preventive cardiology due to the increased cardiovascular disease burden at an early age. According to the WHO, 37% of all premature deaths are due to cardiovascular diseases, and of these, 82% occur in low- and middle-income countries. Clinical cardiology is the subspecialty of cardiology that looks after preventive cardiology and cardiac rehabilitation. Preventive cardiology also deals with routine preventive checkups through noninvasive tests, specifically electrocardiography, fasegraphy, stress tests, lipid profiles and general physical examination, to detect cardiovascular diseases at an early age, while cardiac rehabilitation is an emerging branch of cardiology that helps a person regain overall strength and live a normal life after a cardiovascular event. A subspecialty of preventive cardiology is sports cardiology. Because heart disease is the leading cause of death in the world, including in the United States (cdc.gov), national health campaigns and randomized controlled research have developed to improve heart health.
Pediatric cardiology
Helen B. Taussig is known as the founder of pediatric cardiology. She became famous through her work on tetralogy of Fallot, a congenital heart defect in which oxygenated and deoxygenated blood enter the circulatory system together as a result of a ventricular septal defect (VSD) right beneath the aorta. This condition causes newborns to have a bluish tint (cyanosis) and a deficiency of oxygen in their tissues (hypoxemia). She worked with Alfred Blalock and Vivien Thomas at the Johns Hopkins Hospital, where they experimented with dogs to work out how they might surgically cure these "blue babies". They eventually did so by anastomosing a systemic artery to the pulmonary artery, a procedure called the Blalock-Taussig shunt.
Tetralogy of Fallot, pulmonary atresia, double outlet right ventricle, transposition of the great arteries, persistent truncus arteriosus, and Ebstein's anomaly are various congenital cyanotic heart diseases, in which the blood of the newborn is not oxygenated efficiently, due to the heart defect.
Adult congenital heart disease
As more children with congenital heart disease are surviving into adulthood, a hybrid of adult & pediatric cardiology has emerged called adult congenital heart disease (ACHD).
This field can be entered as either adult or pediatric cardiology.
ACHD specializes in congenital diseases in the setting of adult diseases (e.g., coronary artery disease, COPD, diabetes), a combination that is otherwise atypical for adult or pediatric cardiology.
The heart
As the center focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries. The heart is located in the middle of the thorax (chest), with its apex pointing slightly towards the left side.
Disorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the U.S. and caused 24.95% of total deaths in 2008.
The primary responsibility of the heart is to pump blood throughout the body.
It pumps blood from the body (the systemic circulation), through the lungs (the pulmonary circulation), and then back out to the body.
This means that the heart is connected to and affects the entirety of the body. Simplified, the heart is a circuit of the circulation.
While plenty is known about the healthy heart, the bulk of study in cardiology is in disorders of the heart and the restoration, where possible, of function.
The heart is a muscle that squeezes blood and functions like a pump. The heart's systems can be classified as either electrical or mechanical, and both of these systems are susceptible to failure or dysfunction.
The electrical system of the heart is centered on the periodic contraction (squeezing) of the muscle cells that is caused by the cardiac pacemaker located in the sinoatrial node.
The study of the electrical aspects is a sub-field of electrophysiology called cardiac electrophysiology and is epitomized with the electrocardiogram (ECG/EKG).
The action potentials generated in the pacemaker propagate throughout the heart in a specific pattern. The system that carries this potential is called the electrical conduction system.
Dysfunction of the electrical system manifests in many ways and may include Wolff–Parkinson–White syndrome, ventricular fibrillation, and heart block.
The mechanical system of the heart is centered on the fluidic movement of blood and the functionality of the heart as a pump.
The mechanical part is ultimately the purpose of the heart and many of the disorders of the heart disrupt the ability to move blood.
Heart failure is one condition in which the mechanical properties of the heart have failed or are failing, which means insufficient blood is being circulated. Failure to move a sufficient amount of blood through the body can cause damage or failure of other organs and may result in death if severe.
Coronary circulation
Coronary circulation is the circulation of blood in the blood vessels of the heart muscle (the myocardium). The vessels that deliver oxygen-rich blood to the myocardium are known as coronary arteries. The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein and the anterior cardiac veins.
As the left and right coronary arteries run on the surface of the heart, they can be called epicardial coronary arteries. These arteries, when healthy, are capable of autoregulation to maintain coronary blood flow at levels appropriate to the needs of the heart muscle. These relatively narrow vessels are commonly affected by atherosclerosis and can become blocked, causing angina or myocardial infarction (also known as a heart attack). The coronary arteries that run deep within the myocardium are referred to as subendocardial.
The coronary arteries are classified as "end circulation", since they represent the only source of blood supply to the myocardium; there is very little redundant blood supply, which is why blockage of these vessels can be so critical.
Cardiac examination
The cardiac examination (also called the "precordial exam"), is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations especially the respiratory examination.
Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation.
Heart disorders
Cardiology is concerned with the normal functionality of the heart and the deviation from a healthy heart. Many disorders involve the heart itself, but some are outside of the heart and in the vascular system. Collectively, the two are jointly termed the cardiovascular system, and diseases of one part tend to affect the other.
Coronary artery disease
Coronary artery disease, also known as "ischemic heart disease", is a group of diseases that includes stable angina, unstable angina, and myocardial infarction, and it is one of the causes of sudden cardiac death. It is within the group of cardiovascular diseases, of which it is the most common type. A common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. Usually symptoms occur with exercise or emotional stress, last less than a few minutes, and get better with rest. Shortness of breath may also occur and sometimes no symptoms are present. The first sign is occasionally a heart attack. Other complications include heart failure or an irregular heartbeat.
Risk factors include: high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol, among others. Other risks include depression. The underlying mechanism involves atherosclerosis of the arteries of the heart. A number of tests may help with diagnoses including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, and coronary angiogram, among others.
Prevention is by eating a healthy diet, regular exercise, maintaining a healthy weight and not smoking. Sometimes medication for diabetes, high cholesterol, or high blood pressure are also used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets including aspirin, beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improve life expectancy or decreases heart attack risk.
In 2013 CAD was the most common cause of death globally, resulting in 8.14 million deaths (16.8%) up from 5.74 million deaths (12%) in 1990. The risk of death from CAD for a given age has decreased between 1980 and 2010 especially in developed countries. The number of cases of CAD for a given age has also decreased between 1990 and 2010. In the U.S. in 2010 about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45. Rates are higher among men than women of a given age.
Cardiomyopathy
Heart failure, or formally cardiomyopathy, is the impaired function of the heart, and there are numerous causes and forms of heart failure.
Cardiac arrhythmia
Cardiac arrhythmia, also known as "cardiac dysrhythmia" or "irregular heartbeat", is a group of conditions in which the heartbeat is too fast, too slow, or irregular in its rhythm. A heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia. A heart rate that is too slow – below 60 beats per minute – is called bradycardia. Many types of arrhythmia present no symptoms. When symptoms are present, they may include palpitations, or feeling a pause between heartbeats. More serious symptoms may include lightheadedness, passing out, shortness of breath, or chest pain. While most types of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in cardiac arrest.
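A minimal sketch of the resting adult rate thresholds quoted above is given below. It is illustrative only: classifying an actual arrhythmia depends on the rhythm seen on an electrocardiogram, not on the rate alone, and the function name is chosen for the example.

    # Illustrative only: the resting adult thresholds stated in the text above.
    def classify_adult_heart_rate(beats_per_minute):
        if beats_per_minute > 100:
            return "tachycardia (above 100 beats per minute)"
        if beats_per_minute < 60:
            return "bradycardia (below 60 beats per minute)"
        return "rate within the normal adult range (60-100 beats per minute)"

    print(classify_adult_heart_rate(45))    # bradycardia
    print(classify_adult_heart_rate(72))    # within the normal range
    print(classify_adult_heart_rate(130))   # tachycardia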
There are four main types of arrhythmia: extra beats, supraventricular tachycardias, ventricular arrhythmias, and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions, and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter, and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Arrhythmias are due to problems with the electrical conduction system of the heart. Arrhythmias may occur in children; however, the normal range for the heart rate is different and depends on age. A number of tests can help diagnose arrhythmia, including an electrocardiogram and Holter monitor.
Most arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers or agents that attempt to restore a normal heart rhythm such as procainamide. This latter group may have more significant side effects, especially if taken for a long period of time. Pacemakers are often used for slow heart rates. Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia may receive urgent treatment with a jolt of electricity in the form of cardioversion or defibrillation.
Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population. Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease or about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people.
Cardiac arrest
Cardiac arrest is a sudden stop in effective blood flow due to the failure of the heart to contract effectively. Symptoms include loss of consciousness and abnormal or absent breathing. Some people may have chest pain, shortness of breath, or nausea before this occurs. If not treated within minutes, death usually occurs.
The most common cause of cardiac arrest is coronary artery disease. Less common causes include major blood loss, lack of oxygen, very low potassium, heart failure, and intense physical exercise. A number of inherited disorders may also increase the risk, including long QT syndrome. The initial heart rhythm is most often ventricular fibrillation. The diagnosis is confirmed by finding no pulse. While a cardiac arrest may be caused by a heart attack or heart failure, these are not the same.
Prevention includes not smoking, physical activity, and maintaining a healthy weight. Treatment for cardiac arrest is immediate cardiopulmonary resuscitation (CPR) and, if a shockable rhythm is present, defibrillation. Among those who survive, targeted temperature management may improve outcomes. An implantable cardiac defibrillator may be placed to reduce the chance of death from recurrence.
In the United States, cardiac arrest outside of hospital occurs in about 13 per 10,000 people per year (326,000 cases). In-hospital cardiac arrest occurs in an additional 209,000 cases. Cardiac arrest becomes more common with age. It affects males more often than females. The percentage of people who survive with treatment is about 8%. Many who survive have significant disability. Many U.S. television shows, however, have portrayed unrealistically high survival rates of 67%.
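The incidence figures above can be checked with a rough back-of-the-envelope calculation. In the Python sketch below, the 250-million population base and the function name are assumptions introduced only for this illustration; the point is simply that a rate of 13 per 10,000 per year applied to a population of that size reproduces a figure close to the quoted 326,000 cases, and that the roughly 8% survival figure then implies on the order of 26,000 survivors:

def annual_cases(rate_per_10000, population):
    # Convert an incidence quoted per 10,000 people per year into an annual case count.
    return rate_per_10000 / 10000 * population

# Assumed population base of roughly 250 million, chosen only so the quoted
# rate of 13 per 10,000 lands near the quoted 326,000 out-of-hospital cases.
cases = annual_cases(13, 250_000_000)   # about 325,000
survivors = cases * 0.08                # about 26,000 at a survival rate of ~8%
print(round(cases), round(survivors))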
Hypertension
Hypertension, also known as "high blood pressure", is a long-term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms. Long-term high blood pressure, however, is a major risk factor for coronary artery disease, stroke, heart failure, peripheral vascular disease, vision loss, and chronic kidney disease.
Lifestyle factors can increase the risk of hypertension. These include excess salt in the diet, excess body weight, smoking, and alcohol consumption. Hypertension can also be caused by other diseases, or occur as a side-effect of drugs.
Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively. Normal blood pressure at rest is within the range of 100–140 millimeters of mercury (mmHg) systolic and 60–90 mmHg diastolic. High blood pressure is present if the resting blood pressure is persistently at or above 140/90 mmHg for most adults. Different numbers apply to children. When diagnosing high blood pressure, ambulatory blood pressure monitoring over a 24-hour period appears to be more accurate than "in-office" blood pressure measurement at a physician's office or other blood pressure screening location.
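As a worked illustration of those thresholds, the Python sketch below encodes only the adult resting figures quoted above (at or above 140/90 mmHg counts as high; 100–140 mmHg systolic and 60–90 mmHg diastolic is the quoted normal resting range). The function name and labels are invented for this example, and it is not a diagnostic protocol, which in practice relies on repeated or ambulatory measurements:

def classify_resting_blood_pressure(systolic_mmhg, diastolic_mmhg):
    # Label a single adult resting reading using the thresholds quoted above.
    if systolic_mmhg >= 140 or diastolic_mmhg >= 90:
        return "high blood pressure (at or above 140/90 mmHg)"
    if systolic_mmhg >= 100 and diastolic_mmhg >= 60:
        return "within the quoted normal resting range"
    return "below the quoted normal resting range"

# 150/95 mmHg is flagged as high, 120/80 mmHg as normal, 95/55 mmHg as below range.
for reading in ((150, 95), (120, 80), (95, 55)):
    print(reading, "->", classify_resting_blood_pressure(*reading))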
Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. Lifestyle changes include weight loss, decreased salt intake, physical exercise, and a healthy diet. If changes in lifestyle are insufficient, blood pressure medications may be used. A regimen of up to three medications effectively controls blood pressure in 90% of people. The treatment of moderate to severe high arterial blood pressure (defined as >160/100 mmHg) with medication is associated with an improved life expectancy and reduced morbidity. The effect of treatment for blood pressure between 140/90 mmHg and 160/100 mmHg is less clear, with some studies finding benefits while others do not. High blood pressure affects between 16% and 37% of the population globally. In 2010, hypertension was believed to have been a factor in 18% (9.4 million) deaths.
Essential versus secondary hypertension
Essential hypertension is the form of hypertension that by definition has no identifiable cause. It is the most common type of hypertension, affecting 95% of hypertensive patients; it tends to be familial and is likely to be the consequence of an interaction between environmental and genetic factors. The prevalence of essential hypertension increases with age, and individuals with relatively high blood pressure at younger ages are at increased risk for the subsequent development of hypertension.
Hypertension can increase the risk of cerebral, cardiac, and renal events.
Secondary hypertension is a type of hypertension which is caused by an identifiable underlying secondary cause. It is much less common than essential hypertension, affecting only 5% of hypertensive patients. It has many different causes including endocrine diseases, kidney diseases, and tumors. It also can be a side effect of many medications.
Complications of hypertension
Complications of hypertension are clinical outcomes that result from persistent elevation of blood pressure. Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries.
Congenital heart defects
A congenital heart defect, also known as a "congenital heart anomaly" or "congenital heart disease", is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem. Symptoms can vary from none to life-threatening. When present they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain. Most congenital heart problems do not occur with other diseases. Complications that can result from heart defects include heart failure.
The cause of a congenital heart defect is often unknown. Certain cases may be due to infections during pregnancy such as rubella, use of certain medications or drugs such as alcohol or tobacco, parents being closely related, or poor nutritional status or obesity in the mother. Having a parent with a congenital heart defect is also a risk factor. A number of genetic conditions are associated with heart defects including Down syndrome, Turner syndrome, and Marfan syndrome. Congenital heart defects are divided into two main groups: cyanotic heart defects and non-cyanotic heart defects, depending on whether the child has the potential to turn bluish in color. The problems may involve the interior walls of the heart, the heart valves, or the large blood vessels that lead to and from the heart.
Congenital heart defects are partly preventable through rubella vaccination, the adding of iodine to salt, and the adding of folic acid to certain food products. Some defects do not need treatment. Others may be effectively treated with catheter-based procedures or heart surgery. Occasionally a number of operations may be needed. Occasionally heart transplantation is required. With appropriate treatment, outcomes are generally good, even with complex problems.
Heart defects are the most common birth defect. In 2013 they were present in 34.3 million people globally. They affect between 4 and 75 per 1,000 live births depending upon how they are diagnosed. About 6 to 19 per 1,000 cause a moderate to severe degree of problems. Congenital heart defects are the leading cause of birth defect-related deaths. In 2013 they resulted in 323,000 deaths down from 366,000 deaths in 1990.
Tetralogy of Fallot
Tetralogy of Fallot is the most common cyanotic congenital heart disease, arising in 1–3 cases per 1,000 births. The defect involves a ventricular septal defect (VSD) and an overriding aorta. Combined, these defects cause deoxygenated blood to bypass the lungs and go right back into the circulatory system. The modified Blalock-Taussig shunt is usually used to fix the circulation. This procedure is done by placing a graft between the subclavian artery and the ipsilateral pulmonary artery to restore the correct blood flow.
Pulmonary atresia
Pulmonary atresia happens in 7–8 per 100,000 births and is characterized by failure of the pulmonary valve to form, so that blood cannot flow from the right ventricle into the pulmonary artery and on to the lungs. Deoxygenated blood therefore bypasses the lungs and enters the circulatory system. Surgery can fix this by re-establishing the connection between the right ventricle and the pulmonary artery.
There are two types of pulmonary atresia, classified by whether or not the baby also has a ventricular septal defect.
Pulmonary atresia with an intact ventricular septum: This type of pulmonary atresia is associated with a complete, intact septum between the ventricles.
Pulmonary atresia with a ventricular septal defect: This type of pulmonary atresia happens when a ventricular septal defect allows blood to flow into and out of the right ventricle.
Double outlet right ventricle
Double outlet right ventricle (DORV) is when both great arteries, the pulmonary artery and the aorta, are connected to the right ventricle. A VSD is usually present, in different positions depending on the variation of DORV (typically 50% are subaortic and 30%). The surgeries that can be done to fix this defect vary with the differing physiology and blood flow of the defective heart. One repair is VSD closure and placement of conduits to restore the blood flow between the left ventricle and the aorta and between the right ventricle and the pulmonary artery. Another is a systemic-to-pulmonary artery shunt in cases associated with pulmonary stenosis. A balloon atrial septostomy can also be done to relieve hypoxemia caused by DORV with the Taussig-Bing anomaly while surgical correction is awaited.
Transposition of great arteries
There are two different types of transposition of the great arteries, Dextro-transposition of the great arteries and Levo-transposition of the great arteries, depending on where the chambers and vessels connect. Dextro-transposition happens in about 1 in 4,000 newborns and is when the right ventricle pumps blood into the aorta and deoxygenated blood enters the bloodstream. The temporary procedure is to create an atrial septal defect. A permanent fix is more complicated and involves redirecting the pulmonary return to the right atrium and the systemic return to the left atrium, which is known as the Senning procedure. The Rastelli procedure can also be done by rerouting the left ventricular outflow, dividing the pulmonary trunk, and placing a conduit in between the right ventricle and pulmonary trunk. Levo-transposition happens in about 1 in 13,000 newborns and is characterized by the left ventricle pumping blood into the lungs and the right ventricle pumping the blood into the aorta. This may not produce problems at first, but problems eventually develop because of the different pressures each ventricle uses to pump blood. Switching the left ventricle to be the systemic ventricle and the right ventricle to pump blood into the pulmonary artery can repair levo-transposition.
Persistent truncus arteriosus
Persistent truncus arteriosus is when the truncus arteriosus fails to split into the aorta and pulmonary trunk. This occurs in about 1 in 11,000 live births and allows both oxygenated and deoxygenated blood into the body. The repair consists of a VSD closure and the Rastelli procedure.
Ebstein anomaly
Ebstein's anomaly is characterized by a right atrium that is significantly enlarged and a heart that is shaped like a box. This is very rare and happens in less than 1% of congenital heart disease cases. The surgical repair varies depending on the severity of the disease.
Pediatric cardiology is a sub-specialty of pediatrics. To become a pediatric cardiologist in the U.S., one must complete a three-year residency in pediatrics, followed by a three-year fellowship in pediatric cardiology. Per Doximity, pediatric cardiologists make an average of $303,917 in the U.S.
Diagnostic tests in cardiology
Diagnostic tests in cardiology are the methods of identifying heart conditions and of distinguishing healthy from unhealthy, pathologic heart function. The starting point is obtaining a medical history, followed by auscultation. Blood tests, electrophysiological procedures, and cardiac imaging can then be ordered for further analysis. Electrophysiological procedures include the electrocardiogram, cardiac monitoring, cardiac stress testing, and the electrophysiology study.
Trials
Cardiology is known for randomized controlled trials that guide clinical treatment of cardiac diseases. While dozens are published every year, there are landmark trials that shift treatment significantly. Trials often have an acronym of the trial name, and this acronym is used to reference the trial and its results. Some of these landmark trials include:
V-HeFT (1986) — use of vasodilators (hydralazine & isosorbide dinitrate) in heart failure
ISIS-2 (1988) — use of aspirin in myocardial infarction
CAST (1991) — use of antiarrhythmic agents after a heart attack increases mortality
SOLVD (1991) — use of ACE inhibitors in heart failure
4S (1994) — statins reduce risk of heart disease
CURE (2001) — use of dual antiplatelet therapy in NSTEMI
MIRACLE (2002) — use of cardiac resynchronization therapy in heart failure
SCD-HeFT (2005) — the use of implantable cardioverter-defibrillator in heart failure
RELY (2009), ROCKET-AF (2011), ARISTOTLE (2011) — use of DOACs in atrial fibrillation instead of warfarin
ISCHEMIA (2020) — medical therapy is as good as coronary stents in stable heart disease
Cardiology community
Associations
American College of Cardiology
American Heart Association
European Society of Cardiology
Heart Rhythm Society
Canadian Cardiovascular Society
Indian Heart Association
National Heart Foundation of Australia
Cardiology Society of India
Journals
Acta Cardiologica
American Journal of Cardiology
Annals of Cardiac Anaesthesia
Current Research: Cardiology
Cardiology in Review
Circulation
Circulation Research
Clinical and Experimental Hypertension
Clinical Cardiology
EP – Europace
European Heart Journal
Heart
Heart Rhythm
International Journal of Cardiology
Journal of the American College of Cardiology
Pacing and Clinical Electrophysiology
Indian Heart Journal
Cardiologists
Robert Atkins (1930–2003), known for the Atkins diet
Eugene Braunwald (born 1929), editor of Braunwald's Heart Disease and 1000+ publications
Wallace Brigden (1916–2008), identified cardiomyopathy
Manoj Durairaj (1971– ), cardiologist from Pune, India who received Pro Ecclesia et Pontifice
Willem Einthoven (1860–1927), a physiologist who built the first practical ECG and won the 1924 Nobel Prize in Physiology or Medicine ("for the discovery of the mechanism of the electrocardiogram")
Werner Forssmann (1904–1979), who infamously performed the first human catheterization on himself that led to him being let go from Berliner Charité Hospital, quitting cardiology as a speciality, and then winning the 1956 Nobel Prize in Physiology or Medicine ("for their discoveries concerning heart catheterization and pathological changes in the circulatory system")
Andreas Gruentzig (1939–1985), first developed balloon angioplasty
William Harvey (1578–1657), wrote Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus that first described the closed circulatory system and whom Forssmann described as founding cardiology in his Nobel lecture
Murray S. Hoffman (1924–2018), who as president of the Colorado Heart Association initiated one of the first jogging programs promoting cardiac health
Max Holzmann (1899–1994), co-founder of the Swiss Society of Cardiology, president from 1952 to 1955
Samuel A. Levine (1891–1966), recognized the sign known as Levine's sign as well as the current grading of the intensity of heart murmurs, known as the Levine scale
Henry Joseph Llewellyn "Barney" Marriott (1917–2007), ECG interpretation and Practical Electrocardiography
Bernard Lown (1921–2021), original developer of the defibrillator
Woldemar Mobitz (1889–1951), described and classified the two types of second-degree atrioventricular block often called "Mobitz Type I" and "Mobitz Type II"
Jacqueline Noonan (1928–2020), discoverer of Noonan syndrome that is the top syndromic cause of congenital heart disease
John Parkinson (1885–1976), known for Wolff–Parkinson–White syndrome
Helen B. Taussig (1898–1986), founder of pediatric cardiology and extensively worked on blue baby syndrome
Paul Dudley White (1886–1973), known for Wolff–Parkinson–White syndrome
Fredrick Arthur Willius (1888–1972), founder of the cardiology department at the Mayo Clinic and an early pioneer of electrocardiography
Louis Wolff (1898–1972), known for Wolff–Parkinson–White syndrome
Karel Frederik Wenckebach (1864–1940), first described what is now called type I second-degree atrioventricular block in 1898
See also
Glossary of medicine
List of cardiac pharmaceutical agents
Outline of cardiology
References
Sources
External links
American Heart Association |
5428 | https://en.wikipedia.org/wiki/History%20of%20Cambodia | History of Cambodia | The history of Cambodia, a country in mainland Southeast Asia, can be traced back to Indian civilization. Detailed records of a political structure on the territory of what is now Cambodia first appear in Chinese annals in reference to Funan, a polity that encompassed the southernmost part of the Indochinese peninsula during the 1st to 6th centuries. Centered at the lower Mekong, Funan is noted as the oldest regional Hindu culture, which suggests prolonged socio-economic interaction with maritime trading partners of the Indosphere in the west. By the 6th century a civilization, called Chenla or Zhenla in Chinese annals, firmly replaced Funan, as it controlled larger, more undulating areas of Indochina and maintained more than a singular centre of power.
The Khmer Empire was established by the early 9th century. Sources refer here to a mythical initiation and consecration ceremony to claim political legitimacy by founder Jayavarman II at Mount Kulen (Mount Mahendra) in 802 CE. A succession of powerful sovereigns, continuing the Hindu devaraja cult tradition, reigned over the classical era of Khmer civilization until the 11th century. A new dynasty of provincial origin introduced Buddhism, which according to some scholars resulted in royal religious discontinuities and general decline. The royal chronology ends in the 14th century. Great achievements in administration, agriculture, architecture, hydrology, logistics, urban planning and the arts are testimony to a creative and progressive civilisation - in its complexity a cornerstone of Southeast Asian cultural legacy.
The decline continued through a transitional period of approximately 100 years followed by the Middle Period of Cambodian history, also called the Post-Angkor Period, beginning in the mid 15th century. Although the Hindu cults had by then been all but replaced, the monument sites at the old capital remained an important spiritual centre.
Yet since the mid 15th century the core population steadily moved to the east and – with brief exceptions – settled at the confluence of the Mekong and Tonle Sap rivers at Chaktomuk, Longvek and Oudong.
Maritime trade was the basis for a very prosperous 16th century. But, as a result, foreigners – Muslim Malays and Cham, Christian European adventurers and missionaries – increasingly disturbed and influenced government affairs. Ambiguous fortunes, a robust economy on the one hand and a disturbed culture and compromised royalty on the other, were constant features of the Longvek era.
By the 15th century, the Khmers' traditional neighbours, the Mon people in the west and the Cham people in the east had gradually been pushed aside or replaced by the resilient Siamese/Thai and Annamese/Vietnamese, respectively. These powers had perceived, understood and increasingly followed the imperative of controlling the lower Mekong basin as the key to control all Indochina. A weak Khmer kingdom only encouraged the strategists in Ayutthaya (later in Bangkok) and in Huế. Attacks on and conquests of Khmer royal residences left sovereigns without a ceremonial and legitimate power base. Interference in succession and marriage policies added to the decay of royal prestige. Oudong was established in 1601 as the last royal residence of the Middle Period.
The 19th-century arrival of the then technologically more advanced and ambitious European colonial powers, with concrete policies of global control, put an end to the regional feuds. Siam/Thailand, although humiliated and on the retreat, escaped colonisation as a buffer state, while Vietnam was to be the focal point of French colonial ambition.
Cambodia, although largely neglected, had entered the Indochinese Union as a perceived entity and was able to carry and reclaim its identity and integrity into modernity.
After 80 years of colonial hibernation, the brief episode of Japanese occupation during World War II, which coincided with the investiture of king Sihanouk, was the opening act of the irreversible process towards re-emancipation and modern Cambodian history.
The Kingdom of Cambodia (1953–70), independent since 1953, struggled to remain neutral in a world shaped by polarisation of the nuclear powers USA and Soviet Union.
As the Indochinese war escalated and Cambodia became increasingly involved, the Khmer Republic was proclaimed in 1970. Another result was a civil war, which by 1975 ended with the takeover by the Khmer Rouge. Cambodia endured its darkest hour – Democratic Kampuchea – followed by the long aftermath of Vietnamese occupation, the People's Republic of Kampuchea and the UN Mandate towards Modern Cambodia since 1993.
Prehistory and early history
Radiocarbon dating of a cave at Laang Spean in Battambang Province, northwest Cambodia, confirmed the presence of Hoabinhian stone tools from 6000–7000 BCE and pottery from 4200 BCE. Starting in 2009, archaeological research of the Franco-Cambodian Prehistoric Mission has documented a complete cultural sequence from 71,000 years BP to the Neolithic period in the cave. Finds since 2012 lead to the common interpretation that the cave contains the archaeological remains of a first occupation by hunter-gatherer groups, followed by Neolithic people with highly developed hunting strategies and stone tool making techniques, as well as highly artistic pottery making and design, and with elaborate social, cultural, symbolic and exequial practices. Cambodia participated in the Maritime Jade Road, which was in place in the region for 3,000 years, from 2000 BCE to 1000 CE.
Skulls and human bones found at Samrong Sen in Kampong Chhnang Province date from 1500 BCE. Heng Sophady (2007) has drawn comparisons between Samrong Sen and the circular earthwork sites of eastern Cambodia. These people may have migrated from South-eastern China to the Indochinese Peninsula. Scholars trace the first cultivation of rice and the first bronze making in Southeast Asia to these people.
A 2010 examination of skeletal material from graves at Phum Snay in north-west Cambodia revealed an exceptionally high number of injuries, especially to the head, likely to have been caused by interpersonal violence. The graves also contain a quantity of swords and other offensive weapons used in conflict.
The Iron Age period of Southeast Asia begins around 500 BCE and lasts until the end of the Funan era, around 500 CE; it provides the first concrete evidence for sustained maritime trade and socio-political interaction with India and South Asia. By the 1st century settlers had developed complex, organised societies and a varied religious cosmology that required advanced spoken languages, very much related to those of the present day. The most advanced groups lived along the coast and in the lower Mekong River valley and the delta regions, in houses on stilts, where they cultivated rice, fished and kept domesticated animals.
Funan Kingdom (1st century – 550/627)
Chinese annals contain detailed records of the first known organised polity, the Kingdom of Funan, on Cambodian and Vietnamese territory, characterised by "high population and urban centers, the production of surplus food...socio-political stratification [and] legitimized by Indian religious ideologies". It was centered around the lower Mekong and Bassac rivers from the first to the sixth century CE, with "walled and moated cities" such as Angkor Borei in Takeo Province and Óc Eo in modern An Giang Province, Vietnam.
Early Funan was composed of loose communities, each with its own ruler, linked by a common culture and a shared economy of rice farming people in the hinterland and traders in the coastal towns, who were economically interdependent, as surplus rice production found its way to the ports.
By the second century CE Funan controlled the strategic coastline of Indochina and the maritime trade routes. Cultural and religious ideas reached Funan via the Indian Ocean trade route. Trade with India had commenced well before 500 BCE as Sanskrit hadn't yet replaced Pali. Funan's language has been determined as to have been an early form of Khmer and its written form was Sanskrit.
In the period 245–250 CE dignitaries of the Chinese Kingdom of Wu visited the Funan city Vyadharapura. Envoys Kang Tai and Zhu Ying defined Funan as a distinct Hindu culture. Trade with China had begun after the southward expansion of the Han Dynasty, around the 2nd century BCE. Effectively, Funan "controlled strategic land routes in addition to coastal areas" and occupied a prominent position as an "economic and administrative hub" between the Indian Ocean trade network and China, collectively known as the Maritime Silk Road. Trade routes that eventually ended in distant Rome are corroborated by Roman and Persian coins and artefacts unearthed at archaeological sites of 2nd and 3rd century settlements.
Funan is associated with myths, such as the Kattigara legend and the Khmer founding legend in which an Indian Brahman or prince named Preah Thaong in Khmer, Kaundinya in Sanskrit and Hun-t’ien in Chinese records marries the local ruler, a princess named Nagi Soma (Lieu-Ye in Chinese records), thus establishing the first Cambodian royal dynasty.
Scholars debate how deeply the narrative is rooted in actual events, and Kaundinya's origin and status. A Chinese document that underwent four alterations and a 3rd-century epigraphic inscription of Champa are the contemporary sources. Some scholars consider the story to be simply an allegory for the diffusion of Indic Hindu and Buddhist beliefs into ancient local cosmology and culture, whereas some historians dismiss it chronologically.
Chinese annals report that Funan reached its territorial climax in the early 3rd century under the rule of king Fan Shih-man, extending as far south as Malaysia and as far west as Burma. A system of mercantilism in commercial monopolies was established. Exports ranged from forest products to precious metals and commodities such as gold, elephants, ivory, rhinoceros horn, kingfisher feathers, wild spices like cardamom, lacquer, hides and aromatic wood. Under Fan Shih-man Funan maintained a formidable fleet and was administered by an advanced bureaucracy, based on a "tribute-based economy, that produced a surplus which was used to support foreign traders along its coasts and ostensibly to launch expansionist missions to the west and south".
Historians maintain contradicting ideas about Funan's political status and integrity. Miriam T. Stark calls it simply Funan: [The]"notion of Fu Nan as an early "state"...has been built largely by historians using documentary and historical evidence" and Michael Vickery remarks: "Nevertheless, it is...unlikely that the several ports constituted a unified state, much less an 'empire'". Other sources though, imply imperial status: "Vassal kingdoms spread to southern Vietnam in the east and to the Malay peninsula in the west" and "Here we will look at two empires of this period...Funan and Srivijaya".
The question of how Funan came to an end is, in the face of almost universal scholarly conflict, impossible to pin down. Chenla is the name of Funan's successor in Chinese annals, first appearing in 616/617 CE.
The archaeological approach to and interpretation of the entire early historic period is considered to be a decisive supplement for future research. The "Lower Mekong Archaeological Project" focuses on the development of political complexity in this region during the early historic period. LOMAP survey results of 2003 to 2005, for example, have helped to determine that "...the region's importance continued unabated throughout the pre-Angkorian period...and that at least three [surveyed areas] bear Angkorian-period dates and suggest the continued importance of the delta."
Chenla Kingdom (6th century – 802)
The History of the Chinese Sui dynasty contains records that a state called Chenla sent an embassy to China in 616 or 617 CE. It says that Chenla was a vassal of Funan, but under its ruler Citrasena-Mahendravarman it conquered Funan and gained independence.
Most of the Chinese recordings on Chenla, including that of Chenla conquering Funan, have been contested since the 1970s, as they are generally based on single remarks in the Chinese annals, and as more domestic epigraphic sources have become available. Author Claude Jacques emphasised the very vague character of the Chinese terms 'Funan' and 'Chenla' and summarises: "Very basic historical mistakes have been made" because "the history of pre-Angkorean Cambodia was reconstructed much more on the basis of Chinese records than on that of [Cambodian] inscriptions", and as new inscriptions were discovered, researchers "preferred to adjust the newly discovered facts to the initial outline rather than to call the Chinese reports into question".
The notion of Chenla's centre being in modern Laos has also been contested. "All that is required is that it be inland from Funan." The most important political record of pre-Angkor Cambodia, the inscription K53 from Ba Phnom, dated 667 CE does not indicate any political discontinuity, either in royal succession of kings Rudravarman, Bhavavarman I, Mahendravarman [Citrasena], Īśānavarman, and Jayavarman I or in the status of the family of officials who produced the inscription. Another inscription of a few years later, K44, 674 CE, commemorating a foundation in Kampot province under the patronage of Jayavarman I, refers to an earlier foundation in the time of King Raudravarma, presumably Rudravarman of Funan, and again there is no suggestion of political discontinuity.
The History of the T'ang asserts that shortly after 706 the country was split into Land Chenla and Water Chenla. The names signify a northern and a southern half, which may conveniently be referred to as Upper and Lower Chenla.
By the late 8th century Water Chenla had become a vassal of the Sailendra dynasty of Java – the last of its kings was killed and the polity incorporated into the Javanese monarchy around 790 CE. Land Chenla acquired independence under Jayavarman II in 802 CE.
The Khmers, vassals of Funan, reached the Mekong river from the northern Menam River via the Mun River Valley. Chenla, their first independent state, developed out of Funanese influence.
Ancient Chinese records mention two kings, Shrutavarman and Shreshthavarman who ruled at the capital Shreshthapura located in modern-day southern Laos. The immense influence on the identity of Cambodia to come was wrought by the Khmer Kingdom of Bhavapura, in the modern day Cambodian city of Kampong Thom. Its legacy was its most important sovereign, Ishanavarman who completely conquered the kingdom of Funan during 612–628. He chose his new capital at the Sambor Prei Kuk, naming it Ishanapura.
Khmer Empire (802–1431)
The six centuries of the Khmer Empire are characterised by unparalleled technical and artistic progress and achievements, political integrity and administrative stability. The empire represents the cultural and technical apogee of the Cambodian and Southeast Asian pre-industrial civilisation.
The Khmer Empire was preceded by Chenla, a polity with shifting centres of power, which was split into Land Chenla and Water Chenla in the early 8th century. By the late 8th century Water Chenla was absorbed by the Malays of the Srivijaya Empire and the Javanese of the Shailandra Empire and eventually incorporated into Java and Srivijaya.
Jayavarman II, ruler of Land Chenla, initiated a mythical Hindu consecration ceremony at Mount Kulen (Mount Mahendra) in 802 CE, intended to proclaim political autonomy and royal legitimacy. By declaring himself devaraja – god-king, divinely appointed and uncontested – he simultaneously declared independence from Shailandra and Srivijaya. He established Hariharalaya, the first capital of the Angkorean area, near the modern town of Roluos.
Indravarman I (877–889) and his son and successor Yasovarman I (889–900), who established the capital Yasodharapura, ordered the construction of huge water reservoirs (barays) north of the capital. The water management network depended on elaborate configurations of channels, ponds, and embankments built from huge quantities of clayey sand, the available bulk material on the Angkor plain. Dikes of the East Baray still exist today, which are more than long and wide. The largest component is the West Baray, a reservoir about long and across, containing approximately 50 million m3 of water.
Royal administration was based on the religious idea of the Shivaite Hindu state and the central cult of the sovereign as warlord and protector – the "Varman". This centralised system of governance appointed royal functionaries to provinces. The Mahidharapura dynasty – its first king was Jayavarman VI (1080 to 1107) – which originated west of the Dângrêk Mountains in the Mun river valley, discontinued the old "ritual policy" and genealogical traditions and, crucially, Hinduism as the exclusive state religion. Some historians relate the empire's decline to these religious discontinuities.
The area that comprises the various capitals was spread out over around ; it is nowadays commonly called Angkor. The combination of sophisticated wet-rice agriculture, based on an engineered irrigation system, and the Tonlé Sap's spectacular abundance in fish and aquatic fauna as a protein source guaranteed a regular food surplus. Recent geo-surveys have confirmed that Angkor maintained the largest pre-industrial settlement complex worldwide during the 12th and 13th centuries – some three quarters of a million people lived there. Sizeable contingents of the public workforce were redirected to monument building and infrastructure maintenance. A growing number of researchers relates the progressive over-exploitation of the delicate local eco-system and its resources, alongside large-scale deforestation and resulting erosion, to the empire's eventual decline.
Under king Suryavarman II (1113–1150) the empire reached its greatest geographic extent as it directly or indirectly controlled Indochina, the Gulf of Thailand and large areas of northern maritime Southeast Asia. Suryavarman II commissioned the temple of Angkor Wat, built over a period of 37 years; with its five towers representing Mount Meru, it is considered to be the most accomplished expression of classical Khmer architecture. However, territorial expansion ended when Suryavarman II was killed in battle attempting to invade Đại Việt. It was followed by a period of dynastic upheaval and a Cham invasion that culminated in the sack of Angkor in 1177.
King Jayavarman VII (reigned 1181–1219) is generally considered to be Cambodia's greatest King. A Mahayana Buddhist, he initiates his reign by striking back against Champa in a successful campaign. During his nearly forty years in power he becomes the most prolific monument builder, who establishes the city of Angkor Thom with its central temple the Bayon. Further outstanding works are attributed to him – Banteay Kdei, Ta Prohm, Neak Pean and Sra Srang. The construction of an impressive number of utilitarian and secular projects and edifices, such as maintenance of the extensive road network of Suryavarman I, in particular the royal road to Phimai and the many rest houses, bridges and hospitals make Jayavarman VII unique among all imperial rulers.
In August 1296, the Chinese diplomat Zhou Daguan arrived at Angkor and remained at the court of king Srindravarman until July 1297. He wrote a detailed report, The Customs of Cambodia, on life in Angkor. His portrayal is one of the most important sources of understanding historical Angkor as the text offers valuable information on the everyday life and the habits of the inhabitants of Angkor.
The last Sanskrit inscription is dated 1327, and records the succession of Indrajayavarman by Jayavarman IX Parameshwara (1327–1336).
The empire was an agrarian state that consisted essentially of three social classes: the elite, workers and slaves. The elite included advisers, military leaders, courtiers, priests, religious ascetics and officials. Workers included agricultural labourers and also a variety of craftsmen for construction projects. Slaves were often captives from military campaigns or distant villages. Coinage did not exist and the barter economy was based on agricultural produce, principally rice, with regional trade an insignificant part of the economy.
Post-Angkor Period of Cambodia (1431–1863)
The term "Post-Angkor Period of Cambodia", also the "Middle Period" refers to the historical era from the early 15th century to 1863, the beginning of the French Protectorate of Cambodia. Reliable sources – particularly for the 15th and 16th century – are very rare. A conclusive explanation that relates to concrete events manifesting the decline of the Khmer Empire has not yet been produced. However, most modern historians contest that several distinct and gradual changes of religious, dynastic, administrative and military nature, environmental problems and ecological imbalance coincided with shifts of power in Indochina and must all be taken into account to make an interpretation. In recent years, focus has notably shifted towards studies on climate changes, human–environment interactions and the ecological consequences.
Epigraphy in temples ends in the third decade of the fourteenth century and does not resume until the mid-16th century. Recording of the Royal Chronology discontinues with King Jayavarman IX Parameshwara (or Jayavarma-Paramesvara) – there exists not a single contemporary record of even a king's name for over 200 years. Construction of monumental temple architecture had come to a standstill after Jayavarman VII's reign. According to author Michael Vickery there only exist external sources for Cambodia's 15th century, the Chinese Ming Shilu annals and the earliest Royal Chronicle of Ayutthaya. Wang Shi-zhen (王世貞), a Chinese scholar of the 16th century, remarked: "The official historians are unrestrained and are skilful at concealing the truth; but the memorials and statutes they record and the documents they copy cannot be discarded."
The central reference point for the entire 15th century is a Siamese intervention of some undisclosed nature at the capital Yasodharapura (Angkor Thom) around the year 1431. Historians relate the event to the shift of Cambodia's political centre southward to the region of Phnom Penh, Longvek and later Oudong.
Sources for the 16th century are more numerous. The kingdom is centred at the Mekong, prospering as an integral part of the Asian maritime trade network, via which the first contact with European explorers and adventurers does occur. Wars with the Siamese result in loss of territory and eventually the conquest of the capital Longvek in 1594. Richard Cocks of the East India Company established trade with Cochin China and Cambodia by 1618, but the Cambodia commerce was not authorized by the directors in London and was short-lived until it was revived in 1651, again without authorization. The Vietnamese on their "Southward March" reach Prei Nokor/Saigon at the Mekong Delta in the 17th century. This event initiates the slow process of Cambodia losing access to the sea and independent maritime trade.
Siamese and Vietnamese dominance intensified during the 17th and 18th centuries, resulting in frequent displacements of the seat of power as the Khmer royal authority decreased to the state of a vassal. In the early 19th century, with dynasties in Vietnam and Siam firmly established, Cambodia was placed under joint suzerainty, having lost its national sovereignty. British agent John Crawfurd states: "...the King of that ancient Kingdom is ready to throw himself under the protection of any European nation..." To save Cambodia from being incorporated into Vietnam and Siam, the Cambodians entreated the aid of the Luzones/Lucoes (Filipinos from Luzon, Philippines) who had previously participated in the Burmese-Siamese wars as mercenaries. By the time the embassy arrived in Luzon, the rulers there were Spaniards, so the Cambodians asked them for aid as well, together with their Latin American troops brought from Mexico, in order to restore the then Christianised King, Satha II, as monarch of Cambodia once a Thai/Siamese invasion had been repelled. That restoration, however, was only temporary. Nevertheless, the future King, Ang Duong, also enlisted the aid of the French, who were allied to the Spanish (as Spain was then ruled by a French royal dynasty, the Bourbons). The Cambodian king agreed to colonial France's offers of protection in order to restore the existence of the Cambodian monarchy, which took effect with King Norodom Prohmbarirak signing and officially recognising the French protectorate on 11 August 1863.
French colonial period (1863–1953)
In August 1863 King Norodom signed an agreement with the French placing the kingdom under the protection of France. The original treaty left Cambodian sovereignty intact, but French control gradually increased, with important landmarks in 1877, 1884, and 1897, until by the end of the century the king's authority no longer existed outside the palace. Norodom died in 1904, and his two successors, Sisowath and Monivong, were content to allow the French to control the country, but in 1940 France was defeated in a brief border war with Thailand and forced to surrender the provinces of Battambang and Angkor (the ancient site of Angkor itself was retained). King Monivong died in April 1941, and the French placed the obscure Prince Sihanouk on the throne as king, believing that the inexperienced 18-year-old would be more pliable than Monivong's middle-aged son, Prince Monireth.
Cambodia's situation at the end of the war was chaotic. The Free French, under General Charles de Gaulle, were determined to recover Indochina, though they offered Cambodia and the other Indochinese protectorates a carefully circumscribed measure of self-government. Convinced that they had a "civilizing mission", they envisioned Indochina's participation in a French Union of former colonies that shared the common experience of French culture.
Administration of Sihanouk (1953–70)
On 9 March 1945, during the Japanese occupation of Cambodia, young king Norodom Sihanouk proclaimed an independent Kingdom of Kampuchea, following a formal request by the Japanese. Shortly thereafter the Japanese government nominally ratified the independence of Cambodia and established a consulate in Phnom Penh. The new government did away with the romanization of the Khmer language that the French colonial administration was beginning to enforce and officially reinstated the Khmer script. This measure taken by the short-lived governmental authority would be popular and long-lasting, for since then no government in Cambodia has tried to romanise the Khmer language again.
After Allied military units entered Cambodia, the Japanese military forces present in the country were disarmed and repatriated. The French were able to reimpose the colonial administration in Phnom Penh in October the same year.
Sihanouk's "royal crusade for independence" resulted in grudging French acquiescence to his demands for a transfer of sovereignty. A partial agreement was struck in October 1953. Sihanouk then declared that independence had been achieved and returned in triumph to Phnom Penh. As a result of the 1954 Geneva Conference on Indochina, Cambodia was able to bring about the withdrawal of the Viet Minh troops from its territory and to withstand any residual impingement upon its sovereignty by external powers.
Neutrality was the central element of Cambodian foreign policy during the 1950s and 1960s. By the mid-1960s, parts of Cambodia's eastern provinces were serving as bases for North Vietnamese Army and National Liberation Front (NVA/NLF) forces operating against South Vietnam, and the port of Sihanoukville was being used to supply them. As NVA/VC activity grew, the United States and South Vietnam became concerned, and in 1969, the United States began a 14-month-long series of bombing raids targeted at NVA/VC elements, contributing to destabilisation. The bombing campaign took place no further than ten, and later inside the Cambodian border, areas where the Cambodian population had been evicted by the NVA. Prince Sihanouk, fearing that the conflict between communist North Vietnam and South Vietnam might spill over to Cambodia, publicly opposed the idea of a bombing campaign by the United States along the Vietnam–Cambodia border and inside Cambodian territory. However, Peter Rodman claimed, "Prince Sihanouk complained bitterly to us about these North Vietnamese bases in his country and invited us to attack them". In December 1967 Washington Post journalist Stanley Karnow was told by Sihanouk that if the US wanted to bomb the Vietnamese communist sanctuaries, he would not object, unless Cambodians were killed. The same message was conveyed to US President Johnson's emissary Chester Bowles in January 1968. So the US had no real motivation to overthrow Sihanouk. However, Prince Sihanouk wanted Cambodia to stay out of the North Vietnam–South Vietnam conflict and was very critical of the United States government and its allies (the South Vietnamese government). Prince Sihanouk, facing internal struggles of his own, due to the rise of the Khmer Rouge, did not want Cambodia to be involved in the conflict. Sihanouk wanted the United States and its allies (South Vietnam) to keep the war away from the Cambodian border. Sihanouk did not allow the United States to use Cambodian air space and airports for military purposes. This upset the United States greatly and contributed to its view of Prince Sihanouk as a North Vietnamese sympathiser and a thorn in the side of the United States. However, declassified documents indicate that, as late as March 1970, the Nixon administration was hoping to garner "friendly relations" with Sihanouk.
Throughout the 1960s, domestic Cambodian politics became polarised. Opposition to the government grew within the middle class and leftists including Paris-educated leaders like Son Sen, Ieng Sary, and Saloth Sar (later known as Pol Pot), who led an insurgency under the clandestine Communist Party of Kampuchea (CPK). Sihanouk called these insurgents the Khmer Rouge, literally the "Red Khmer". But the 1966 national assembly elections showed a significant swing to the right, and General Lon Nol formed a new government, which lasted until 1967. During 1968 and 1969, the insurgency worsened. However, members of the government and army, who resented Sihanouk's ruling style as well as his tilt away from the United States, did have a motivation to overthrow him.
Khmer Republic and the War (1970–75)
While visiting Beijing in 1970 Sihanouk was ousted by a military coup led by Prime Minister General Lon Nol and Prince Sisowath Sirik Matak in the early hours of 18 March 1970.
However, as early as 12 March 1970, the CIA Station Chief told Washington that, based on communications from Sirik Matak, Lon Nol's cousin, "the (Cambodian) army was ready for a coup". Lon Nol assumed power after the military coup and immediately allied Cambodia with the United States. Son Ngoc Thanh, an opponent of Pol Pot, announced his support for the new government. On 9 October, the Cambodian monarchy was abolished, and the country was renamed the Khmer Republic. The new regime immediately demanded that the Vietnamese communists leave Cambodia.
Hanoi rejected the new republic's request for the withdrawal of NVA troops. In response, the United States moved to provide material assistance to the new government's armed forces, which were engaged against both CPK insurgents and NVA forces. The North Vietnamese and Viet Cong forces, desperate to retain their sanctuaries and supply lines from North Vietnam, immediately launched armed attacks on the new government. The North Vietnamese quickly overran large parts of eastern Cambodia, reaching to within of Phnom Penh. The North Vietnamese turned the newly won territories over to the Khmer Rouge. The king urged his followers to help in overthrowing this government, hastening the onset of civil war.
In April 1970, US President Richard Nixon announced to the American public that US and South Vietnamese ground forces had entered Cambodia in a campaign aimed at destroying NVA base areas in Cambodia (see Cambodian Incursion). The US had already been bombing Vietnamese positions in Cambodia for well over a year by that point. Although a considerable quantity of equipment was seized or destroyed by US and South Vietnamese forces, containment of North Vietnamese forces proved elusive.
The Khmer Republic's leadership was plagued by disunity among its three principal figures: Lon Nol, Sihanouk's cousin Sirik Matak, and National Assembly leader In Tam. Lon Nol remained in power in part because none of the others were prepared to take his place. In 1972, a constitution was adopted, a parliament elected, and Lon Nol became president. But disunity, the problems of transforming a 30,000-man army into a national combat force of more than 200,000 men, and spreading corruption weakened the civilian administration and army.
The Khmer Rouge insurgency inside Cambodia continued to grow, aided by supplies and military support from North Vietnam. Pol Pot and Ieng Sary asserted their dominance over the Vietnamese-trained communists, many of whom were purged. At the same time, the Khmer Rouge (CPK) forces became stronger and more independent of their Vietnamese patrons. By 1973, the CPK were fighting battles against government forces with little or no North Vietnamese troop support, and they controlled nearly 60% of Cambodia's territory and 25% of its population.
The government made three unsuccessful attempts to enter into negotiations with the insurgents, but by 1974, the CPK was operating openly as divisions, and some of the NVA combat forces had moved into South Vietnam. Lon Nol's control was reduced to small enclaves around the cities and main transportation routes. More than two million refugees from the war lived in Phnom Penh and other cities.
On New Year's Day 1975, Communist troops launched an offensive which, in 117 days of the hardest fighting of the war, caused the collapse of the Khmer Republic. Simultaneous attacks around the perimeter of Phnom Penh pinned down Republican forces, while other CPK units overran fire bases controlling the vital lower Mekong resupply route. A US-funded airlift of ammunition and rice ended when Congress refused additional aid for Cambodia. The Lon Nol government in Phnom Penh surrendered on 17 April 1975, just five days after the US mission evacuated Cambodia.
Foreign involvement in the rise of the Khmer Rouge
The relationship between the massive carpet bombing of Cambodia by the United States and the growth of the Khmer Rouge, in terms of recruitment and popular support, has been a matter of interest to historians. Some historians, including Michael Ignatieff, Adam Jones and Greg Grandin, have cited the United States intervention and bombing campaign (spanning 1965–1973) as a significant factor which led to increased support for the Khmer Rouge among the Cambodian peasantry. According to Ben Kiernan, the Khmer Rouge "would not have won power without U.S. economic and military destabilization of Cambodia. ... It used the bombing's devastation and massacre of civilians as recruitment propaganda and as an excuse for its brutal, radical policies and its purge of moderate communists and Sihanoukists." Pol Pot biographer David P. Chandler writes that the bombing "had the effect the Americans wanted – it broke the Communist encirclement of Phnom Penh", but it also accelerated the collapse of rural society and increased social polarization. Peter Rodman and Michael Lind claimed that the United States intervention saved the Lon Nol regime from collapse in 1970 and 1973. Craig Etcheson acknowledged that U.S. intervention increased recruitment for the Khmer Rouge but disputed that it was a primary cause of the Khmer Rouge victory. William Shawcross wrote that the United States bombing and ground incursion plunged Cambodia into the chaos that Sihanouk had worked for years to avoid.
By 1973, Vietnamese support of the Khmer Rouge had largely disappeared. China "armed and trained" the Khmer Rouge both during the civil war and the years afterward.
Owing to Chinese, U.S., and Western support, the Khmer Rouge-dominated Coalition Government of Democratic Kampuchea (CGDK) held Cambodia's UN seat until 1993, long after the Cold War had ended. China has defended its ties with the Khmer Rouge. Chinese Foreign Ministry spokeswoman Jiang Yu said that "the government of Democratic Kampuchea had a legal seat at the United Nations, and had established broad foreign relations with more than 70 countries".
Democratic Kampuchea (Khmer Rouge era) (1975–79)
Immediately after its victory, the CPK ordered the evacuation of all cities and towns, sending the entire urban population into the countryside to work as farmers, as the CPK was trying to reshape society into a model that Pol Pot had conceived.
The new government sought to completely restructure Cambodian society. Remnants of the old society were abolished and religion was suppressed. Agriculture was collectivised, and the surviving part of the industrial base was abandoned or placed under state control. Cambodia had neither a currency nor a banking system.
Democratic Kampuchea's relations with Vietnam and Thailand worsened rapidly as a result of border clashes and ideological differences. While communist, the CPK was fiercely nationalistic, and most of its members who had lived in Vietnam were purged. Democratic Kampuchea established close ties with the People's Republic of China, and the Cambodian-Vietnamese conflict became part of the Sino-Soviet rivalry, with Moscow backing Vietnam. Border clashes worsened when the Democratic Kampuchea military attacked villages in Vietnam. The regime broke off relations with Hanoi in December 1977, protesting Vietnam's alleged attempt to create an Indochina Federation. In mid-1978, Vietnamese forces invaded Cambodia, advancing about before the arrival of the rainy season.
The reasons for Chinese support of the CPK were to prevent a pan-Indochina movement and to maintain Chinese military superiority in the region. The Soviet Union supported a strong Vietnam to maintain a second front against China in case of hostilities and to prevent further Chinese expansion. Since Stalin's death, relations between Mao-controlled China and the Soviet Union had been lukewarm at best. In February to March 1979, China and Vietnam would fight the brief Sino-Vietnamese War over the issue.
In December 1978, Vietnam announced the formation of the Kampuchean United Front for National Salvation (KUFNS) under Heng Samrin, a former DK division commander. It was composed of Khmer Communists who had remained in Vietnam after 1975 and officials from the eastern sector—like Heng Samrin and Hun Sen—who had fled to Vietnam from Cambodia in 1978. In late December 1978, Vietnamese forces launched a full invasion of Cambodia, capturing Phnom Penh on 7 January 1979 and driving the remnants of Democratic Kampuchea's army westward toward Thailand.
Within the CPK, the Paris-educated leadership—Pol Pot, Ieng Sary, Nuon Chea, and Son Sen—were in control. A new constitution in January 1976 established Democratic Kampuchea as a Communist People's Republic, and a 250-member Assembly of the Representatives of the People of Kampuchea (PRA) was selected in March to choose the collective leadership of a State Presidium, the chairman of which became the head of state.
Prince Sihanouk resigned as head of state on 2 April. On 14 April, after its first session, the PRA announced that Khieu Samphan would chair the State Presidium for a 5-year term. It also picked a 15-member cabinet headed by Pol Pot as prime minister. Prince Sihanouk was put under virtual house arrest.
Destruction and deaths caused by the regime
20,000 people died of exhaustion or disease during the evacuation of Phnom Penh and its aftermath. Many of those forced to evacuate the cities were resettled in newly created villages, which lacked food, agricultural implements, and medical care. Many who lived in cities had lost the skills necessary for survival in an agrarian environment. Thousands starved before the first harvest. Hunger and malnutrition—bordering on starvation—were constant during those years. Most military and civilian leaders of the former regime who failed to disguise their pasts were executed.
Some of the ethnicities in Cambodia, such as the Cham and Vietnamese, suffered specific, targeted and violent persecution, to the point that some international sources refer to it as the "Cham genocide". Entire families and towns were targeted and attacked with the goal of significantly diminishing their numbers and eventually eliminating them. Life in 'Democratic Kampuchea' was strict and brutal. In many areas of the country people were rounded up and executed for speaking a foreign language, wearing glasses, scavenging for food, being absent from government-assigned work, or even crying for dead loved ones. Former businessmen and bureaucrats were hunted down and killed along with their entire families; the Khmer Rouge feared that they held beliefs that could lead them to oppose the regime. A few Khmer Rouge loyalists were even killed for failing to find enough 'counter-revolutionaries' to execute.
When Cambodian socialists began to rebel in the eastern zone of Cambodia, Pol Pot ordered his armies to exterminate the 1.5 million eastern Cambodians whom he branded as "Cambodians with Vietnamese minds". The purge was carried out speedily and efficiently: Pol Pot's soldiers killed at least 100,000 and as many as 250,000 eastern Cambodians within a month of deporting them to execution sites in the Central, North and North-Western Zones, making it the bloodiest episode of mass murder under Pol Pot's regime.
Religious institutions were not spared either; religion was persecuted so viciously that the vast majority of Cambodia's historic architecture - 95% of Cambodia's Buddhist temples - was completely destroyed.
Ben Kiernan estimates that 1.671 million to 1.871 million Cambodians died as a result of Khmer Rouge policy, or between 21% and 24% of Cambodia's 1975 population. A study by French demographer Marek Sliwinski calculated slightly fewer than 2 million unnatural deaths under the Khmer Rouge out of a 1975 Cambodian population of 7.8 million; 33.5% of Cambodian men died under the Khmer Rouge compared to 15.7% of Cambodian women. According to a 2001 academic source, the most widely accepted estimates of excess deaths under the Khmer Rouge range from 1.5 million to 2 million, although figures as low as 1 million and as high as 3 million have been cited; conventionally accepted estimates of deaths due to Khmer Rouge executions range from 500,000 to 1 million, "a third to one half of excess mortality during the period." However, a 2013 academic source (citing research from 2009) indicates that execution may have accounted for as much as 60% of the total, with 23,745 mass graves containing approximately 1.3 million suspected victims of execution. While considerably higher than earlier and more widely accepted estimates of Khmer Rouge executions, the Documentation Center of Cambodia (DC-Cam)'s Craig Etcheson defended such estimates of over one million executions as "plausible, given the nature of the mass grave and DC-Cam's methods, which are more likely to produce an under-count of bodies rather than an over-estimate." Demographer Patrick Heuveline estimated that between 1.17 million and 3.42 million Cambodians died unnatural deaths between 1970 and 1979, with between 150,000 and 300,000 of those deaths occurring during the civil war. Heuveline's central estimate is 2.52 million excess deaths, of which 1.4 million were the direct result of violence. Despite being based on a house-to-house survey of Cambodians, the estimate of 3.3 million deaths promulgated by the Khmer Rouge's successor regime, the People's Republic of Kampuchea (PRK), is generally considered to be an exaggeration; among other methodological errors, the PRK authorities added the estimated number of victims that had been found in the partially-exhumed mass graves to the raw survey results, meaning that some victims would have been double-counted.
An estimated 300,000 Cambodians starved to death between 1979 and 1980, largely as a result of the after-effects of Khmer Rouge policies.
Vietnamese occupation and the PRK (1979–93)
On 10 January 1979, after the Vietnamese army and the KUFNS (Kampuchean United Front for National Salvation) invaded Cambodia and overthrew the Khmer Rouge, the new People's Republic of Kampuchea (PRK) was established with Heng Samrin as head of state. Pol Pot's Khmer Rouge forces retreated rapidly to the jungles near the Thai border. The Khmer Rouge and the PRK began a costly struggle that played into the hands of the larger powers China, the United States and the Soviet Union. The Khmer People's Revolutionary Party's rule gave rise to a guerrilla movement of three major resistance groups – the FUNCINPEC (Front Uni National pour un Cambodge Indépendant, Neutre, Pacifique, et Coopératif), the KPNLF (Khmer People's National Liberation Front) and the PDK (Party of Democratic Kampuchea, the Khmer Rouge under the nominal presidency of Khieu Samphan). "All held dissenting perceptions concerning the purposes and modalities of Cambodia's future". The civil war displaced 600,000 Cambodians, who fled to refugee camps along the border with Thailand, and tens of thousands of people were murdered throughout the country.
Peace efforts began in Paris in 1989 under the State of Cambodia, culminating two years later, in October 1991, in a comprehensive peace settlement. The United Nations was given a mandate to enforce a ceasefire and to deal with refugees and disarmament, exercised through the United Nations Transitional Authority in Cambodia (UNTAC).
Modern Cambodia (1993–present)
On 23 October 1991, the Paris Conference reconvened to sign a comprehensive settlement giving the UN full authority to supervise a cease-fire, repatriate the displaced Khmer along the border with Thailand, disarm and demobilise the factional armies, and prepare the country for free and fair elections. Prince Sihanouk, President of the Supreme National Council of Cambodia (SNC), and other members of the SNC returned to Phnom Penh in November 1991, to begin the resettlement process in Cambodia. The UN Advance Mission for Cambodia (UNAMIC) was deployed at the same time to maintain liaison among the factions and begin demining operations to expedite the repatriation of approximately 370,000 Cambodians from Thailand.
In March 1992, the UN Transitional Authority in Cambodia (UNTAC) arrived in Cambodia to begin implementation of the UN settlement plan under Yasushi Akashi, the Special Representative of the UN Secretary-General. UNTAC grew into a 22,000-strong civilian and military peacekeeping force tasked with ensuring the conduct of free and fair elections for a constituent assembly.
Over 4 million Cambodians (about 90% of eligible voters) participated in the May 1993 elections. Pre-election violence and intimidation were widespread, caused by SOC (State of Cambodia – made up largely of former PDK cadre) security forces, mostly against the FUNCINPEC and BLDP parties according to UNTAC. The Khmer Rouge or Party of Democratic Kampuchea (PDK), whose forces were never actually disarmed or demobilized, blocked local access to polling places. The royalist Funcinpec Party of Prince Ranariddh (son of Norodom Sihanouk) was the top vote recipient with 45.5% of the vote, followed by Hun Sen's Cambodian People's Party and the Buddhist Liberal Democratic Party. Funcinpec then entered into a coalition with the other parties that had participated in the election, resulting in a coalition government between the Cambodian People's Party and FUNCINPEC with two co-prime ministers – Hun Sen, since 1985 the prime minister in the Communist government, and Norodom Ranariddh.
The parties represented in the 120-member assembly proceeded to draft and approve a new constitution, which was promulgated 24 September 1993. It established a multiparty liberal democracy in the framework of a constitutional monarchy, with the former Prince Sihanouk elevated to King. Prince Ranariddh and Hun Sen became First and Second Prime Ministers, respectively, in the Royal Cambodian Government (RGC). The constitution provides for a wide range of internationally recognised human rights.
Hun Sen and his government have seen much controversy. Hun Sen, a former Khmer Rouge commander originally installed by the Vietnamese, maintained his strongman position after the Vietnamese left the country through violence and oppression when deemed necessary. In 1997, fearing the growing power of his co-prime minister, Prince Norodom Ranariddh, Hun Sen launched a coup, using the army to purge Ranariddh and his supporters. Ranariddh was ousted and fled to Paris, while other opponents of Hun Sen were arrested, tortured and, in some cases, summarily executed.
On 4 October 2004, the Cambodian National Assembly ratified an agreement with the United Nations on the establishment of a tribunal to try senior leaders responsible for the atrocities committed by the Khmer Rouge. International donor countries pledged a US$43 million share of the three-year tribunal budget, while Cambodia contributed US$13.3 million. The tribunal has sentenced several senior Khmer Rouge leaders since 2008.
Cambodia remains littered with countless land mines, planted indiscriminately by all warring parties during the decades of war and upheaval.
The Cambodia National Rescue Party was dissolved ahead of the 2018 Cambodian general election, and the ruling Cambodian People's Party also enacted tighter curbs on mass media. The CPP won every seat in the National Assembly without major opposition, effectively solidifying de facto one-party rule in the country.
Cambodia's longtime prime minister Hun Sen, one of the world's longest-serving leaders, maintained a very firm grip on power and has been accused of cracking down on opponents and critics. His Cambodian People's Party (CPP) has been in power since 1979. In December 2021, Hun Sen announced his support for his son Hun Manet to succeed him after the next election, which took place in July 2023.
In the July 2023 election, the ruling Cambodian People's Party (CPP) won easily in a landslide widely regarded as flawed, following the disqualification of Cambodia's most significant opposition party, the Candlelight Party. On 22 August 2023, Hun Manet was sworn in as the new Cambodian prime minister.
See also
References
Works cited
Further reading
Chanda, Nayan. "China and Cambodia: In the mirror of history." Asia Pacific Review 9.2 (2002): 1-11.
Chandler, David. A history of Cambodia (4th ed. 2009) online.
Corfield, Justin. The history of Cambodia (ABC-CLIO, 2009).
Herz, Martin F. Short History of Cambodia (1958) online
Slocomb, Margaret. An economic history of Cambodia in the twentieth century (National University of Singapore Press, 2010).
Strangio, Sebastian. Cambodia: From Pol Pot to Hun Sen and Beyond (2020)
External links
Records of the United Nations Advance Mission in Cambodia (UNAMIC) (1991-1992) at the United Nations Archives
Constitution of Cambodia
State Department Background Note: Cambodia
Summary of UNTAC mission
History of Cambodian Civil War from the Dean Peter Krogh Foreign Affairs Digital Archives
Cambodia under Sihanouk, 1954–70
Selective Mortality During the Khmer Rouge Period in Cambodia
Crossroads in Cambodia: The United Nation's responsibility to withdraw involvement from the establishment of a Cambodian Tribunal to prosecute the Khmer Rouge
BBC article
Geography of Cambodia
Cambodia is a country in mainland Southeast Asia. It borders Thailand, Laos, Vietnam and the Gulf of Thailand and covers a total area of approximately . The country is situated entirely within the tropical Indomalayan realm and the Indochina Time zone (ICT).
Cambodia's main geographical features are the low-lying Central Plain, which includes the Tonlé Sap basin, the lower Mekong River flood-plains and the Bassac River plain, surrounded by mountain ranges to the north, east, south-west and south. The central lowlands extend into Vietnam to the south-east. The south and south-west of the country form a long coast on the Gulf of Thailand, characterized by sizable mangrove marshes, peninsulas, sandy beaches, headlands and bays. Cambodia's territorial waters contain over 50 islands. The highest peak is Phnom Aural, sitting above sea level.
The landmass is bisected by the Mekong River, which at is the longest river in Cambodia. After extensive rapids, turbulent sections and cataracts in Laos, the river enters the country at Stung Treng Province; it is predominantly calm and navigable throughout the year as it widens considerably in the lowlands. The Mekong's waters disperse into the surrounding wetlands of central Cambodia and strongly affect the seasonal nature of the Tonlé Sap lake.
Two thirds of the country's population live in the lowlands, where the rich sediment deposited during the Mekong's annual flooding makes the agricultural land highly fertile. Because deforestation and over-exploitation have affected Cambodia only in recent decades, its forests, low mountain ranges and local eco-regions still retain much of their natural potential. Although the country remains home to the largest areas of contiguous and intact forest in mainland Southeast Asia, serious environmental problems persist and accumulate, closely related to rapid population growth, uncontrolled globalization and ineffective administration.
The majority of the country lies within the tropical savanna climate zone, while the coastal areas in the south and west receive noticeably more, and steadier, rain before and during the wet season. These areas constitute the easternmost fringes of the south-west monsoon and fall within the tropical monsoon climate. Countrywide there are two seasons of relatively equal length, defined by varying precipitation, as temperatures and humidity are generally high and steady throughout the entire year.
Geological development
Mainland Southeast Asia consists of allochthonous continental blocks from Gondwanaland. These include the South China, Indochina, Sibumasu, and West Burma blocks, which amalgamated to form the Southeast Asian continent during the Paleozoic and Mesozoic periods.
The current geological structure of South China and South-East Asia is interpreted as the response to the "Indosinian" collision in South-East Asia during the Carboniferous. The Indosinian orogeny was followed by extension of the Indo-Chinese block, the formation of rift basins and thermal subsidence during the early Triassic.
The Indochina continental block, which is separated from the South China Block by the Jinshajiang-Ailaoshan Suture zone, is an amalgamation of the Viet-Lao, Khorat-Kontum, Uttaradit (UTD), and Chiang Mai-West Kachin terranes, all of which are separated by suture zones or ductile shear zones.
The Khorat-Kontum terrane, which includes western Laos, Cambodia and southern Vietnam, consists of the Kontum metamorphic complex, Paleozoic shallow marine deposits, upper Permian arc volcanic rocks and Mesozoic terrigenous sedimentary rocks.
The central plains consist mainly of Quaternary sands, loam and clay, while most of the northern mountain regions and the coastal region are largely composed of Cretaceous granite, Triassic stones and Jurassic sandstone formations.
General topography
Bowl- or saucer-shaped, Cambodia occupies the south-western part of the Indochinese peninsula; its landmass and marine territory are situated entirely within the tropics.
The bowl's bottom represents Cambodia's interior, about 75 percent of the country, consisting of the alluvial flood-plains of the Tonlé Sap basin, the lower Mekong River and the Bassac River plain, whose waters feed the large and almost centrally located wetlands. As people have preferred to settle in these fertile and easily accessible central lowlands, major transformations and widespread cultivation through wet-rice agriculture have, over the centuries, shaped the landscape into distinctive regional cultivated lands.
Cultivated plants such as sugar palms, coconut trees and banana groves almost exclusively skirt the extensive rice paddies, as natural vegetation is confined to elevated lands and areas near waterways. The Mekong traverses the country from north to south-east, where the low-lying plains extend into Vietnam and reach the South China Sea at the Mekong Delta region.
Cambodia's low mountain ranges - the walls of the bowl - remain formidably forested, as substantial infrastructural development and economic exploitation, particularly of remote areas, are only recent. The country is fringed to the north by the Dangrek Mountains plateau, bordering Thailand and Laos, to the north-east by the Annamite Range, in the south-west by the Cardamom Mountains and in the south by the Elephant Mountains. The highlands to the north-east and east merge into the Central Highlands and the Mekong Delta lowlands of Vietnam.
The heavily indented coastline at the Gulf of Thailand features 60 offshore islands, which dot the territorial waters and locally merge with tidal mangrove marshes - the environmental basis for a remarkable range of marine and coastal eco-regions.
Soils
"Sandy materials cover a large proportion of the landscape of Cambodia, on account of the siliceous sedimentary formations that underlie much of the Kingdom. Mesozoic sandstone dominates most of the basement geology in Cambodia and hence has a dominating influence on the properties of upland soils. Arenosols (sandy soils featuring very weak or no soil development) are mapped on only 1.6% of the land area."
"Sandy surface textures are more prevalent than the deep sandy soils that fit the definition for Arenosols. Sandy textured profiles are common amongst the most prevalent soil groups, including Acrisols and Leptosols. The Acrisols are the most prevalent soil group occupying the lowlands - nearly half of the land area of Cambodia. Low fertility and toxic amounts of aluminium pose limitations to its agricultural use, crops that can be successfully cultivated include rubber tree, oil palm, coffee and sugar cane.
The main subgroups are: Gleyic Acrisols (20.5%, Haplic Acrisols (13.3%), Plinthic Acrisol (8.7%) and Ferric Acrisol (6.3%)."
Geographical extremes
Northernmost point: Ta Veaeng District, Rattanakiri Province ()
Southernmost point: Koh Poulo Wai, Kampot Province ()
Easternmost point: Ou Ya Dav District, Rattanakiri Province ()
Westernmost point: Malai District, Banteay Meanchey Province ()
Regions
Central plain
The vast, interconnected alluvial and lacustrine Cambodian flood-plain is a geologically relatively recent depression where the sediments of the Mekong and its tributaries accumulate as waters are subject to frequent course changes. The area covers . The Tonlé Sap lake and river system occupies the lowest area. The Tonle Sap river is a waterway that branches off the Mekong near Phnom Penh in a north-westerly direction and meets the Tonle Sap lake after around . Its flow reverses direction every year, driven by the greatly varying amounts of water carried by the Mekong over the course of a year and by the impact of the monsoonal rains, which coincide with the river's maximum.
The plains of the Mekong and Tonle Sap basin are confined to the north by the Dangrek and Central Annamite Mountains, and to the south by the Cardamom Mountains and Elephant Mountains. The plains completely surround the Tonle Sap Lake in the western half of the country and wind their way through the middle of the country following the course of the Mekong River. The two basins actually form a single body of water, which affects about 75% of Cambodia's land area.
Flow reversal
The Mekong river and its tributaries carry increasing volumes of water in the northern-hemisphere spring (May), mainly as a result of melting snow. By the time the Mekong enters Cambodia, over 95% of its waters have already joined the river; it widens and inundates large areas.
At the plain's deepest point - the Tonle Sap - the flooded area varies from a low, with a depth of around 1 meter, at the end of the dry season (April) to a maximum, with a depth of up to 9 meters, in October/November. The flooded area grew even larger during 2000, when some of the worst flood conditions on record caused over 800 deaths in Cambodia and Vietnam.
Inflow starts in May/June, reaches maximum rates of around 10,000 m³/s by late August and ends in October/November, amplified by the precipitation of the annual monsoon. The lake reaches its maximum size in November, around the time the monsoon ceases. As the Mekong approaches its annual minimum and its water level falls below that of the inundated Tonle Sap lake, the Tonle Sap river and the surrounding wetlands, the water of the lake's basin drains via the Tonle Sap river back into the Mekong.
As a result, the Tonle Sap River (length around ) flows for six months a year from south-east (Mekong) to north-west (lake) and for six months a year in the opposite direction. The mean annual reverse flow volume in the Tonle Sap is , or about half of the maximum lake volume. A further 10% is estimated to enter the system by overland flow from the Mekong. The Mekong branches off into several arms near Phnom Penh and reaches Vietnamese territory south of Koh Thom and Loek Daek districts of Kandal Province.
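The seasonal arithmetic behind these figures can be made concrete with a short sketch. The following Python snippet is a minimal back-of-the-envelope calculation using assumed, illustrative numbers; only the late-August peak of roughly 10,000 m³/s comes from the text above, and everything else (season length, mean flow) is an assumption, since the article's own volume figures are elided. It shows how a mean flow rate sustained over an inflow season converts into a total volume in cubic kilometres.

SECONDS_PER_DAY = 86_400

def seasonal_volume_km3(mean_flow_m3_per_s: float, days: float) -> float:
    """Convert a mean flow rate (m³/s) sustained over a number of days
    into a total volume in cubic kilometres (1 km³ = 1e9 m³)."""
    total_m3 = mean_flow_m3_per_s * days * SECONDS_PER_DAY
    return total_m3 / 1e9

if __name__ == "__main__":
    # Assumption: an inflow season of roughly 150 days (May/June to
    # October/November) with an assumed mean flow of 4,000 m³/s, well below
    # the stated late-August peak of about 10,000 m³/s.
    volume = seasonal_volume_km3(mean_flow_m3_per_s=4_000, days=150)
    print(f"Approximate seasonal inflow: {volume:.0f} km³")
    # With these assumed values the result is on the order of 50 km³; the
    # reverse-flow volume given in the article is elided, so this serves
    # only to illustrate the unit conversion, not to reproduce that figure.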
Southern Mountains
This region represents the eastern part of the original extent of the wet evergreen forests that cover the Cardamom and Elephant Mountains in south-west Cambodia and the mountains east of Bangkok in Thailand.
The densely wooded hills receive rainfall of annually on their western slopes (which are subject to the South-West monsoons) but only on their eastern - rain shadow - slopes.
The Cardamom/Krâvanh Mountains
The Cardamom (Krâvanh) Mountains occupy Koh Kong Province and Kampong Speu Province, running in a north-western to south-eastern direction and rising to more than . The highest mountain of Cambodia, Phnom Aural, is located in Aoral District in Kampong Speu Province.
Together with the 'Soi Dao Mountains' in the north-western part of Chanthaburi Province, Thailand, the Cardamom Mountains form the Cardamom Mountains Moist Forests ecoregion, which is considered one of the most species-rich and intact natural habitats in the region. The climate, size, inaccessibility and seclusion of the mountains have allowed a rich variety of wildlife to thrive. The Cardamom and Elephant Mountains have yet to be fully researched and documented.
The Elephant Mountains
Chuŏr Phnum Dâmrei - A north-south-trending range of high hills, an extension of the Cardamom/Krâvanh Mountains, in south-eastern Cambodia, rising to elevations of between 500 and 1,000 meters. Extending north from the Gulf of Thailand, they reach a high point in the Bok Koŭ ridge at Mount Bokor near the sea.
To the south-west of the Southern mountain ranges extends a narrow coastal plain that contains the Kampong Saom Bay area and the Sihanoukville peninsula, facing the Gulf of Thailand.
Northern Mountains
The Dangrek Mountains
A forested range of hills averaging , dividing Thailand from Cambodia, mainly formed of massive sandstone with slate and silt. A few characteristic basalt hills are located on the northern side of the mountain chain. This east–west-trending range extends from the Mekong River westward for approximately , merging with the highland area near San Kamphaeng, Thailand. Essentially the southern escarpment of the sandstone Khorat Plateau of northeastern Thailand, the Dângrêk range slopes gradually northward to the Mun River in Thailand but falls more abruptly in the south to the Cambodian plain. Its highest point is .
The watershed along the escarpment in general terms marks the boundary between Thailand and Cambodia, although there are exceptions. The region is covered in dry evergreen forest, mixed dipterocarp forest, and deciduous dipterocarp forests. Tree species such as Pterocarpus macrocarpus, Shorea siamensis and Xylia xylocarpa var. kerrii dominate. Illegal logging is an issue on both the Thai and the Cambodian side, leaving large stretches of hills denuded; vulnerable tree species such as Dalbergia cochinchinensis have been affected. Forest fires are common during the dry season.
Annamite Range
Lying to the east of the Mekong River, the long chain of mountains called the Annamite Mountains of Indochina and the lowlands that surround them make up the Greater Annamites ecoregion. Annual levels of rainfall vary across the ecoregion. Mean annual temperatures are about . This eco-region contains some of the last relatively intact moist forests in Indochina. Moisture-laden monsoon winds that blow in from the Gulf of Tonkin ensure permanently high air humidity. Plants and animals adapted to moist conditions have sought refuge here and evolved into highly specialized types found nowhere else on Earth.
Ethnically diverse
More than 30 ethnic groups of indigenous people live in the Annamites, each with their distinctive and traditional music, language, dress and customs. The natural resources of the Greater Annamites are vital to all of these people.
Eastern Highlands
Tall grasses and deciduous forests cover the ground east of the Mekong River in Mondulkiri, where the transitional plains merge with the eastern highlands at altitudes from . The landscape has suffered from rubber farming, logging and particularly mining, although sizable areas of pristine jungle survive, which are home to rare and endemic wildlife.
Coast
Cambodia's coastal area covers , distributed among four provinces: Sihanoukville, Kampot, Koh Kong and Kep provinces. The total length of the Cambodian coastline is disputed: alongside the most widely accepted figure, a 1997 survey by the DANIDA organization announced a different length, in 1973 the Oil Authority found yet another, and the Food and Agriculture Organization claims a further length in one of its studies.
The southern mountain ranges drain to the south and west towards the shallow sea. Sediments on the continental shelf are the basis for extensive mangrove marshes, in particular in Koh Kong Province and the Ream National Park.
Islands
Cambodia's islands fall under the administration of the four coastal provinces. "There are 60 islands in Cambodia's coastal waters. They include 23 in Koh Kong province, 2 in Kampot province, 22 in Sihanoukville and 13 in Kep city.[sic]" Most islands are, apart from the two small groups of outer islands, in relative proximity to the coast. The islands and the coastal region of Koh Kong Province are mainly composed of upper Jurassic and lower Cretaceous sandstone massifs. The north-westernmost islands, near and around the Kaoh Pao river delta (Prek Kaoh Pao), consist to a great extent of estuarine and riverine sediments and are very flat and engulfed in contiguous mangrove marshes.
Climate
Cambodia's climate, like that of much of the rest of mainland Southeast Asia, is dominated by monsoons, which are known as tropical wet and dry because of the distinctly marked seasonal differences. The monsoonal air-flows are caused by annual alternating high pressure and low pressure over the Central Asian landmass. In summer, moisture-laden air—the southwest monsoon—is drawn landward from the Indian Ocean.
The flow is reversed during the winter, and the northeast monsoon sends back dry air. The southwest monsoon brings the rainy season from mid-May to mid-September or to early October, and the northeast monsoon flow of drier and cooler air lasts from early November to March. Temperatures are fairly uniform throughout the Tonlé Sap Basin area, with only small variations from the average annual mean of around .
The maximum mean is about ; the minimum mean, about . Maximum temperatures higher than are common, however, and just before the start of the rainy season they may rise to more than . Minimum night temperatures sporadically fall below in January, the coldest month. May is the warmest month, although it is strongly influenced by the beginning of the wet season, as the area constitutes the easternmost fringe of the south-west monsoon. Tropical cyclones only rarely cause damage in Cambodia.
The total annual rainfall average is between , and the heaviest amounts fall in the southeast. Rainfall from April to September in the Tonlé Sap Basin-Mekong Lowlands area averages annually, but the amount varies considerably from year to year. Rainfall around the basin increases with elevation. It is heaviest in the mountains along the coast in the southwest, which receive from to more than of precipitation annually as the southwest monsoon reaches the coast.
This area of greatest rainfall drains mostly to the sea; only a small quantity goes into the rivers flowing into the basin. Relative humidity is high throughout the entire year, usually exceeding 90%. During the dry season, daytime humidity averages around 50 percent or slightly lower, climbing to about 90% during the rainy season.
Hydrology
The Mekong River and its tributaries comprise one of the largest river systems in the world. The central Tonle Sap, the Great Lake, has several input rivers, the most important being the Tonle Sap River, which contributes 62% of the total water supply during the rainy season. Direct rainfall on the lake and the other rivers in the sub-basin contribute the remaining 38%. Major rivers are the Sen River, Sreng River, Stung Pouthisat River, Sisophon River, Mongkol Borei River, and Sangkae River.
Smaller rivers in the southeast, the Cardamom Mountains and the Elephant Range form separate drainage divides. To the east of the divide, rivers flow into the Tonle Sap, while in the south-west rivers flow into the Gulf of Thailand. Toward the southern slopes of the Elephant Mountains, small rivers flow south-eastward on the eastern side of the divide.
The Mekong River flows southward from the Cambodia-Laos border to a point south of Kratié (town), where it turns west for about and then turns southwest towards Phnom Penh. Extensive rapids run north of Kratie city. From Kampong Cham Province the gradient slopes very gently, and inundation of areas along the river occurs at flood stage, from June through November, through breaks in the natural levees that have built up along its course. At Phnom Penh four major water courses meet at a point called the Chattomukh (Four Faces). The Mekong River flows in from the northeast, and the Tonle Sap river, emanating from the Tonle Sap lake, flows in from the northwest. They divide into two parallel channels, the Mekong River proper and the Bassac River, and flow independently through the delta areas of Cambodia and Vietnam to the South China Sea.
The flow of water into the Tonle Sap is seasonal. In spring, the flow of the Mekong River, fed by monsoon rains, increases to a point where its outlets through the delta cannot handle the enormous volume of water. At this point, the water pushes northward up the Tonle Sap river and empties into the Tonle Sap lake, thereby increasing the size of the lake from about to about at the height of the flooding. After the Mekong's waters crest — when its downstream channels can handle the volume of water — the flow reverses, and water flows out of the engorged lake.
As the level of the Tonle Sap retreats, it deposits a new layer of sediment. The annual flooding, combined with poor drainage immediately around the lake, transforms the surrounding area into marshlands, unusable for agricultural purposes during the dry season. The sediment deposited into the lake during the Mekong's flood stage appears to be greater than the quantity carried away later by the Tonle Sap River. Gradual silting of the lake would seem to be occurring; during low-water level, it is only about deep, while at flood stage it is between deep.
Vegetation & ecoregions
Cambodia has one of the highest levels of forest cover in the region, as the interdependence of its geography and hydrology makes it rich in natural resources and biological diversity - among the bio-richest countries in Southeast Asia. The Royal Government of Cambodia estimates that Cambodia contains approximately 10.36 million hectares of forest cover, representing approximately 57.07% of Cambodia's land area (2011). International observers and independent sources, however, provide rather different numbers. Most sources agree that deforestation in Cambodia, the loss of seasonal wetlands and habitat destruction - among countless minor factors - correlate with the absence of strict administrative control and indifference in law enforcement, not only in Cambodia but in the entire region.
Figures and assessments are numerous, as are the available sources, and they provide a wide range for interpretation. About 1% of the forest cover is planted forest. Overall, Cambodia's forests contain an estimated 464 million metric tonnes of carbon stock in living forest biomass. Approximately 40% of Cambodia's forests have some level of protection, while one of the Cambodia Millennium Development Goals targets is to achieve 60% forest cover by 2015.
According to Forestry Administration statistics, a total of 380,000 hectares of forest were cleared between 2002 and 2005/2006 - a deforestation rate of 0.5% per year. The main cause of deforestation has been determined to be large-scale agricultural expansion.
Southern Annamites Montane Rain Forests ecoregion
The Southern Annamites Montane Rain Forests ecoregion of the montane forests of Kontuey Nea, "the dragon's tail", in the remote north-east of Cambodia, where the boundaries of Cambodia, Laos, and Vietnam meet, is remarkably rich in biodiversity. The relatively intact forests occupy a broad topographic range - from lowlands with wet evergreen forests to montane habitats with evergreen hardwood and conifer forests. The complex geological, topographic and climatic (rainfall and temperature) facets that characterize the region make the forest structure and composition unique and highly variable. There is an unusually high number of near-endemic and endemic species among the many species found in the area. The entire eco-region has a size of .
The Great Lake ecosystem
The Tonle Sap, also known as the Great Lake in central Cambodia is the largest freshwater lake in Southeast Asia and one of the richest inland fishing grounds in the world. The Lake functions as a natural flood water reservoir for the Mekong system as a whole and therefore is an important source of water for the Mekong Delta during the dry season. The ecosystem has developed as a result of the Mekong’s seasonal flow fluctuations.
A belt of freshwater mangroves known as the "flooded forest" surrounds the lake. The floodplains in turn are surrounded by low hills, covered with evergreen seasonal tropical forest with substantial dipterocarp vegetation or deciduous dry forest. The eco-region consists of a mosaic of habitats for a great number of species. The forest gradually yields to bushes and finally grassland with increasing distance from the lake.
Henri Mouhot: "Travels in the Central Parts of Indo-China" 1864
On higher quality soils or at higher elevations, areas of mixed deciduous forest and semi-evergreen forest occur. This variety of vegetation types accounts for the quantity and diversity of species of the Great Lake ecosystem. Interlocking forest, grassland and marshland patches provide the many facets and refugia for the abundant local wildlife.
The lake’s flooded forest and the surrounding floodplains are of utmost importance for Cambodia's agriculture as the region represents the cultural heart of Cambodia, the center of the national freshwater fishery industry - the nation's primary protein source.
Threats to the lake include widespread pollution, stress through growth of the local population which is dependent on the lake for subsistence and livelihood, over-harvesting of fish and other aquatic - often endangered - species, habitat destruction and potential changes in the hydrology, such as the construction and operation of dams, that disrupt the lake's natural flood cycle. However, concerns that the lake is rapidly filling with sediment seem - according to studies - to be unfounded at the present time.
Wetlands
Wetlands cover more than 30% of Cambodia. In addition to the Mekong River and the Tonle Sap floodplain, there are the Stung Sen River and the coastal Stung Koh Pao and Stung Kep estuaries of Koh Kong Province and Kep Province. The freshwater wetlands of Cambodia represent one of the most diverse ecosystems worldwide. The area's extensive wetland habitats are the product of the annual Mekong maximum, the simultaneous wet season and the drainage paths of a number of minor rivers (see Hydrology above). The numerous and varied wetlands are Cambodia's central and traditional settlement area and the productive environments for rice cultivation, freshwater fisheries, other forms of agriculture and aquaculture, and the constantly growing tourism sector. Considering the eco-region's importance, a variety of plans for local wetland management consolidation exist, with varying degrees of completion.
Coastal habitats
The Cambodian coastline supports mangrove forests of over 30 species - among the most biologically diverse wetlands on earth. The most pristine mangrove forests are found in Koh Kong Province. In addition to mangroves, sea-grass beds extend throughout the coastal areas, especially in Kampot Province, the Sihanoukville Bay Delta and the Kep municipal waters. The meadows are highly productive, but few animals feed directly on the grasses. Those that do tend to be vertebrates such as sea turtles, dabbling ducks and geese.
"With their roots deep in mud, jagged and gnarled mangrove trees are able to grow in the brackish wetlands between land and sea where other plant life cannot survive. The trees offer refuge and nursery grounds for fish, crabs, shrimp, and mollusks. They are nesting - and migratory sites for hundreds of bird species. They also provide homes for monkeys, lizards, sea turtles, and many other animals as well as countless insects."
"Until relatively recently, the mangroves of Koh Kong, Cambodia have remained relatively intact. This is partly because of the region’s location — it is an isolated, inaccessible place — and because decades of war and conflict perversely protected the forests from over-exploitation. Local people, however, tended to use the forest's sustainability, for food, fuel, medicine, building materials, and other basic needs."
Fauna
Cambodia is home to a wide array of wildlife. There are 212 mammal species, 536 bird species, 176 reptile species (including 89 subspecies), 850 freshwater fish species (Tonlé Sap Lake area), and 435 marine fish species.
Many of the country's species are recognized by the IUCN or World Conservation Union as threatened, endangered, or critically endangered due to deforestation and habitat destruction, poaching, illegal wildlife trade, farming, fishing, and unauthorized forestry concessions. Intensive poaching may have already driven Cambodia's national animal, the Kouprey, to extinction. Wild tigers, Eld's deer, wild water buffaloes and hog deer are at critically low numbers.
Protected areas
"The 1993 Royal Decree on the Protection of Natural Areas recognized 23 protected areas, which at the time covered more than 18% of the country’s total land area."
Natural parks (sometimes described as ‘national parks’)
Wildlife reserves
Protected scenic view areas (sometimes described as ‘protected landscapes’)
Multi-purpose areas
Political and human geography
Cambodia shares land borders with Vietnam, Thailand and Laos, and has a coastline on the Gulf of Thailand. The capital (reach thani) and provinces (khaet) of Cambodia are first-level administrative divisions. Cambodia is divided into 25 provinces including the capital.
Municipalities and districts are the second-level administrative divisions of Cambodia. The provinces are subdivided into 159 districts and 26 municipalities. The districts and municipalities in turn are further divided into communes (khum) and quarters (sangkat).
Land use
Cambodia, Laos and Vietnam have experienced major changes in land use and land cover over the last two decades. The emergence from Cold War rivalries and recent major economic reforms have resulted in a shift from subsistence agrarian modes of production to market-based agricultural production and industrialized economies, which are heavily integrated into regional and global trade systems.
Regional divisions
Cambodia's boundaries were for the most part based upon those recognized by France and by neighboring countries during the colonial period. The boundary with Thailand runs along the watershed of the Dangrek Mountains, although only in its northern sector. The border with Laos and the border with Vietnam result from French administrative decisions and do not follow major natural features. Border disputes have broken out in the past and do persist between Cambodia and Thailand as well as between Cambodia and Vietnam.
Area and boundaries
Area:
total:
land:
water:
Maritime claims:
territorial sea:
contiguous zone:
exclusive economic zone:
continental shelf:
Elevation extremes:
lowest point:
Gulf of Thailand
highest point:
Phnum Aoral
Border disputes
Cambodian–Thai border dispute
Cambodian–Vietnamese land dispute
Lakes
Tonlé Sap Lake
Yak Loum Crater Lake – Ratanakiri
Natural resources
Oil and natural gas - In addition to the four mining project areas, the offshore oilfield Block A was discovered in 2005 in the Gulf of Thailand; Chevron would operate and hold a 30% interest in Block A, which covers . It was expected to receive a 30-year production permit in the second quarter of 2011.
In late 1969, the Cambodian government granted a permit to a French company to explore for petroleum in the Gulf of Thailand. By 1972 none had been located, and exploration ceased when the Khmer Republic fell in 1975. Subsequent oil and gas discoveries in the Gulf of Thailand and in the South China Sea, however, could spark renewed interest in Cambodia's offshore area, especially because the country is on the same continental shelf as its Southeast Asian oil-producing neighbors.
Timber
Dipterocarpus alatus (chheuteal tan) sawnwood, veneer, plywood
Anisoptera glabra (mersawa, phdiek) sawnwood, veneer, plywood
Hopea odorata (koki) Sawmilling, construction (bridges, boats)
Shorea vulgaris (choë(r) chông) sawmilling, construction (housing)
Heritiera javanica (synonym Tarrietia javanica) sawnwood (decorative, furniture)
Gemstones - Gemstone areas are located in Samlot district of Battambang, Paillin, Ratanakkiri, and Takéo Province
Iron ore - Hematite (Fe2O3), magnetite (Fe3O4) and limonite (2Fe2O3·3H2O) have been found in two areas, one located in Phnom Deck and the other in Koh Keo of Preah Vihear Province and Thalaborivath of Stung Treng Province. According to the General Department of Mineral, the total iron reserves in the Phnom Deck area are estimated at 5 to 6 million tons, and other deposits may add 2 to 3 million tons.
Gold - Gold deposits have been found in five provinces: Kampong Cham (the Rumchek in Memot area), Kampong Thom (Phnom Chi area), Preah Vihear (Phnom Deck in Roveing district), Ratanakiri (Oyadav district) and Mondulkiri.
Bauxite – was found in Battambang Province and Chhlong district in Mondulkiri Province.
Antimony (Sb) – found in Sre Peang area, Pursat Province
Chromium (Cr) – found in Sre Peang area, Pursat Province
Manganese
Phosphates
Hydro-power - Hydroelectric dams: Lower Se San 2 Dam, Stung Treng Dam
Arable land
Marine resources
Total renewable water resources:
(2011)
Freshwater withdrawal (domestic/industrial/agricultural):
Total: /yr (4%/2%/94%)
Per capita: /yr (2006)
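The sector percentages above follow simple proportional arithmetic. The short Python sketch below illustrates how per-sector and per-capita figures would be derived from a total annual withdrawal; the total withdrawal and population used here are assumptions for illustration only, since the actual totals are elided above.

def sector_breakdown(total_km3_per_year, shares):
    """Split a total annual withdrawal (km³/yr) into per-sector volumes."""
    return {sector: total_km3_per_year * share for sector, share in shares.items()}

if __name__ == "__main__":
    # Shares taken from the listing above; the totals below are assumptions.
    shares = {"domestic": 0.04, "industrial": 0.02, "agricultural": 0.94}
    assumed_total_km3 = 2.2       # km³/yr, illustrative assumption
    assumed_population = 14e6     # people, illustrative assumption

    for sector, volume in sector_breakdown(assumed_total_km3, shares).items():
        print(f"{sector}: {volume:.2f} km³/yr")

    per_capita_m3 = assumed_total_km3 * 1e9 / assumed_population
    print(f"per capita: {per_capita_m3:.0f} m³/yr")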
Environmental issues
Natural hazards
Monsoonal rains (June to November)
Mekong flooding
Occasional droughts
Human impact
Issues
Illegal logging activities throughout the country
Rubber tree mono-cultures and strip mining for gold in the eastern highlands
Gem mining in the western region along the border with Thailand
Destruction of mangrove swamps, which threatens natural fisheries; illegal fishing and over-fishing
Large-scale sand mining in river beds and estuaries of Koh Kong's mangrove marshes, which affects the tidal balance
A nascent environmental movement has been noticed by NGOs, and it is gaining strength, as the example of local resistance against the building of a Chinese hydro-electric dam in the Areng Valley shows.
Cambodia has a poor but improving performance in the global Environmental Performance Index (EPI), with an overall ranking of 146 out of 180 countries in 2016. This is among the worst in the Southeast Asian region, ahead of only Laos and Myanmar. The EPI was established in 2001 by the World Economic Forum as a global gauge of how well individual countries perform in implementing the United Nations' Sustainable Development Goals.
The environmental areas where Cambodia performs worst on the EPI (i.e. where its rankings are highest) are air quality (148), water resource management (140) and health impacts of environmental issues (137), with the areas of sanitation, environmental impacts of fisheries and forest management following closely. Cambodia has an unusually large expanse of protected areas, both on land and at sea, with the land-based protections covering about 20% of the country. This secures Cambodia a better-than-average ranking of 61 in relation to biodiversity and habitat, despite the fact that deforestation, illegal logging, construction and poaching are heavily degrading these protected areas and habitats in practice, partly fuelled by the government's placement of economic land concessions and plantations within protected areas.
In November 2017, the U.S. cut funds for clearing unexploded ordnance, including land mines and chemical weapons, which it had dropped on Cambodia during the Vietnam War.
Consequences
Flooding
Deforestation
Soil erosion in rural areas
Declining fish stocks
Decreasing access to clean water
Habitat loss and declining biodiversity
International agreements and conventions
Cambodia is party to the following treaties:
Convention on Biological Diversity
Convention on Climate Change
MARPOL 73/78
Tropical Timber 94
Ramsar Convention on Wetlands
Signed, but not ratified:
Law of the Sea
See also
References
External links
National
Ministry of Land Management Urban Planning and Construction
Forestry Administration
Law on Forestry
law on land use
Ministry of Water Resources and Meteorology
Tonle Sap Authority
Economic Land Concession
Environmental Law
An Assessment of Cambodia’s Draft Environmental Impact Assessment Law
Climate Change Department
International
National Library of France
National Aquaculture Legislation
Cambodia Forestry Outlook Study
FAO UN
Mekong River Commission
National Adaptation Programme of Action to Climate Change (NAPA)
World reference base for soil resources
Further reading
Stories from the Mekong
Cardamoms 'one of the crown jewels...
Kampot's forgotten Karst formations
Politics of Cambodia
The politics of Cambodia are defined within the framework of a constitutional monarchy, in which the king serves as the head of state and the prime minister is the head of government. The collapse of communism set in motion events that led to the withdrawal of the Vietnamese armed forces, which had established their presence in the country since the fall of the Khmer Rouge. The 1993 constitution, which is currently in force, was promulgated as a result of the 1991 Paris Peace Agreements, followed by elections organized under the aegis of the United Nations Transitional Authority in Cambodia. The constitution declares Cambodia to be an "independent, sovereign, peaceful, permanently neutral and non-aligned country." The constitution also proclaims a liberal, multiparty democracy in which powers are devolved to the executive, the judiciary and the legislature. In practice, however, there was no effective opposition to Prime Minister Hun Sen, who held power from 1985 until 2023. His Cambodian People's Party won all 125 seats in the National Assembly in 2018 after the banning of the opposition CNRP; the KNLF, exiled in Denmark, became the main opposition party after the CNRP was dissolved. There were no international observers during the communal election in 2022 or the national election in 2023. The government is considered to be autocratic.
Executive power is exercised by the Royal Government, on behalf of and with the consent of the monarch. The government is constituted of the Council of Ministers, headed by the prime minister. The prime minister is aided in his functions by members of the Council such as deputy prime ministers, senior ministers and other ministers. Legislative power is vested in a bicameral legislature composed of the National Assembly, which has the power to vote on draft law, and the Senate, that has the power of review. Upon passage of legislation through the two chambers, the draft law is presented to the monarch for signing and promulgation. The judiciary is tasked with the protection of rights and liberties of the citizens, and with being an impartial arbiter of disputes. The Supreme Court is the highest court of the country and takes appeals from lower courts on questions of law. A separate body called the Constitutional Council was established to provide interpretations of the constitution and the laws, and also to resolve disputes related to election of members of the legislature.
The Cambodian People's Party has dominated the political landscape since the 1997 coup d'état in Phnom Penh. Other prominent political parties include the royalist FUNCINPEC and the erstwhile Cambodia National Rescue Party that was dissolved by the Supreme Court in 2017. Comparative political scientists Steven Levitsky and Lucan Way have described Cambodia as a "competitive authoritarian regime", a hybrid regime type with important characteristics of both democracy and authoritarianism.
In the July 2023 election, the ruling Cambodian People's Party (CPP) won easily in a landslide widely regarded as flawed, following the disqualification of Cambodia's most significant opposition party, the Candlelight Party. On 22 August 2023, Hun Manet, son of Hun Sen, was sworn in as the new Cambodian prime minister.
Legal framework
Cambodia is a constitutional monarchy with a unitary structure and a parliamentary form of government. The constitution, which prescribes the governing framework, was promulgated in September 1993 by the Constituent Assembly that resulted from the 1993 general election conducted under the auspices of the United Nations Transitional Authority in Cambodia (UNTAC). The assembly adopted the basic principles and measures mandated under the Paris Peace Agreements into the text of the constitution. Assimilated into the governing charter, these provisions place the constitution as the supreme law of the land; declare Cambodia's status as a sovereign, independent and neutral state; enshrine a liberal, multi-party democracy with fair and periodic elections; guarantee respect for human rights; and provide for an independent judiciary.
The brutality of the Democratic Kampuchea regime had especially necessitated the inclusion of provisions concerning human rights in order to prevent a return to the policies and practices of the past. These criteria had been drawn from the Namibian constitution drafting process that took place in 1982. German constitutional law scholar, Jörg Menzel, characterized these benchmarks as the "necessary nucleus of a modern constitutional state." The constitution further sanctifies the status of international law in the issue of human rights by binding Cambodia to "respect" the provisions of human rights treaties adopted by the UN. The 1993 constitution has been amended eight times since its passage – in 1994, 1999, 2001, 2005, 2006, 2008, 2014 and 2018.
Separation of powers
The powers are devolved to three branches of the state: the legislature, the executive and the judiciary, in recognition of the doctrine of separation of powers. Political sovereignty rests with the Cambodian people, who exercise their power through the three arms of the state. The Royal Government, which wields executive power, is directly responsible to the National Assembly. The judiciary, which is an independent power, is tasked with the protection of citizens' rights and liberties. Buddhism is proclaimed as the state religion.
Influences on legal system
The legal system of Cambodia is civil law and has been strongly influenced by the legal heritage of France as a consequence of colonial rule. The Soviet-Vietnamese system dominated the country from 1981 until 1989, and Sri Lankan jurist Basil Fernando argues that its elements are present in the current system as well. The role of customary law, based on Buddhist beliefs and unwritten law drawn from the Angkorean period, is also prevalent.
Market economy
The constitution contains a commitment to the "market economy system", which along with accompanying provisions effects a fundamental change in the role of the state from the past. Security of private property and the right to sell and exchange freely, necessary conditions for the functioning of the market economy, are provided for. The state's powers of expropriation are limited to the extent they serve public interest, to be exercised only when "fair and just" compensation is made in advance. Operating under the slogan Le Cambodge s'aide lui-même or "Cambodia will help itself", one of the earliest undertakings of the Royal Government was to implement programs to ensure the economic rehabilitation of Cambodia and its integration in the regional and global economies. On 10 March 1994, the Royal Government declared an "irreversible and irrevocable" move away from a centrally-planned economy towards a market-oriented economy.
Monarchy
Cambodia is a constitutional monarchy. The king is officially the head of state and is the symbol of unity and "perpetuity" of the nation, as defined by Cambodia's constitution.
From September 24, 1993, through October 7, 2004, Norodom Sihanouk reigned as king, after having previously served in a number of offices (including king) since 1941. Under the constitution, the king has no political power, but as Norodom Sihanouk was revered in the country, his word often carried much influence in the government. The king, often irritated over the conflicts in his government, several times threatened to abdicate unless the political factions in the government got along. This put pressure on the government to solve their differences. This influence of the king was often used to help mediate differences in government.
After the abdication of King Norodom Sihanouk in 2004, he was succeeded by his son Norodom Sihamoni. While the retired king was highly revered in his country for dedicating his lifetime to Cambodia, the current king has spent most of his life abroad in France. Thus, it remains to be seen whether the new king's views will be as highly respected as his father's.
Although in the Khmer language there are many words meaning "king", the word officially used in Khmer (as found in the 1993 Cambodian constitution) is preăhmôhaksăt (Khmer regular script: ព្រះមហាក្សត្រ), which literally means: preăh- ("excellent", cognate of the Pali word vara) -môha- (from Sanskrit, meaning "great", cognate with "maha-" in maharaja) -ksăt ("warrior, ruler", cognate of the Sanskrit word kṣatrá).
On the occasion of King Norodom Sihanouk's retirement in September 2004, the Cambodian National Assembly coined a new word for the retired king: preăhmôhavireăkksăt (Khmer regular script: ព្រះមហាវីរក្សត្រ), where vireăk comes from Sanskrit vīra, meaning "brave or eminent man, hero, chief", cognate of Latin vir and English virile. Preăhmôhavireăkksăt is translated into English as "King-Father", although the word "father" does not appear in the Khmer noun.
As preăhmôhavireăkksăt, Norodom Sihanouk retained many of the prerogatives he formerly held as preăhmôhaksăt and was a highly respected and listened-to figure. Thus, in effect, Cambodia could be described as a country with two Kings during Sihanouk's lifetime: the one who was the head of state, the preăhmôhaksăt Norodom Sihamoni, and the one who was not the head of state, the preăhmôhavireăkksăt Norodom Sihanouk.
Sihanouk died of a pulmonary infarction on October 15, 2012.
Succession to the throne
Unlike in most monarchies, Cambodia's throne is not necessarily hereditary and the king is not allowed to select his own heir. Instead, a new king is chosen by a Royal Council of the Throne, consisting of the president of the National Assembly, the prime minister, the president of the Senate, the first and second vice presidents of the Senate, the first and second vice presidents of the National Assembly, and the chiefs of the Mohanikay and Thammayut monastic orders. The Royal Council meets within a week of the king's death or abdication and selects a new king from a pool of candidates of royal blood.
It has been suggested that Cambodia's ability to appoint a new king peacefully shows that the country's government has stabilized considerably since the turmoil of the 1970s (see History of Cambodia).
Executive branch
The prime minister of Cambodia is a representative of the ruling party of the National Assembly. The prime minister is appointed by the king on the recommendation of the president and vice presidents of the National Assembly, and must receive a vote of confidence from the National Assembly.
The prime minister is officially the head of government in Cambodia. The prime minister appoints a Council of Ministers. Officially, the prime minister's duties include chairing meetings of the Council of Ministers (Cambodia's version of a cabinet) and appointing and leading a government. The prime minister and the government make up Cambodia's executive branch of government.
The current prime minister is Cambodian People's Party (CPP) member Hun Manet. He has held this position since 2023.
The 1998 election was held one year after the CPP staged a bloody coup in Phnom Penh to overthrow the elected prime minister, Prince Norodom Ranariddh, president of the FUNCINPEC party.
Legislative branch
The legislative branch of the Cambodian government is made up of a bicameral parliament.
The National Assembly has 125 members, elected for a five-year term by proportional representation.
The Senate has 61 members. Two of these members are appointed by the king, two are elected by the lower house, and the remaining fifty-seven are elected popularly by "functional constituencies". Members of this house serve a six-year term.
The official duty of the Parliament is to legislate and make laws. Bills passed by the Parliament are presented to the king, who grants them royal assent; the king does not have veto power and thus cannot withhold assent. The National Assembly also has the power to dismiss the prime minister and his government by a two-thirds vote of no confidence.
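As an illustration of the seat arithmetic described above, the sketch below applies the D'Hondt highest-averages method, one common proportional-representation formula, to hypothetical vote totals, and computes the two-thirds no-confidence threshold for a 125-seat chamber. The allocation formula Cambodia actually uses is not specified in this article, so the method, party names, and vote counts are assumptions for illustration only.

```python
from math import ceil

def dhondt(votes: dict, seats: int) -> dict:
    """Allocate seats with the D'Hondt highest-averages method (illustrative only)."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the largest quotient votes / (seats won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical vote counts for a single multi-member constituency.
sample_votes = {"Party A": 120_000, "Party B": 80_000, "Party C": 30_000}
print(dhondt(sample_votes, seats=10))   # {'Party A': 6, 'Party B': 3, 'Party C': 1}

# Two-thirds of the 125-seat National Assembly, the no-confidence threshold.
print(ceil(2 * 125 / 3))                # 84 votes
```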
Senate
The upper house of the Cambodian legislature is called the Senate. It consists of sixty-one members. Two of these members are appointed by the king, two are elected by the lower house of the government, and the remaining fifty-seven are elected popularly by electors from provincial and local governments, in a similar fashion to the Senate of France. Members in this house serve six-year terms.
Prior to 2006, elections had last been held for the Senate in 1999. New elections were due in 2004 but were postponed. On January 22, 2006, 11,352 eligible voters went to the polls and chose their candidates. The election was criticized by local monitoring non-governmental organizations as undemocratic.
The Cambodian People's Party holds forty-three seats in the Senate, constituting a significant majority. The two other major parties holding seats in the Senate are the FUNCINPEC party (holding twelve seats) and the Sam Rainsy Party (holding two seats).
National Assembly
The lower house of the legislature is called the National Assembly. It is made up of 125 members, elected by popular vote to serve a five-year term. Elections for the National Assembly were most recently held in July 2023.
To vote in legislative elections, one must be at least eighteen years of age. However, to be elected to the legislature, one must be at least twenty-five years of age.
The National Assembly is led by a president and two vice presidents who are selected by the assembly members prior to each session.
Following the 2018 general election, the Cambodian People's Party held all 125 seats in the National Assembly.
Political parties and elections
2018 general election results
Judicial branch
The judicial branch is independent of the rest of the government, as specified by the Cambodian Constitution. The highest body of the judicial branch is the Supreme Council of the Magistracy; other, lower courts also exist. Until 1997, Cambodia did not have a judicial branch of government despite the nation's Constitution requiring one. In 2003, Judge Kim Sathavy was put in charge of establishing the first Royal School for Judges and Prosecutors to train a new generation of magistrates and legal clerks for Cambodia.
The main duties of the judiciary are to prosecute criminals, settle lawsuits, and, most importantly, protect the freedoms and rights of Cambodian citizens. However, in reality, the judicial branch in Cambodia is highly corrupt and often serves as a tool of the executive branch to silence civil society and its leaders. There are currently 17 justices on the Supreme Council.
Foreign relations
Cambodia is a member of the ACCT, AsDB, ASEAN, ESCAP, FAO, G-77, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IFAD, IFC, IFRCS, ILO, IMF, IMO, Intelsat (nonsignatory user), Interpol, IOC, ISO (subscriber), ITU, NAM, OPCW, PCA, UN, UNCTAD, UNESCO, UNIDO, UPU, WB, WFTU, WHO, WIPO, WMO, WTO, WToO, WTrO (applicant)
International rankings
Provincial and local governments
Below the central government are 24 provincial and municipal administrations. (In rural areas, first-level administrative divisions are called provinces; in urban areas, they are called municipalities.) The administrations are part of the Ministry of the Interior and their members are appointed by the central government. Provincial and municipal administrations participate in the creation of the national budget; they also issue land titles and license businesses.
Since 2002, commune-level governments (commune councils) have been composed of members directly elected by commune residents every five years.
In practice, the allocation of responsibilities between various levels of government is uncertain. This uncertainty has created additional opportunities for corruption and increased costs for investors.
Citations
References
External links
Global Integrity Report: Cambodia reports on corruption and anti-corruption in Cambodia
Royalty
King of Cambodia, Norodom Sihamoni Official Website of King Norodom Sihamoni
King of Cambodia, Norodom Sihanouk Official Website of former King Norodom Sihanouk
Official
Cambodia.gov.kh Official Royal Government of Cambodia Website (English Version) (Cambodia.gov.kh Khmer Version)
CDC Council for the Development of Cambodia
Conseil Constitutionnel du Cambodge Constitution council of Cambodia
Department of Fisheries
Food Security and Nutrition Information System Cambodia
Ministry of Commerce
Ministry of Culture and Fine Arts
Ministry of Economy and Finance
Ministry of Education, Youth and Sport
Ministry of Environment
Ministry of Posts and Telecommunications
Ministry of Public Works and Transport
Ministry of Tourism
NiDA National Information Communications Technology Development Authority
Ministry of Planning
NIS National Institute of Statistics of Cambodia
Ministry of Interior
N.C.C.T National Committee for Counter Trafficking in person
Other
Constitution of Cambodia
Government of Cambodia
Cambodia |
5432 | https://en.wikipedia.org/wiki/Economy%20of%20Cambodia | Economy of Cambodia | The economy of Cambodia currently follows an open market system (market economy) and has seen rapid economic progress in the last decade. Cambodia had a GDP of $28.54 billion in 2022. Per capita income, although rapidly increasing, is low compared with most neighboring countries. Cambodia's two largest industries are textiles and tourism, while agricultural activities remain the main source of income for many Cambodians living in rural areas. The service sector is heavily concentrated on trading activities and catering-related services. Recently, Cambodia has reported that oil and natural gas reserves have been found offshore.
In 1995, with a GDP of $2.92 billion, the government transformed the country's economic system from a planned economy to its present market-driven system. Following those changes, growth was estimated at 7%, while inflation dropped from 26% in 1994 to only 6% in 1995. Imports increased due to the influx of foreign aid, and exports, particularly from the country's garment industry, also increased. Despite this constant growth, Cambodia's output amounted to only about 0.71% of the ASEAN economy in 2016, compared with its neighbor Indonesia, which contributed 37.62%.
After four years of improving economic performance, Cambodia's economy slowed in 1997–1998 due to the regional economic crisis, civil unrest, and political infighting. Foreign investments declined during this period. Also, in 1998 the main harvest was hit by drought. But in 1999, the first full year of relative peace in 30 years, progress was made on economic reforms and growth resumed at 4%.
Currently, Cambodia's foreign policy focuses on establishing friendly borders with its neighbors (such as Thailand and Vietnam), as well as integrating itself into regional (ASEAN) and global (WTO) trading systems. Some of the obstacles faced by this emerging economy are the need for a better education system and the lack of a skilled workforce, particularly in the poverty-ridden countryside, which struggles with inadequate basic infrastructure. Nonetheless, Cambodia continues to attract investors because of its low wages, plentiful labor, proximity to Asian raw materials, and favorable tax treatment.
Recent economic history
Following its independence from France in 1953, the Cambodian state has undergone five periods of political, social, and economic transformation:
First Kingdom of Cambodia (1953-1970)
Khmer Republic (1970–1975)
Democratic Kampuchea (1975-1982, ousted in 1979); became Coalition Government of Democratic Kampuchea in exile (1982-1993)
People's Republic of Kampuchea (1979-1989), later renamed "State of Cambodia" (1989–1993)
Second Kingdom of Cambodia (1993–present)
In 1989, the State of Cambodia implemented reform policies that transformed the Cambodian economic system from a command economy to an open market one. In line with the economic reformation, private property rights were introduced and state-owned enterprises were privatized. Cambodia also focused on integrating itself into regional and international economic blocs, such as the Association of South East Asian Nations and the World Trade Organization respectively. These policies triggered a growth in the economy, with its national GDP growing at an average of 6.1% before a period of domestic unrest and regional economic instability in 1997 (1997 Asian financial crisis). However, conditions improved and since 1999, the Cambodian economy has continued to grow at an average pace of approximately 6-8% per annum.
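To give a rough sense of what the growth rates cited above imply, the short sketch below compounds a hypothetical starting GDP at 6% and 8% per year over a decade. The $10 billion starting figure is an assumption chosen only to make the arithmetic concrete, not a figure from this article.

```python
def project_gdp(initial_billion: float, annual_rate: float, years: int) -> float:
    """Compound an initial GDP value at a constant annual growth rate."""
    return initial_billion * (1 + annual_rate) ** years

# Hypothetical $10 billion economy growing at the 6% and 8% average rates cited above.
for rate in (0.06, 0.08):
    final = project_gdp(10.0, rate, years=10)
    print(f"{rate:.0%} growth for 10 years: ${final:.1f} billion")
```

At these rates, output roughly doubles in nine to twelve years, which is consistent with the rapid expansion described in this section.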
In 2007, Cambodia's gross domestic product grew by an estimated 18.6%. Garment exports rose by almost 8%, while tourist arrivals increased by nearly 35%. With export growth slowing, 2007 GDP growth was driven largely by consumption and investment. Foreign direct investment (FDI) inflows reached US$600 million (7 percent of GDP), slightly more than the country received in official aid. Domestic investment, driven largely by the private sector, accounted for 23.4 percent of GDP. Export growth, especially to the US, began to slow in late 2007, accompanied by stiffer competition from Vietnam and emerging risks (a slowdown in the US economy and the lifting of safeguards on China's exports). US companies were the fifth-largest investors in Cambodia, with more than $1.2 billion in investments over the period 1997-2007.
Cambodia was severely damaged by the financial crisis of 2007–2008, and its main economic sector, the garment industry, suffered a 23% drop in exports to the United States and Europe. As a result, 60,000 workers were laid off. However, in the last quarter of 2009 and early 2010, conditions were beginning to improve and the Cambodian economy began to recover. Cambodian exports to the US for the first 11 months of 2012 reached $2.49 billion, a 1 per cent increase year-on-year. Its imports of US goods grew 26 per cent for that period, reaching $213 million. Another factor underscoring the potential of the Cambodian economy is the recent halving of its poverty rate. The poverty rate is 20.5 per cent, meaning that approximately 2.8 million people live below the poverty line.
Data
The following table shows the main economic indicators in 1986–2020 (with IMF staff estimates for 2021–2026). Inflation below 5% is in green. The annual unemployment rate is taken from the World Bank, although the International Monetary Fund considers these figures unreliable.
Economic sectors
Garment industry
The garment industry represents the largest portion of Cambodia's manufacturing sector, accounting for 80% of the country's exports. In 2012, exports grew to $4.61 billion, up 8% over 2011. In the first half of 2013, the garment industry reported exports worth $1.56 billion. The sector employs 335,400 workers, of whom 91% are female.
The sector operates largely in the final phase of garment production, that is, turning yarns and fabrics into garments, as the country lacks a strong textile manufacturing base. In 2005, there were fears that the end of the Multi Fibre Arrangement would threaten Cambodia's garment industry, exposing it to stiff competition from China's strong manufacturing capabilities. On the contrary, Cambodia's garment industry at present continues to grow rapidly. This can be attributed to the country's open economic policy, which has drawn large amounts of foreign investment into this sector of the economy.
Garment Factories by Ownership Nationality in 2010:
In 2010, 236 garment export-oriented factories were operating and registered with GMAC, the Garment Manufacturers Association in Cambodia, with 93% being foreign direct investment (FDI).
Cambodia's garment industry is characterized by a small percentage of local ownership. This reflects the deficiency of skilled workers in the country as well as the limited leverage and autonomy Cambodian factories have in strategic decisions. Another characteristic of the industry is the country's competitive advantage as the only country where garment factories are monitored and reported on according to national and international standards.
This has allowed Cambodia to secure its share of quotas for exports to the US through the US-Cambodia Trade Agreement on Textiles and Apparel (1999–2004), which linked market access to labor standards. However, the Cambodian garment industry remains vulnerable to global competition due to a lack of adequate infrastructure, labor unrest, the absence of a domestic textile industry, and almost complete dependence on imported textile material.
GMAC is establishing a specialized training institute to train garment workers. The institute is in Phnom Penh Special Economic Zone and will be completed by late 2016. It aims to train 1,600 garment workers in the first three years and 240 university students each year as part of a separate program.
Agriculture
Agriculture is the traditional mainstay of the Cambodian economy. Agriculture accounted for 90 percent of GDP in 1985 and employed approximately 80 percent of the work force. Rice is the principal commodity.
Major secondary crops include maize, cassava, sweet potatoes, groundnuts, soybeans, sesame seeds, dry beans, and rubber. The principal commercial crop is rubber. In the 1980s it was an important primary commodity, second only to rice, and one of the country's few sources of foreign exchange.
Tourism
In the 1960s, Cambodia was a prominent tourist destination in the Southeast Asian region. Due to protracted periods of civil war, insurgencies, and especially the genocidal regime of the Khmer Rouge (see Khmer Rouge Genocide), Cambodia's tourism industry was reduced to being virtually non-existent. Since the late 1990s, tourism has fast become Cambodia's second-largest industry, just behind garment manufacturing. In 2006, Cambodia's tourism sector generated revenue of US$1.594 billion, approximately 16% of the country's GDP.
Cultural heritage tourism is especially popular in the country, with many foreign tourists visiting the ancient Hindu temple of Angkor Wat located in the Siem Reap province. Other popular tourist attractions include the Royal Palace, Phnom Penh, as well as ecotourism spots such as Tonlé Sap Lake and the Mekong River.
The tourism industry in Cambodia has been supported by the development of important transportation infrastructure, in particular Cambodia's two international airports in Phnom Penh and Siem Reap. For the Cambodian economy, tourism has been a means of accumulating foreign currency earnings and providing employment for the Cambodian workforce, with about 250,000 jobs generated in 2006. Challenges to the industry include the leakage of revenue to foreign markets due to a dependence on foreign goods, as well as the prevalence of child sex tourism.
Gambling industry
Construction
The increase in tourist arrivals has led to growing demand for hotels and other forms of accommodation around tourist hotspots. Siem Reap in particular has seen a construction boom in recent years, and the capital Phnom Penh has also witnessed growth in the construction and real estate sector. Recently, planned projects that had been in the pipeline for several years have been temporarily shelved due to a reduction in foreign investment.
From 2009, the Cambodian government has allowed foreigners to own condominiums. This has helped in attracting real estate investors from Thailand, Malaysia, Singapore and other countries.
The construction sector attracted investment of $2.1 billion in 2012, a 72 per cent rise compared with 2011. Construction licenses issued in 2012 covered 1,694 projects, 20% fewer than in 2011 but higher in total value.
Resources
Oil seeps were discovered in Cambodia as early as the 1950s by Russian and Chinese geologists. Development of the industry was delayed, however, by the Vietnam War, the Cambodian Civil War, and the political uncertainty that followed. Further discoveries of oil and natural gas deposits offshore in the early 2000s led to renewed domestic and international interest in Cambodia's production possibilities. As of 2013, the US company Chevron, Japan's JOGMEC, and other international companies maintained production sites both onshore and offshore. Chevron alone had invested over US$160 million and drilled 18 wells.
Sok Khavan, acting director general of the Cambodian National Petroleum Authority, estimated that once the contracts are finalized and legal issues resolved, the Cambodian government will receive approximately 70% of the revenues, contributing to an economy whose GDP is projected to increase five-fold by 2030. In addition, there are 10,000 square miles offshore in the Gulf of Thailand that hold potential reserves of 12-14 trillion cubic feet of natural gas and an unspecified amount of oil. The rights to this territory are currently the subject of a dispute between Cambodia and Thailand, further delaying any possible production developments. In early 2013 it was reported that the two countries were close to a deal that would allow joint production to begin.
Foreign aid
Cambodia's emerging democracy has received strong international support. Under the mandate of the United Nations Transitional Authority in Cambodia (UNTAC), $1.72 billion was spent in an effort to bring basic security, stability and democratic rule to the country. Various news and media reports suggest that since 1993 the country has received some US$10 billion in foreign aid.
With regard to economic assistance, official donors pledged $880 million at the Ministerial Conference on the Rehabilitation and Reconstruction of Cambodia (MCRRC) in Tokyo in June 1992. In addition to that figure, $119 million was pledged in September 1993 at the International Committee on the Reconstruction of Cambodia (ICORC) meeting in Paris, and $643 million at the March 1994 ICORC meeting in Tokyo.
Cambodia experienced a shortfall in foreign aid in 2005 due to the government's failure to pass anti-corruption laws, open a single import/export window, increase its spending on education, and comply with policies of good governance. In response, the government adopted the National Strategic Development Plan for 2006–10 (also known as the "Third Five-Year Plan"). The plan focused on three major areas:
the speeding up of economic growth at an annual rate of 6-7%
eradicating corruption
developing public structures in favor of quality (i.e. by education, training, and healthcare) over quantity (i.e. rapid population growth)
Banking
There are no significant barriers to bank entry. At the end of 2013, there were 35 commercial banks. Since 2011, new banks with offshore funding have begun to enter the market.
Telecommunications
Energy
Cambodia has significant potential for developing renewable energy, but it remains one of the few countries in the ASEAN region that has not adopted renewable energy targets. To attract more investment in renewable energy, Cambodia could adopt targets, improve renewable energy governance, develop a regulatory framework, improve project bankability and facilitate market entry for international investors. Because of its high vulnerability to climate change, it is recommended that Cambodia focus on developing renewable energy away from fossil fuels as part of its climate change mitigation measures.
Transport
Child labour
Trade - EBA Issues
On February 12, 2020, the EU announced the suspension of "Everything But Arms" (EBA) trade preferences for Cambodia. The country had been known as the second-largest beneficiary of the EBA program. The EU had sent its preliminary conclusion to the Cambodian government on November 12, 2019, because Cambodia had failed to address serious human and labor rights concerns documented by Human Rights Watch. In particular, the dissolution of the opposition party (CNRP) and the charges against its leader were cited as violations of the right to freedom of expression.
Challenges for industrial development
Although Cambodia exports mainly garments and products from agriculture and fisheries, it is striving to diversify the economy. There is some evidence of expansion in value-added exports from a low starting point, largely thanks to the manufacture of electrical goods and telecommunications equipment by foreign multinationals established in the country. Between 2008 and 2013, high-tech exports climbed from just US$3.8 million to US$76.5 million.
It will be challenging for Cambodia to enhance the technological capacity of the many small and medium-sized enterprises (SMEs) active in agriculture, engineering and the natural sciences. Whereas the large foreign firms in Cambodia that are the main source of value-added exports tend to specialize in electrical machinery and telecommunications, the principal task for science and technology policy will be to facilitate spillovers in terms of skills and innovation capability from these large operators towards smaller firms and across other sectors.
There is little evidence that the Law on Patents, Utility Model Certificates and Industrial Designs (2006) has been of practical use, thus far, to any but the larger foreign firms operating in Cambodia. By 2012, 27 patent applications had been filed, all by foreigners. Of the 42 applications for industrial design received up to 2012, 40 had been filed by foreigners. Nevertheless, the law has no doubt encouraged foreign firms to introduce technological improvements to their on-shore production systems, which can only be beneficial.
Statistics
Investment (gross fixed)
3% of GDP (2011 est.)
Household income or consumption by percentage share
lowest 10%: 2.6%
highest 10%: 23.7% (2011)
Agriculture - products
rice, rubber, corn, vegetables, cashews, tapioca, silk
Industries
tourism, garments, construction, rice milling, fishing, wood and wood products, rubber, cement, gem mining, textiles
Industrial production growth rate
5.7% (2011 est.)
Electricity
Exchange rates
See also
Special Economic Zones of Cambodia
Cambodia and the World Bank
Sources
References
External links
CAMBODIA INVESTMENT GUIDEBOOK 2013 (Council for the Development of Cambodia)
Economies of developing countries
Cambodia |
5437 | https://en.wikipedia.org/wiki/Khmer%20architecture | Khmer architecture | Khmer architecture, also known as Angkorian architecture, is the architecture produced by the Khmers during the Angkor period of the Khmer Empire, from approximately the latter half of the 8th century CE to the first half of the 15th century CE.
The architecture of the Indian rock-cut temples, particularly in sculpture, had an influence on Southeast Asia and was widely adopted into the Indianised architecture of Cambodian (Khmer), Annamese and Javanese temples (of the Greater India). Evolved from Indian influences, Khmer architecture became clearly distinct from that of the Indian sub-continent as it developed its own special characteristics, some of which were created independently and others of which were incorporated from neighboring cultural traditions, resulting in a new artistic style in Asian architecture unique to the Angkorian tradition. The development of Khmer architecture as a distinct style is particularly evident in artistic depictions of divine and royal figures with facial features representative of the local Khmer population, including rounder faces, broader brows, and other physical characteristics. In any study of Angkorian architecture, the emphasis is necessarily on religious architecture, since all the remaining Angkorian buildings are religious in nature. During the period of Angkor, only temples and other religious buildings were constructed of stone.
Non-religious buildings such as dwellings were constructed of perishable materials such as wood, and so have not survived. The religious architecture of Angkor has characteristic structures, elements, and motifs, which are identified in the glossary below. Since a number of different architectural styles succeeded one another during the Angkorean period, not all of these features were equally in evidence throughout the period. Indeed, scholars have referred to the presence or absence of such features as one source of evidence for dating the remains.
Periodization
Many temples had been built before Cambodia became the powerful Khmer Empire which dominated a large part of mainland Southeast Asia. At that time, Cambodia was known as the Chenla kingdom, the predecessor state of the Khmer empire.
Recent research indicates that the Khmer already erected stone buildings in the protohistoric period, which they used for the worship of mighty tutelary spirits. This earliest extant architecture consists of relatively small cells made from prefabricated megalithic construction parts, which probably date at least to the second century BC.
There are three pre-Angkorean architectural styles:
Sambor Prei Kuk style (610–650): Sambor Prei Kuk, also known as Isanapura, was the capital of the Chenla Kingdom. Temples of Sambor Prei Kuk were built in rounded, plain colonettes with capitals that include a bulb.
Prei Khmeng style (635–700): structures reveal masterpieces of sculpture, but examples are scarce. Colonettes are larger than those of previous styles. Buildings were more heavily decorated but showed a general decline in standards.
Kompong Preah style (700–800): temples with more decorative rings on colonettes, which remain cylindrical. Brick construction continued.
Scholars have worked to develop a periodization of Angkorean architectural styles. The following periods and styles may be distinguished. Each is named for a particular temple regarded as paradigmatic for the style.
Kulen style (825–875): continuation of the pre-Angkorean style, but a period of innovation and borrowing, such as from Cham temples. Towers are mainly square and relatively high, built of brick with laterite walls and stone door surrounds; square and octagonal colonettes begin to appear.
Preah Ko style (877–886): Hariharalaya was the first capital city of the Khmer empire located in the area of Angkor; its ruins are in the area now called Roluos some fifteen kilometers southeast of the modern city of Siem Reap. The earliest surviving temple of Hariharalaya is Preah Ko; the others are Bakong and Lolei. The temples of the Preah Ko style are known for their small brick towers and for the great beauty and delicacy of their lintels.
Bakheng Style (889–923): Bakheng was the first temple mountain constructed in the area of Angkor proper north of Siem Reap. It was the state temple of King Yasovarman, who built his capital of Yasodharapura around it. Located on a hill (phnom), it is currently one of the most endangered of the monuments, having become a favorite perch for tourists eager to witness a glorious sundown at Angkor.
Koh Ker Style (921–944): during the reign of King Jayavarman IV, the capital of the Khmer Empire was moved from the Angkor region north to Koh Ker. In the architectural style of the Koh Ker temples, the scale of buildings diminishes toward the center. Brick remained the main material, but sandstone was also used.
Pre Rup Style (944–968): under King Rajendravarman, the Angkorian Khmer built the temples of Pre Rup, East Mebon and Phimeanakas. Their common style is named after the state temple mountain of Pre Rup.
Banteay Srei Style (967–1000): Banteay Srei is the only major Angkorian temple constructed not by a monarch, but by a courtier. It is known for its small scale and the extreme refinement of its decorative carvings, including several famous narrative bas-reliefs dealing with scenes from Indian mythology.
Khleang Style (968–1010): the Khleang temples mark the first use of galleries. Cruciform gopuras. Octagonal colonettes. Restrained decorative carving. Temples built in this style include Ta Keo and Phimeanakas.
Baphuon Style (1050–1080): Baphuon, the massive temple mountain of King Udayadityavarman II was apparently the temple that most impressed the Chinese traveller Zhou Daguan, who visited Angkor toward the end of the 13th century. Its unique relief carvings have a naive dynamic quality that contrast with the rigidity of the figures typical of some other periods. As of 2008, Baphuon is under restoration and cannot currently be appreciated in its full magnificence.
Classical or Angkor Wat Style (1080–1175): Angkor Wat, the temple and perhaps the mausoleum of King Suryavarman II, is the greatest of the Angkorian temples and defines what has come to be known as the classical style of Angkorian architecture. Other temples in this style are Banteay Samre and Thommanon in the area of Angkor, and Phimai in modern Thailand.
Bayon Style (1181–1243): in the final quarter of the 12th century, King Jayavarman VII freed the country of Angkor from occupation by an invading force from Champa. Thereafter, he began a massive program of monumental construction, paradigmatic for which was the state temple called the Bayon. The king's other foundations participated in the style of the Bayon, and included Ta Prohm, Preah Khan, Angkor Thom, and Banteay Chhmar. Though grandiose in plan and elaborately decorated, the temples exhibit a hurriedness of construction that contrasts with the perfection of Angkor Wat.
Post Bayon Style (1243–1431): following the period of frantic construction under Jayavarman VII, Angkorian architecture entered the period of its decline. The 13th century Terrace of the Leper King is known for its dynamic relief sculptures of demon kings, dancers, and nāgas.
Materials
Angkorian builders used brick, sandstone, laterite and wood as their materials. The ruins that remain are of brick, sandstone and laterite, the wood elements having been lost to decay and other destructive processes.
Brick
The earliest Angkorian temples were made mainly of brick. Good examples are the temple towers of Preah Ko, Lolei and Bakong at Hariharalaya, and Chóp Mạt in Tay Ninh. Decorations were usually carved into a stucco applied to the brick, rather than into the brick itself, because brick is a softer material that does not lend itself to sculpting, unlike stones such as sandstone or granite. However, the tenets of sacred architecture as enunciated in the Vedas and the Shastras require that no adhesives be used when building blocks are assembled to create temples; for this reason, brick was used only in relatively small temples such as Lolei and Preah Ko. Moreover, brick is much weaker than stone and degrades with age.
Angkor's neighbor state of Champa was also the home to numerous brick temples that are similar in style to those of Angkor. The most extensive ruins are at Mỹ Sơn in Vietnam. A Cham story tells of the time that the two countries settled an armed conflict by means of a tower-building contest proposed by the Cham King Po Klaung Garai. While the Khmer built a standard brick tower, Po Klaung Garai directed his people to build an impressive replica of paper and wood.
Sandstone
The only stone used by Angkorian builders was sandstone, obtained from the Kulen mountains. Since it was considerably more expensive to obtain than brick, sandstone came into use only gradually, and at first was used for particular elements such as door frames. The 10th-century temple of Ta Keo is the first Angkorian temple to have been constructed more or less entirely from sandstone.
Laterite
Angkorian builders used laterite, a clay that is soft when taken from the ground but that hardens when exposed to the sun, for foundations and other hidden parts of buildings. Because the surface of laterite is uneven, it was not suitable for decorative carvings unless first dressed with stucco. Laterite was more commonly used in the Khmer provinces than at Angkor itself. Because the water table in this region is high, laterite was used in the underlying layers of Angkor Wat and other temples (especially the larger ones), because it can absorb water and so contributes to the stability of the structure.
Structures
Central sanctuary
The central sanctuary of an Angkorian temple was home to the temple's primary deity, the one to whom the site was dedicated: typically Shiva or Vishnu in the case of a Hindu temple, Buddha or a bodhisattva in the case of a Buddhist temple. The deity was represented by a statue (or in the case of Shiva, most commonly by a linga). Since the temple was not considered a place of worship for use by the population at large, but rather a home for the deity, the sanctuary needed only to be large enough to hold the statue or linga; it was never more than a few metres across. Its importance was instead conveyed by the height of the tower (prasat) rising above it, by its location at the centre of the temple, and by the greater decoration on its walls. Symbolically, the sanctuary represented Mount Meru, the legendary home of the Hindu gods.
Prang
The prang is the tall finger-like spire, usually richly carved, common to much Khmer religious architecture.
Enclosure
Khmer temples were typically enclosed by a concentric series of walls, with the central sanctuary in the middle; this arrangement represented the mountain ranges surrounding Mount Meru, the mythical home of the gods. Enclosures are the spaces between these walls, and between the innermost wall and the temple itself. By modern convention, enclosures are numbered from the centre outwards. The walls defining the enclosures of Khmer temples are frequently lined by galleries, while passage through the walls is by way of gopuras located at the cardinal points.
Gallery
A gallery is a passageway running along the wall of an enclosure or along the axis of a temple, often open to one or both sides. Historically, the form of the gallery evolved during the 10th century from the increasingly long hallways which had earlier been used to surround the central sanctuary of a temple. During the period of Angkor Wat in the first half of the 12th century, additional half galleries on one side were introduced to buttress the structure of the temple.
Gopura
A gopura is an entrance building. At Angkor, passage through the enclosure walls surrounding a temple compound is frequently accomplished by means of an impressive gopura, rather than just an aperture in the wall or a doorway. Enclosures surrounding a temple are often constructed with a gopura at each of the four cardinal points. In plan, gopuras are usually cross-shaped and elongated along the axis of the enclosure wall.
If the wall is constructed with an accompanying gallery, the gallery is sometimes connected to the arms of the gopura. Many Angkorian gopuras have a tower at the centre of the cross. The lintels and pediments are often decorated, and guardian figures (dvarapalas) are often placed or carved on either side of the doorways.
Hall of Dancers
A Hall of Dancers is a structure of a type found in certain late 12th-century temples constructed under King Jayavarman VII: Ta Prohm, Preah Khan, Banteay Kdei and Banteay Chhmar. It is a rectangular building elongated along the temple's east axis and divided into four courtyards by galleries. Formerly it had a roof made of perishable materials; now only the stone walls remain. The pillars of the galleries are decorated with carved designs of dancing apsaras; hence scholars have suggested that the hall itself may have been used for dancing.
House of Fire
House of Fire, or Dharmasala, is the name given to a type of building found only in temples constructed during the reign of late 12th-century monarch Jayavarman VII: Preah Khan, Ta Prohm and Banteay Chhmar. A House of Fire has thick walls, a tower at the west end and south-facing windows.
Scholars theorize that the House of Fire functioned as a "rest house with fire" for travellers. An inscription at Preah Khan tells of 121 such rest houses lining the highways into Angkor. The Chinese traveller Zhou Daguan expressed his admiration for these rest houses when he visited Angkor in 1296 CE. Another theory is that the House of Fire had a religious function as the repository of the sacred flame used in sacred ceremonies.
Library
Structures conventionally known as "libraries" are a common feature of the Khmer temple architecture, but their true purpose remains unknown. Most likely they functioned broadly as religious shrines rather than strictly as repositories of manuscripts. Freestanding buildings, they were normally placed in pairs on either side of the entrance to an enclosure, opening to the west.
Srah and baray
Srahs and barays were reservoirs, generally created by excavation and embankment, respectively. It is not clear whether the significance of these reservoirs was religious, agricultural, or a combination of the two.
The two largest reservoirs at Angkor were the West Baray and the East Baray located on either side of Angkor Thom. The East Baray is now dry. The West Mebon is an 11th-century temple standing at the center of the West Baray and the East Mebon is a 10th-century temple standing at the center of the East Baray.
The baray associated with Preah Khan is the Jayataka, in the middle of which stands the 12th-century temple of Neak Pean. Scholars have speculated that the Jayataka represents the Himalayan lake of Anavatapta, known for its miraculous healing powers.
Temple mountain
The dominant scheme for the construction of state temples in the Angkorian period was that of the Temple Mountain, an architectural representation of Mount Meru, the home of the gods in Hinduism. Enclosures represented the mountain chains surrounding Mount Meru, while a moat represented the ocean. The temple itself took shape as a pyramid of several levels, and the home of the gods was represented by the elevated sanctuary at the center of the temple.
The first great temple mountain was the Bakong, a five-level pyramid dedicated in 881 by King Indravarman I. The structure of Bakong took the shape of a stepped pyramid, popularly identified as the temple mountain of early Khmer temple architecture. The striking similarity between the Bakong and Borobudur in Java, extending to architectural details such as the gateways and the stairs to the upper terraces, strongly suggests that Borobudur may have served as the prototype for the Bakong. There must have been exchanges of travelers, if not missions, between the Khmer kingdom and the Sailendras in Java, transmitting to Cambodia not only ideas but also technical and architectural details of Borobudur, including arched gateways built by the corbelling method.
Other Khmer temple mountains include Baphuon, Pre Rup, Ta Keo, Koh Ker, the Phimeanakas, and most notably the Phnom Bakheng at Angkor.
According to Charles Higham, "A temple was built for the worship of the ruler, whose essence, if a Saivite, was embodied in a linga... housed in the central sanctuary which served as a temple-mausoleum for the ruler after his death...these central temples also contained shrines dedicated to the royal ancestors and thus became centres of ancestor worship".
Elements
Bas-relief
Bas-reliefs are individual figures, groups of figures, or entire scenes cut into stone walls, not as drawings but as sculpted images projecting from a background. Sculpture in bas-relief is distinguished from sculpture in haut-relief, in that the latter projects farther from the background, in some cases almost detaching itself from it. The Angkorian Khmer preferred to work in bas-relief, while their neighbors the Cham were partial to haut-relief.
Narrative bas-reliefs are bas-reliefs depicting stories from mythology or history. Until about the 11th century, the Angkorian Khmer confined their narrative bas-reliefs to the space on the tympana above doorways. The most famous early narrative bas-reliefs are those on the tympana at the 10th-century temple of Banteay Srei, depicting scenes from Hindu mythology as well as scenes from the great works of Indian literature, the Ramayana and the Mahabharata.
By the 12th century, however, the Angkorian artists were covering entire walls with narrative scenes in bas-relief. At Angkor Wat, the external gallery wall is covered with some 12,000 or 13,000 square meters of such scenes, some of them historical, some mythological. Similarly, the outer gallery at the Bayon contains extensive bas-reliefs documenting the everyday life of the medieval Khmer as well as historical events from the reign of King Jayavarman VII.
The following is a listing of the motifs illustrated in some of the more famous Angkorian narrative bas-reliefs:
bas-reliefs in the tympana at Banteay Srei (10th century)
the duel of the monkey princes Vali and Sugriva, and the intervention of the human hero Rama on behalf of the latter
the duel of Bhima and Duryodhana at the Battle of Kurukshetra
the Rakshasa king Ravana shaking Mount Kailasa, upon which sit Shiva and his shakti
Kama firing an arrow at Shiva as the latter sits on Mount Kailasa
the burning of Khandava Forest by Agni and Indra's attempt to extinguish the flames
bas-reliefs on the walls of the outer gallery at Angkor Wat (mid-12th century)
the Battle of Lanka between the Rakshasas and the vanaras or monkeys
the court and procession of King Suryavarman II, the builder of Angkor Wat
the Battle of Kurukshetra between Pandavas and Kauravas
the judgment of Yama and the tortures of Hell
the Churning of the Ocean of Milk
a battle between devas and asuras
a battle between Vishnu and a force of asuras
the conflict between Krishna and the asura Bana
the story of the monkey princes Vali and Sugriva
bas-reliefs on the walls of the outer and inner galleries at the Bayon (late 12th century)
battles on land and sea between Khmer and Cham troops
scenes from the everyday life of Angkor
civil strife among the Khmer
the legend of the Leper King
the worship of Shiva
groups of dancing apsaras
Blind door and window
Angkorean shrines frequently opened in only one direction, typically to the east. The other three sides featured fake or blind doors to maintain symmetry. Blind windows were often used along otherwise blank walls.
Colonnette
Colonnettes were narrow decorative columns that served as supports for the beams and lintels above doorways or windows. Depending on the period, they were round, rectangular, or octagonal in shape. Colonnettes were often circled with molded rings and decorated with carved leaves.
Corbelling
Angkorian engineers tended to use the corbel arch in order to construct rooms, passageways and openings in buildings. A corbel arch is constructed by adding layers of stones to the walls on either side of an opening, with each successive layer projecting further towards the centre than the one supporting it from below, until the two sides meet in the middle. The corbel arch is structurally weaker than the true arch. The use of corbelling prevented the Angkorian engineers from constructing large openings or spaces in buildings roofed with stone, and made such buildings particularly prone to collapse once they were no longer maintained. These difficulties did not, of course, exist for buildings constructed with stone walls surmounted by a light wooden roof. The problem of preventing the collapse of corbelled structures at Angkor remains a serious one for modern conservation.
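As a rough numerical illustration of the corbelling technique described above, the sketch below estimates how many courses each side of an opening needs before the two faces meet at mid-span, given a fixed overhang per course. The dimensions are hypothetical and are not measurements from any Angkorian monument.

```python
from math import ceil

def corbel_courses(opening_width_m: float, overhang_per_course_m: float) -> int:
    """Courses needed on each side for the two corbelled faces to meet at mid-span.

    Each course projects overhang_per_course_m farther toward the centre than the
    course below it, so each side must cover half the opening width.
    """
    return ceil((opening_width_m / 2) / overhang_per_course_m)

# Hypothetical example: a 2.0 m wide passage with stones overhanging 0.15 m per course.
print(corbel_courses(2.0, 0.15))  # 7 courses per side
```

The smaller the safe overhang per course, the more courses and the greater the height needed to close a span, which is consistent with the narrow, tall corbelled spaces described above.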
Lintel, pediment, and tympanum
A lintel is a horizontal beam connecting two vertical columns between which runs a door or passageway. Because the Angkorean Khmer lacked the ability to construct a true arch, they constructed their passageways using lintels or corbelling. A pediment is a roughly triangular structure above a lintel. A tympanum is the decorated surface of a pediment.
The styles employed by Angkorean artists in the decoration of lintels evolved over time; as a result, the study of lintels has proven a useful guide to the dating of temples. Some scholars have endeavored to develop a periodization of lintel styles. The most beautiful Angkorean lintels are thought to be those of the Preah Ko style from the late 9th century.
Common motifs in the decoration of lintels include the kala, the nāga and the makara, as well as various forms of vegetation. Also frequently depicted are the Hindu gods associated with the four cardinal directions, with the identity of the god depicted on a given lintel or pediment depending on the direction faced by that element. Indra, the god of the sky, is associated with East; Yama, the god of judgment and Hell, with South; Varuna, the god of the ocean, with West; and Kubera, god of wealth, with North.
List of Khmer lintel styles
Sambor Prei Kuk style: inward-facing makaras with tapering bodies. Four arches joined by three medallions, the central one carved with Indra. Small figure on each makara. A variation has figures replacing the makaras and a scene with figures below the arch.
Prei Khmeng style: continuation of Sambor Prei Kuk but makaras disappear, being replaced by incurving ends and figures. Arches more rectilinear. Large figures sometimes at each end. A variation is a central scene below the arch, usually Vishnu Reclining.
Kompong Preah style: high quality carving. Arches replaced by a garland of vegetation (like a wreath) more or less segmented. Medallions disappear, central one sometimes replaced by a knot of leaves. Leafy pendants spray out above and below garland.
Kulen style: great diversity, with influences from Champa and Java, including the kala and outward-facing makaras.
Preah Ko style: some of the most beautiful of all Khmer lintels; rich, well-carved and imaginative. Kala in center, issuing garland on either side. Distinct loops of vegetation curl down from the garland. Outward-facing makaras sometimes appear at the ends. Vishnu on Garuda common.
Bakheng style: continuation of Preah Ko but less fanciful and tiny figures disappear. Loop of vegetation below the naga form tight circular coils. Garland begins to dip in the center.
Koh Ker style: center occupied by a prominent scene, taking up almost the entire height of the lintel. Usually no lower border. Dress of figures shows a curved line to the sampot tucked in below waist.
Pre Rup style: tendency to copy earlier style, especially Preah Ko and Bakheng. Central figures. Re-appearance of lower border.
Banteay Srei style: increase in complexity and detail. Garland sometimes makes pronounced loop on either side with kala at top of each loop. Central figure.
Khleang style: less ornate than those of Banteay Srei. Central kala with triangular tongue, its hands holding the garland which is bent at the center. Kala sometimes surmounted by a divinity. Loops of garland on either side divided by flora stalk and pendant. Vigorous treatment of vegetation.
Baphuon style: the central kala surmounted by divinity, usually riding a steed or a Vishnu scene, typically from the life of Krishna. Loops of garland no longer cut. Another type is a scene with many figures and little vegetation.
Angkor Wat style: centered, framed and linked by garlands. A second type is a narrative scene filled with figures. When nāgas appear, their curls are tight and prominent. Dress mirrors that of devatas and apsaras in bas-reliefs. No empty spaces.
Bayon style: most figures disappear, usually only a kala at the bottom of the lintel surmounted by small figure. Mainly Buddhist motifs. In the middle of the period the garland is cut into four parts, while later a series of whorls of foliage replace the four divisions.
Stairs
Angkorean stairs are notoriously steep. Frequently, the length of the riser exceeds that of the tread, producing an angle of ascent somewhere between 45 and 70 degrees. The reasons for this peculiarity appear to be both religious and monumental. From the religious perspective, a steep stairway can be interpreted as a "stairway to heaven", the realm of the gods. "From the monumental point of view", according to Angkor-scholar Maurice Glaize, "the advantage is clear – the square of the base not having to spread in surface area, the entire building rises to its zenith with a particular thrust".
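The 45 to 70 degree range quoted above follows directly from the riser-to-tread ratio: the angle of ascent is the arctangent of riser over tread. The sketch below computes it for a few hypothetical riser and tread dimensions chosen only to bracket that range, not taken from any surveyed staircase.

```python
from math import atan2, degrees

def ascent_angle(riser_cm: float, tread_cm: float) -> float:
    """Angle of ascent, in degrees, for a stair with the given riser and tread."""
    return degrees(atan2(riser_cm, tread_cm))

# Hypothetical riser/tread pairs bracketing the 45-70 degree range cited above.
for riser, tread in [(30, 30), (35, 20), (40, 15)]:
    print(f"riser {riser} cm, tread {tread} cm -> {ascent_angle(riser, tread):.0f} degrees")
```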
Motifs
Apsara and devata
Apsaras, divine nymphs or celestial dancing girls, are characters from Indian mythology. Their origin is explained in the story of the churning of the Ocean of Milk, or samudra manthan, found in the Vishnu Purana. Other stories in the Mahabharata detail the exploits of individual apsaras, who were often used by the gods as agents to persuade or seduce mythological demons, heroes and ascetics. The widespread use of apsaras as a motif for decorating the walls and pillars of temples and other religious buildings, however, was a Khmer innovation. In modern descriptions of Angkorian temples, the term "apsara" is sometimes used to refer not only to dancers but also to other minor female deities, though minor female deities who are depicted standing rather than dancing are more commonly called "devatas".
Apsaras and devatas are ubiquitous at Angkor, but are most common in the foundations of the 12th century. Depictions of true (dancing) apsaras are found, for example, in the Hall of Dancers at Preah Khan, in the pillars that line the passageways through the outer gallery of the Bayon, and in the famous bas-relief of Angkor Wat depicting the churning of the Ocean of Milk. The largest population of devatas (around 2,000) is at Angkor Wat, where they appear individually and in groups.
Dvarapala
Dvarapalas are human or demonic temple guardians, generally armed with lances and clubs. They are presented either as stone statues or as relief carvings in the walls of temples and other buildings, generally close to entrances or passageways. Their function is to protect the temples. Dvarapalas may be seen, for example, at Preah Ko, Lolei, Banteay Srei, Preah Khan and Banteay Kdei.
Gajasimha and Reachisey
The gajasimha is a mythical animal with the body of a lion and the head of an elephant. At Angkor, it is portrayed as a guardian of temples and as a mount for some warriors. The gajasimha may be found at Banteay Srei and at the temples belonging to the Roluos group.
The reachisey is another mythical animal, similar to the gajasimha, with the head of a lion, a short elephantine trunk, and the scaly body of a dragon. It occurs at Angkor Wat in the epic bas reliefs of the outer gallery.
Garuda
Garuda is a divine being that is part man and part bird. He is the lord of birds, the mythological enemy of nāgas, and the battle steed of Vishnu. Depictions of Garuda at Angkor number in the thousands, and though Indian in inspiration exhibit a style that is uniquely Khmer. They may be classified as follows:
As part of a narrative bas relief, Garuda is shown as the battle steed of Vishnu or Krishna, bearing the god on his shoulders, and simultaneously fighting against the god's enemies. Numerous such images of Garuda may be observed in the outer gallery of Angkor Wat.
Garuda serves as an atlas supporting a superstructure, as in the bas relief at Angkor Wat that depicts heaven and hell. Garudas and stylized mythological lions are the most common atlas figures at Angkor.
Garuda is depicted in the pose of a victor, often dominating a nāga, as in the gigantic relief sculptures on the outer wall of Preah Khan. In this context, Garuda symbolizes the military power of the Khmer kings and their victories over their enemies. Not coincidentally, the city of Preah Khan was built on the site of King Jayavarman VII's victory over invaders from Champa.
In free-standing nāga sculptures, such as in nāga bridges and balustrades, Garuda is often depicted in relief against the fan of nāga heads. The relationship between Garuda and the nāga heads is ambiguous in these sculptures: it may be one of cooperation, or it may again be one of domination of the nāga by Garuda.
Indra
In the ancient religion of the Vedas, Indra the sky-god reigned supreme. In the medieval Hinduism of Angkor, however, he had no religious status, and served only as a decorative motif in architecture. Indra is associated with the East; since Angkorian temples typically open to the East, his image is sometimes encountered on lintels and pediments facing that direction. Typically, he is mounted on the three-headed elephant Airavata and holds his trusty weapon, the thunderbolt or vajra. The numerous adventures of Indra documented in Hindu epic Mahabharata are not depicted at Angkor.
Kala
The kala is a ferocious monster symbolic of time in its all-devouring aspect and associated with the destructive side of the god Siva. In Khmer temple architecture, the kala serves as a common decorative element on lintels, tympana and walls, where it is depicted as a monstrous head with a large upper jaw lined by large carnivorous teeth, but with no lower jaw. Some kalas are shown disgorging vine-like plants, and some serve as the base for other figures.
Scholars have speculated that the origin of the kala as a decorative element in Khmer temple architecture may be found in an earlier period when the skulls of human victims were incorporated into buildings as a kind of protective magic or apotropaism. Such skulls tended to lose their lower jaws when the ligaments holding them together dried out. Thus, the kalas of Angkor may represent the Khmer civilization's adoption into its decorative iconography of elements derived from long forgotten primitive antecedents.
Krishna
Scenes from the life of Krishna, a hero and Avatar of the god Vishnu, are common in the relief carvings decorating Angkorian temples, but unknown in Angkorian sculpture in the round. The literary sources for these scenes are the Mahabharata, the Harivamsa, and the Bhagavata Purana. The following are some of the most important Angkorian depictions of the life of Krishna:
A series of bas reliefs at the 11th-century temple pyramid called Baphuon depicts scenes of the birth and childhood of Krishna.
Numerous bas reliefs in various temples show Krishna subduing the nāga Kaliya. In Angkorian depictions, Krishna is shown effortlessly stepping on and pushing down his opponent's multiple heads.
Also common is the depiction of Krishna as he lifts Mount Govardhana with one hand in order to provide the cowherds with shelter from the deluge caused by Indra.
Krishna is frequently depicted killing or subduing various demons, including his evil uncle Kamsa. An extensive bas relief in the outer gallery of Angkor Wat depicts Krishna's battle with the asura Bana. In battle, Krishna is shown riding on the shoulders of Garuda, the traditional mount of Vishnu.
In some scenes, Krishna is depicted in his role as charioteer, advisor and protector of Arjuna, the hero of the Mahabharata. A well-known bas relief from the 10th-century temple of Banteay Srei depicts Krishna and Arjuna helping Agni to burn down the Khandava forest.
Linga
The linga is a phallic post or cylinder symbolic of the god Shiva and of creative power. As a religious symbol, the function of the linga is primarily that of worship and ritual, and only secondarily that of decoration. In the Khmer empire, certain lingas were erected as symbols of the king himself and were housed in royal temples in order to express the king's consubstantiality with Shiva. The lingas that survive from the Angkorian period are generally made of polished stone.
The lingas of the Angkorian period are of several different types.
Some lingas are implanted in a flat square base called a yoni, symbolic of the womb.
On the surface of some lingas is engraved the face of Siva. Such lingas are called mukhalingas.
Some lingas are segmented into three parts: a square base symbolic of Brahma, an octagonal middle section symbolic of Vishnu, and a round tip symbolic of Shiva.
Makara
A makara is a mythical sea monster with the body of a serpent, the trunk of an elephant, and a head that can have features reminiscent of a lion, a crocodile, or a dragon. In Khmer temple architecture, the motif of the makara is generally part of a decorative carving on a lintel, tympanum, or wall. Often the makara is depicted with some other creature, such as a lion or serpent, emerging from its gaping maw. The makara is a central motif in the design of the famously beautiful lintels of the Roluos group of temples: Preah Ko, Bakong, and Lolei. At Banteay Srei, carvings of makaras disgorging other monsters may be observed on many of the corners of the buildings.
Nāga
Mythical serpents, or nāgas, represent an important motif in Khmer architecture as well as in free-standing sculpture. They are frequently depicted as having multiple heads, always odd in number, arranged in a fan. Each head has a flared hood, in the manner of a cobra.
Nāgas are frequently depicted in Angkorian lintels. The composition of such lintels characteristically consists of a dominant image at the center of a rectangle, from which issue swirling elements that reach to the far ends of the rectangle. These swirling elements may take shape as either vinelike vegetation or as the bodies of nāgas. Some such nāgas are depicted wearing crowns, and others are depicted serving as mounts for human riders.
To the Angkorian Khmer, nāgas were symbols of water and figured in the myths of origin for the Khmer people, who were said to be descended from the union of an Indian Brahman and a serpent princess from Cambodia. Nāgas were also characters in other well-known legends and stories depicted in Khmer art, such as the churning of the Ocean of Milk, the legend of the Leper King as depicted in the bas-reliefs of the Bayon, and the story of Mucalinda, the serpent king who protected the Buddha from the elements.
Nāga Bridge
Nāga bridges are causeways or true bridges lined by stone balustrades shaped as nāgas.
In some Angkorian nāga-bridges, as for example those located at the entrances to the 12th-century city of Angkor Thom, the nāga-shaped balustrades are supported not by simple posts but by stone statues of gigantic warriors. These giants are the devas and asuras who used the nāga king Vasuki in order to churn the Ocean of Milk in quest of the amrita or elixir of immortality. The story of the Churning of the Ocean of Milk or samudra manthan has its source in Indian mythology.
Quincunx
A quincunx is a spatial arrangement of five elements, with four elements placed at the corners of a square and the fifth placed in the center. The five peaks of Mount Meru were taken to exhibit this arrangement, and Khmer temples were arranged accordingly in order to convey a symbolic identification with the sacred mountain. The five brick towers of the 10th-century temple known as East Mebon, for example, are arranged in the shape of a quincunx. The quincunx also appears elsewhere in designs of the Angkorian period, as in the riverbed carvings of Kbal Spean.
Shiva
Most temples at Angkor are dedicated to Shiva. In general, the Angkorian Khmer represented and worshipped Shiva in the form of a lingam, though they also fashioned anthropomorphic statues of the god. Anthropomorphic representations are also found in Angkorian bas reliefs. A famous tympanum from Banteay Srei depicts Shiva sitting on Mount Kailasa with his consort, while the demon king Ravana shakes the mountain from below. At Angkor Wat and Bayon, Shiva is depicted as a bearded ascetic. His attributes include the mystical eye in the middle of his forehead, the trident, and the rosary. His vahana or mount is the bull Nandi.
Vishnu
Angkorian representations of Vishnu include anthropomorphic representations of the god himself, as well as representations of his incarnations or Avatars, especially Rama and Krishna. Depictions of Vishnu are prominent at Angkor Wat, the 12th-century temple that was originally dedicated to Vishnu. Bas reliefs depict Vishnu battling against asura opponents, or riding on the shoulders of his vahana or mount, the gigantic eagle-man Garuda. Vishnu's attributes include the discus, the conch shell, the baton, and the orb.
Ordinary housing
The nuclear family, in rural Cambodia, typically lives in a rectangular house that may vary in size from four by six meters to six by ten meters. It is constructed of a wooden frame with gabled thatch roof and walls of woven bamboo. Khmer houses typically are raised on stilts as much as three meters for protection from annual floods. Two ladders or wooden staircases provide access to the house. The steep thatch roof overhanging the house walls protects the interior from rain. Typically a house contains three rooms separated by partitions of woven bamboo.
The front room serves as a living room used to receive visitors, the next room is the parents' bedroom, and the third is for unmarried daughters. Sons sleep anywhere they can find space. Family members and neighbors work together to build the house, and a house-raising ceremony is held upon its completion. The houses of poorer persons may contain only a single large room. Food is prepared in a separate kitchen located near the house but usually behind it. Toilet facilities consist of simple pits in the ground, located away from the house, that are covered up when filled. Any livestock is kept below the house.
Chinese and Vietnamese houses in Cambodian towns and villages typically are built directly on the ground and have earthen, cement, or tile floors, depending upon the economic status of the owner. Urban housing and commercial buildings may be of brick, masonry, or wood.
See also
New Khmer Architecture
Rural Khmer house
Khmer sculpture
Indian influence:
Influence of Indian Hindu temple architecture on Southeast Asia
History of Indian influence on Southeast Asia
References
Bibliography
Coedès, George. Pour mieux comprendre Angkor. Hanoi: Imprimerie d'Extrême-Orient, 1943.
Forbes, Andrew; Henley, David (2011). Angkor, Eighth Wonder of the World. Chiang Mai: Cognoscenti Books.
Freeman, Michael and Jacques, Claude. Ancient Angkor. Bangkok: River Books, 1999.
Glaize, Maurice. The Monuments of the Angkor Group. 1944. A translation from the original French into English is available online at theangkorguide.com.
Jessup, Helen Ibbitson. Art & Architecture of Cambodia. London: Thames & Hudson, 2004.
Ngô Vǎn Doanh, Champa:Ancient Towers. Hanoi: The Gioi Publishers, 2006.
Roveda, Vittorio. Images of the Gods: Khmer Mythology in Cambodia, Laos & Thailand. Bangkok: River Books, 2005.
Sthapatyakam. The Architecture of Cambodia. Phnom Penh: Department of Media and Communication, Royal University of Phnom Penh, 2012.
Gabel, Joachim. Earliest Khmer Stone Architecture and its Origin: A Case Study of Megalithic Remains and Spirit Belief at the Site of Vat Phu. In: JoGA 2022: 2–137, Journal of Global Archaeology.
External links
Churning the Sea of Time (documentary film) on IMDb
Khmer Empire
Hindu temple architecture
Buddhist temples
Archaeological sites in Cambodia |
5447 | https://en.wikipedia.org/wiki/Cameroon | Cameroon | Cameroon, officially the Republic of Cameroon, is a country in Central Africa. It shares boundaries with Nigeria to the west and north, Chad to the northeast, the Central African Republic to the east, and Equatorial Guinea, Gabon and the Republic of the Congo to the south. Its coastline lies on the Bight of Biafra, part of the Gulf of Guinea and the Atlantic Ocean. Due to its strategic position at the crossroads between West Africa and Central Africa, it has been categorized as belonging to both regions. Its nearly 27 million people speak some 250 native languages in addition to English, French, or both.
Early inhabitants of the territory included the Sao civilisation around Lake Chad, and the Baka hunter-gatherers in the southeastern rainforest. Portuguese explorers reached the coast in the 15th century and named the area Rio dos Camarões (Shrimp River), which became Cameroon in English. Fulani soldiers founded the Adamawa Emirate in the north in the 19th century, and various ethnic groups of the west and northwest established powerful chiefdoms and fondoms. Cameroon became a German colony in 1884 known as Kamerun. After World War I, it was divided between France and the United Kingdom as League of Nations mandates. The Union des Populations du Cameroun (UPC) political party advocated independence, but was outlawed by France in the 1950s, leading to the national liberation insurgency fought between French and UPC militant forces until early 1971. In 1960, the French-administered part of Cameroon became independent, as the Republic of Cameroun, under President Ahmadou Ahidjo. The southern part of British Cameroons federated with it in 1961 to form the Federal Republic of Cameroon. The federation was abandoned in 1972, when the country was renamed the United Republic of Cameroon; it reverted to the Republic of Cameroon in 1984 by presidential decree of Paul Biya. Biya, the incumbent president, has led the country since 1982, following Ahidjo's resignation; he previously held office as prime minister from 1975 onward. Cameroon is governed as a unitary presidential republic.
The official languages of Cameroon are French and English, the official languages of former French Cameroons and British Cameroons. Christianity is the majority religion in Cameroon, with significant minorities practicing Islam and traditional faiths. The country has experienced tensions with its English-speaking territories, where politicians have advocated for greater decentralisation and even complete separation or independence (as in the Southern Cameroons National Council). In 2017, tensions over the creation of an Ambazonian state in the English-speaking territories escalated into open warfare.
Large numbers of Cameroonians live as subsistence farmers. The country is often referred to as "Africa in miniature" for its geological, linguistic and cultural diversity. Its natural features include beaches, deserts, mountains, rainforests, and savannas. Its highest point is Mount Cameroon in the Southwest Region. Its most populous cities are Douala on the Wouri River, its economic capital and main seaport; Yaoundé, its political capital; and Garoua. Limbé in the southwest has a natural seaport. Cameroon is well known for its native music styles, particularly Makossa, Njang and Bikutsi, and for its successful national football team. It is a member state of the African Union, the United Nations, the Organisation internationale de la Francophonie (OIF), the Commonwealth of Nations, the Non-Aligned Movement and the Organisation of Islamic Cooperation.
Etymology
Originally, Cameroon was the exonym given by the Portuguese to the Wouri River, which they called Rio dos Camarões meaning "river of shrimps" or "shrimp river", referring to the then abundant Cameroon ghost shrimp. Today the country's name in Portuguese remains Camarões.
History
Early history
Present-day Cameroon was first settled in the Neolithic Era. The longest continuous inhabitants are groups such as the Baka (Pygmies). From there, Bantu migrations into eastern, southern and central Africa are believed to have occurred about 2,000 years ago. The Sao culture arose around Lake Chad and gave way to the Kanem Empire and its successor state, the Bornu Empire. Kingdoms, fondoms, and chiefdoms arose in the west.
Portuguese sailors reached the coast in 1472. They noted an abundance of the ghost shrimp Lepidophthalmus turneranus in the Wouri River and named it Rio dos Camarões (Shrimp River), which became Cameroon in English. Over the following few centuries, European interests regularised trade with the coastal peoples, and Christian missionaries pushed inland.
In 1896, Sultan Ibrahim Njoya created the Bamum script, or Shu Mom, for the Bamum language. It is taught in Cameroon today by the Bamum Scripts and Archives Project.
German rule
Germany began to establish roots in Cameroon in 1868 when the Woermann Company of Hamburg built a warehouse on the estuary of the Wouri River. Later, Gustav Nachtigal made a treaty with one of the local kings to annex the region for the German emperor. The German Empire claimed the territory as the colony of Kamerun in 1884 and began a steady push inland; the natives resisted. Under the aegis of Germany, commercial companies were entrusted with running the local administration. These concessions used forced labour to run profitable banana, rubber, palm oil, and cocoa plantations. Even infrastructure projects relied on a regimen of forced labour. This economic policy was much criticised by the other colonial powers.
French and British rule
With the defeat of Germany in World War I, Kamerun became a League of Nations mandate territory and was split into French Cameroon () and British Cameroon in 1919. France integrated the economy of Cameroon with that of France and improved the infrastructure with capital investments and skilled workers, modifying the colonial system of forced labour.
The British administered their territory from neighbouring Nigeria. Natives complained that this made them a neglected "colony of a colony". Nigerian migrant workers flocked to Southern Cameroons, ending forced labour altogether but angering the local natives, who felt swamped. The League of Nations mandates were converted into United Nations Trusteeships in 1946, and the question of independence became a pressing issue in French Cameroon.
France outlawed the pro-independence political party, the Union of the Peoples of Cameroon (Union des Populations du Cameroun; UPC), on 13 July 1955. This prompted a long guerrilla war waged by the UPC and the assassination of several of the party's leaders, including Ruben Um Nyobè, Félix-Roland Moumié and Ernest Ouandie. In the British Cameroons, the question was whether to reunify with French Cameroon or join Nigeria; the British ruled out the option of independence.
Independence
On 1 January 1960, French Cameroun gained independence from France under President Ahmadou Ahidjo. On 1 October 1961, the formerly British Southern Cameroons gained independence from the United Kingdom by vote of the UN General Assembly and joined with French Cameroun to form the Federal Republic of Cameroon, a date which is now observed as Unification Day, a public holiday. Ahidjo used the ongoing war with the UPC to concentrate power in the presidency, continuing with this even after the suppression of the UPC in 1971.
His political party, the Cameroon National Union (CNU), became the sole legal political party on 1 September 1966 and on 20 May 1972, a referendum was passed to abolish the federal system of government in favour of a United Republic of Cameroon, headed from Yaoundé. This day is now the country's National Day, a public holiday. Ahidjo pursued an economic policy of planned liberalism, prioritising cash crops and petroleum development. The government used oil money to create a national cash reserve, pay farmers, and finance major development projects; however, many initiatives failed when Ahidjo appointed unqualified allies to direct them.
The national flag was changed on 20 May 1975. Two stars were removed, replaced with a large central star as a symbol of national unity.
Ahidjo stepped down on 4 November 1982 and left power to his constitutional successor, Paul Biya. However, Ahidjo remained in control of the CNU and tried to run the country from behind the scenes until Biya and his allies pressured him into resigning. Biya began his administration by moving toward a more democratic government, but a failed coup d'état nudged him toward the leadership style of his predecessor.
An economic crisis took effect from the mid-1980s to the late 1990s as a result of international economic conditions, drought, falling petroleum prices, and years of corruption, mismanagement, and cronyism. Cameroon turned to foreign aid, cut government spending, and privatised industries. With the reintroduction of multi-party politics in December 1990, pressure groups in the former British Southern Cameroons called for greater autonomy, and the Southern Cameroons National Council advocated complete secession as the Republic of Ambazonia. The 1992 Labour Code of Cameroon gives workers the freedom to join a trade union of their choice within their occupation, or to belong to none at all.
In June 2006, a territorial dispute over the Bakassi peninsula was resolved through talks involving President Paul Biya of Cameroon, then President Olusegun Obasanjo of Nigeria and then UN Secretary General Kofi Annan, which resulted in Cameroonian control of the oil-rich peninsula. The northern portion of the territory was formally handed over to the Cameroonian government in August 2006, and the remainder of the peninsula was left to Cameroon two years later, in 2008. The boundary change triggered a local separatist insurgency, as many Bakassians refused to accept Cameroonian rule. While most militants laid down their arms in November 2009, some carried on fighting for years.
In February 2008, Cameroon experienced its worst violence in 15 years when a transport union strike in Douala escalated into violent protests in 31 municipal areas.
In May 2014, in the wake of the Chibok schoolgirls kidnapping, presidents Paul Biya of Cameroon and Idriss Déby of Chad announced they were waging war on Boko Haram, and deployed troops to the Nigerian border. Boko Haram launched several attacks into Cameroon, killing 84 civilians in a December 2014 raid, but suffering a heavy defeat in a raid in January 2015. Cameroon declared victory over Boko Haram on Cameroonian territory in September 2018.
Since November 2016, protesters from the predominantly English-speaking Northwest and Southwest regions of the country have been campaigning for continued use of the English language in schools and courts. People were killed and hundreds jailed as a result of these protests. In 2017, Biya's government blocked the regions' access to the Internet for three months. In September, separatists started a guerrilla war for the independence of the Anglophone region as the Federal Republic of Ambazonia. The government responded with a military offensive, and the insurgency spread across the Northwest and Southwest regions. Fighting between separatist guerrillas and government forces continues. During 2020, numerous terrorist attacks, many of them carried out without claims of credit, and government reprisals have led to bloodshed throughout the country. Since 2016, more than 450,000 people have fled their homes. The conflict indirectly led to an upsurge in Boko Haram attacks, as the Cameroonian military largely withdrew from the north to focus on fighting the Ambazonian separatists.
More than 30,000 people in northern Cameroon fled to Chad after ethnic clashes over access to water between Musgum fishermen and ethnic Arab Choa herders in December 2021.
Politics and government
The President of Cameroon is elected and creates policy, administers government agencies, commands the armed forces, negotiates and ratifies treaties, and declares a state of emergency. The president appoints government officials at all levels, from the prime minister (considered the official head of government) to the provincial governors and divisional officers. The president is selected by popular vote every seven years. There have been two presidents since the independence of Cameroon.
The National Assembly makes legislation. The body consists of 180 members who are elected for five-year terms and meet three times per year. Laws are passed on a majority vote. The 1996 constitution establishes a second house of parliament, the 100-seat Senate. The government recognises the authority of traditional chiefs, fons, and lamibe to govern at the local level and to resolve disputes as long as such rulings do not conflict with national law.
Cameroon's legal system is a mixture of civil law, common law, and customary law. Although nominally independent, the judiciary falls under the authority of the executive's Ministry of Justice. The president appoints judges at all levels. The judiciary is officially divided into tribunals, the court of appeal, and the supreme court. The National Assembly elects the members of a nine-member High Court of Justice that judges high-ranking members of government in the event they are charged with high treason or harming national security.
Political culture
Cameroon is viewed as rife with corruption at all levels of government. In 1997, Cameroon established anti-corruption bureaus in 29 ministries, but only 25% became operational, and in 2012, Transparency International placed Cameroon at number 144 on a list of 176 countries ranked from least to most corrupt. On 18 January 2006, Biya initiated an anti-corruption drive under the direction of the National Anti-Corruption Observatory. Several areas carry a high risk of corruption, for instance customs, the public health sector and public procurement. Despite the existing anti-corruption bureaus, corruption has worsened: Transparency International ranked Cameroon 152nd of 180 countries in 2018.
President Biya's Cameroon People's Democratic Movement (CPDM) was the only legal political party until December 1990. Numerous regional political groups have since formed. The primary opposition is the Social Democratic Front (SDF), based largely in the Anglophone region of the country and headed by John Fru Ndi.
Biya and his party have maintained control of the presidency and the National Assembly in national elections, which rivals contend were unfair. Human rights organisations allege that the government suppresses the freedoms of opposition groups by preventing demonstrations, disrupting meetings, and arresting opposition leaders and journalists. In particular, English-speaking people are discriminated against; protests often escalate into violent clashes and killings. In 2017, President Biya shut down the Internet in the English-speaking region for 94 days, at the cost of hampering five million people, including Silicon Mountain startups.
Freedom House ranks Cameroon as "not free" in terms of political rights and civil liberties. The last parliamentary elections were held on 9 February 2020.
Foreign relations
Cameroon is a member of both the Commonwealth of Nations and La Francophonie.
Its foreign policy closely follows that of its main ally, France (one of its former colonial rulers). Cameroon relies heavily on France for its defence, although military spending is high in comparison to other sectors of government.
President Biya has engaged in a decades-long clash with the government of Nigeria over possession of the oil-rich Bakassi peninsula. Cameroon and Nigeria share a 1,000-mile (1,600 km) border and have disputed the sovereignty of the Bakassi peninsula. In 1994 Cameroon petitioned the International Court of Justice to resolve the dispute. The two countries attempted to establish a cease-fire in 1996; however, fighting continued for years. In 2002, the ICJ ruled that the Anglo-German Agreement of 1913 gave sovereignty to Cameroon. The ruling called for a withdrawal by both countries and denied the request by Cameroon for compensation due to Nigeria's long-term occupation. By 2004, Nigeria had failed to meet the deadline to hand over the peninsula. A UN-mediated summit in June 2006 facilitated an agreement for Nigeria to withdraw from the region and both leaders signed the Greentree Agreement. The withdrawal and handover of control was completed by August 2006.
In July 2019, UN ambassadors of 37 countries, including Cameroon, signed a joint letter to the UNHRC defending China's treatment of Uyghurs in the Xinjiang region.
Military
The Cameroon Armed Forces (French: Forces armées camerounaises, FAC) consists of the country's army (Armée de Terre), the country's navy (Marine Nationale de la République (MNR), includes naval infantry), the Cameroonian Air Force (Armée de l'Air du Cameroun, AAC), and the Gendarmerie.
Men and women aged 18 to 23 who have graduated from high school are eligible for military service. Those who join are obliged to complete four years of service. There is no conscription in Cameroon, but the government makes periodic calls for volunteers.
Human rights
Human rights organisations accuse police and military forces of mistreating and even torturing criminal suspects, ethnic minorities, homosexuals, and political activists. United Nations figures indicate that more than 21,000 people have fled to neighboring countries, while 160,000 have been internally displaced by the violence, many reportedly hiding in forests. Prisons are overcrowded with little access to adequate food and medical facilities, and prisons run by traditional rulers in the north are charged with holding political opponents at the behest of the government. However, since the first decade of the 21st century, an increasing number of police and gendarmes have been prosecuted for improper conduct. On 25 July 2018, UN High Commissioner for Human Rights Zeid Ra'ad Al Hussein expressed deep concern about reports of violations and abuses in the English-speaking Northwest and Southwest regions of Cameroon.
According to OCHA, more than 1.7 million people are in need of humanitarian assistance in the north-west and south-west regions. OCHA also estimates that at least 628,000 people have been internally displaced by violence in the two regions, while more than 87,000 have fled to Nigeria.
Same-sex sexual acts are banned by section 347-1 of the penal code, with a penalty of six months to five years' imprisonment.
According to Human Rights Watch, since December 2020 the Islamist armed group Boko Haram has stepped up attacks and killed at least 80 civilians in towns and villages in the Far North region of Cameroon.
Administrative divisions
The constitution divides Cameroon into 10 semi-autonomous regions, each under the administration of an elected Regional Council. Each region is headed by a presidentially appointed governor.
These leaders are charged with implementing the will of the president, reporting on the general mood and conditions of the regions, administering the civil service, keeping the peace, and overseeing the heads of the smaller administrative units. Governors have broad powers: they may order propaganda in their area and call in the army, gendarmes, and police. All local government officials are employees of the central government's Ministry of Territorial Administration, from which local governments also get most of their budgets.
The regions are subdivided into 58 divisions. These are headed by presidentially appointed divisional officers. The divisions are further split into sub-divisions, headed by assistant divisional officers. The districts, administered by district heads, are the smallest administrative units.
The three northernmost regions are the Far North, North, and Adamawa. Directly south of them are the Centre and East. The South Province lies on the Gulf of Guinea and the southern border. Cameroon's western territory is split into four smaller regions: the Littoral and South-West regions are on the coast, and the North-West and West regions are in the western grassfields.
Geography
By area, Cameroon is the world's 53rd-largest country. The country is located in Central Africa, on the Bight of Bonny, part of the Gulf of Guinea and the Atlantic Ocean. Cameroon lies between latitudes 1° and 13°N, and longitudes 8° and 17°E. Cameroon controls 12 nautical miles of the Atlantic Ocean.
Tourist literature describes Cameroon as "Africa in miniature" because it exhibits all major climates and vegetation of the continent: coast, desert, mountains, rainforest, and savanna. The country's neighbours are Nigeria and the Atlantic Ocean to the west; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon and the Republic of the Congo to the south.
Cameroon is divided into five major geographic zones distinguished by dominant physical, climatic, and vegetative features. The coastal plain extends inland from the Gulf of Guinea. Exceedingly hot and humid with a short dry season, this belt is densely forested and includes some of the wettest places on earth, part of the Cross-Sanaga-Bioko coastal forests.
The South Cameroon Plateau rises inland from the coastal plain. Equatorial rainforest dominates this region, although its alternation between wet and dry seasons makes it less humid than the coast. This area is part of the Atlantic Equatorial coastal forests ecoregion.
An irregular chain of mountains, hills, and plateaus known as the Cameroon range extends from Mount Cameroon on the coast, Cameroon's highest point, almost to Lake Chad at Cameroon's northern border at 13°05'N. This region has a mild climate, particularly on the Western High Plateau, although rainfall is high. Its soils are among Cameroon's most fertile, especially around volcanic Mount Cameroon. Volcanism here has created crater lakes. On 21 August 1986, one of these, Lake Nyos, belched carbon dioxide and killed between 1,700 and 2,000 people. This area has been delineated by the World Wildlife Fund as the Cameroonian Highlands forests ecoregion.
The southern plateau rises northward to the grassy, rugged Adamawa Plateau. This feature stretches from the western mountain area and forms a barrier between the country's north and south. It receives high rainfall between April and October, peaking in July and August. The northern lowland region extends from the edge of the Adamawa to Lake Chad. Its characteristic vegetation is savanna scrub and grass. This is an arid region with sparse rainfall and high median temperatures.
Cameroon has four patterns of drainage. In the south, the principal rivers are the Ntem, Nyong, Sanaga, and Wouri. These flow southwestward or westward directly into the Gulf of Guinea. The Dja and Kadéï drain southeastward into the Congo River. In northern Cameroon, the Bénoué River runs north and west and empties into the Niger. The Logone flows northward into Lake Chad, which Cameroon shares with three neighbouring countries.
Wildlife
Economy and infrastructure
Cameroon's per capita GDP (Purchasing power parity) was estimated as US$3,700 in 2017. Major export markets include the Netherlands, France, China, Belgium, Italy, Algeria, and Malaysia.
Cameroon has had a decade of strong economic performance, with GDP growing at an average of 4% per year. During the 2004–2008 period, public debt was reduced from over 60% of GDP to 10% and official reserves quadrupled to over US$3 billion. Cameroon is part of the Bank of Central African States (of which it is the dominant economy), the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA). Its currency is the CFA franc.
Unemployment was estimated at 3.38% in 2019, and 23.8% of the population was living below the international poverty threshold of US$1.90 a day in 2014. Since the late 1980s, Cameroon has been following programmes advocated by the World Bank and International Monetary Fund (IMF) to reduce poverty, privatise industries, and increase economic growth. The government has taken measures to encourage tourism in the country.
An estimated 70% of the population farms, and agriculture comprised an estimated 16.7% of GDP in 2017. Most agriculture is done at the subsistence scale by local farmers using simple tools. They sell their surplus produce, and some maintain separate fields for commercial use. Urban centres are particularly reliant on peasant agriculture for their foodstuffs. Soils and climate on the coast encourage extensive commercial cultivation of bananas, cocoa, oil palms, rubber, and tea. Inland on the South Cameroon Plateau, cash crops include coffee, sugar, and tobacco. Coffee is a major cash crop in the western highlands, and in the north, natural conditions favour crops such as cotton, groundnuts, and rice. Production of Fairtrade cotton was initiated in Cameroon in 2004.
Livestock are raised throughout the country. Fishing employs 5,000 people and provides over 100,000 tons of seafood each year. Bushmeat, long a staple food for rural Cameroonians, is today a delicacy in the country's urban centres. The commercial bushmeat trade has now surpassed deforestation as the main threat to wildlife in Cameroon.
The southern rainforest has vast timber reserves, estimated to cover 37% of Cameroon's total land area. However, large areas of the forest are difficult to reach. Logging, largely handled by foreign-owned firms, provides the government US$60 million a year in taxes, and laws mandate the safe and sustainable exploitation of timber. Nevertheless, in practice, the industry is one of the least regulated in Cameroon.
Factory-based industry accounted for an estimated 26.5% of GDP in 2017. More than 75% of Cameroon's industrial strength is located in Douala and Bonabéri. Cameroon possesses substantial mineral resources, but these are not extensively mined (see Mining in Cameroon). Petroleum exploitation has fallen since 1986, but this is still a substantial sector such that dips in prices have a strong effect on the economy. Rapids and waterfalls obstruct the southern rivers, but these sites offer opportunities for hydroelectric development and supply most of Cameroon's energy. The Sanaga River powers the largest hydroelectric station, located at Edéa. The rest of Cameroon's energy comes from oil-powered thermal engines. Much of the country remains without reliable power supplies.
Transport in Cameroon is often difficult. Only 6.6% of the roadways are tarred. Roadblocks often serve little other purpose than to allow police and gendarmes to collect bribes from travellers. Road banditry has long hampered transport along the eastern and western borders, and since 2005, the problem has intensified in the east as the Central African Republic has further destabilised.
Intercity bus services run by multiple private companies connect all major cities. They are the most popular means of transportation followed by the rail service Camrail. Rail service runs from Kumba in the west to Bélabo in the east and north to Ngaoundéré. International airports are located in Douala and Yaoundé, with a third under construction in Maroua. Douala is the country's principal seaport. In the north, the Bénoué River is seasonally navigable from Garoua across into Nigeria.
Although press freedoms have improved since the first decade of the 21st century, the press is corrupt and beholden to special interests and political groups. Newspapers routinely self-censor to avoid government reprisals. The major radio and television stations are state-run and other communications, such as land-based telephones and telegraphs, are largely under government control. However, cell phone networks and Internet providers have increased dramatically since the first decade of the 21st century and are largely unregulated.
Cameroon was ranked 123rd in the Global Innovation Index in 2023.
Demographics
Cameroon has a population of nearly 27 million. The life expectancy was 62.3 years (60.6 years for males and 64 years for females).
Cameroon has slightly more women (50.5%) than men (49.5%). Over 60% of the population is under age 25. People over 65 years of age account for only 3.11% of the total population.
Cameroon's population is almost evenly divided between urban and rural dwellers. Population density is highest in the large urban centres, the western highlands, and the northeastern plain. Douala, Yaoundé, and Garoua are the largest cities. In contrast, the Adamawa Plateau, southeastern Bénoué depression, and most of the South Cameroon Plateau are sparsely populated.
According to the World Health Organization, the fertility rate was 4.8 in 2013 with a population growth rate of 2.56%.
People from the overpopulated western highlands and the underdeveloped north are moving to the coastal plantation zone and urban centres for employment. Smaller movements are occurring as workers seek employment in lumber mills and plantations in the south and east. Although the national sex ratio is relatively even, these out-migrants are primarily males, which leads to unbalanced ratios in some regions.
Both monogamous and polygamous marriage are practised, and the average Cameroonian family is large and extended. In the north, women tend to the home, and men herd cattle or work as farmers. In the south, women grow the family's food, and men provide meat and grow cash crops. Cameroonian society is male-dominated, and violence and discrimination against women is common.
The number of distinct ethnic and linguistic groups in Cameroon is estimated to be between 230 and 282. The Adamawa Plateau broadly bisects these into northern and southern divisions. The northern peoples are Sudanic groups, who live in the central highlands and the northern lowlands, and the Fulani, who are spread throughout northern Cameroon. A small number of Shuwa Arabs live near Lake Chad. Southern Cameroon is inhabited by speakers of Bantu and Semi-Bantu languages. Bantu-speaking groups inhabit the coastal and equatorial zones, while speakers of Semi-Bantu languages live in the Western grassfields. Some 5,000 Gyele and Baka Pygmy peoples roam the southeastern and coastal rainforests or live in small, roadside settlements. Nigerians make up the largest group of foreign nationals.
Refugees
In 2007, Cameroon hosted approximately 97,400 refugees and asylum seekers. Of these, 49,300 were from the Central African Republic (many driven west by war), 41,600 from Chad, and 2,900 from Nigeria. Kidnappings of Cameroonian citizens by Central African bandits have increased since 2005.
In the first months of 2014, thousands of refugees fleeing the violence in the Central African Republic arrived in Cameroon.
Languages
The Presidency of Cameroon officially estimates that 70% of the population speaks French and 30% English. German, the language of the original colonisers, has long since been displaced by French and English. Cameroonian Pidgin English is the lingua franca in the formerly British-administered territories. A mixture of English, French, and Pidgin called Camfranglais has been gaining popularity in urban centres since the mid-1970s.
In addition to the colonial languages, there are approximately 250 other languages spoken by nearly 20 million Cameroonians. It is because of this that Cameroon is considered one of the most linguistically diverse countries in the world.
In 2017, there were language protests by the anglophone population against perceived oppression by francophone speakers. The military was deployed against the protesters and people were killed, hundreds imprisoned and thousands fled the country. This culminated in the declaration of an independent Republic of Ambazonia, which has since evolved into the Anglophone Crisis. It is estimated that by June 2020, 740,000 people had been internally displaced as a result of this crisis.
Religion
Cameroon has a high level of religious freedom and diversity. The majority faith is Christianity, practised by about two-thirds of the population, while Islam is a significant minority faith, adhered to by about one-fourth. In addition, traditional faiths are practised by many. Muslims are most concentrated in the north, while Christians are concentrated primarily in the southern and western regions, but practitioners of both faiths can be found throughout the country. Large cities have significant populations of both groups. Muslims in Cameroon are divided into Sufis, Salafis, Shias, and non-denominational Muslims.
People from the North-West and South-West provinces, which used to be a part of British Cameroons, have the highest proportion of Protestants. The French-speaking regions of the southern and western regions are largely Catholic. Southern ethnic groups predominantly follow Christian or traditional African animist beliefs, or a syncretic combination of the two. People widely believe in witchcraft, and the government outlaws such practices. Suspected witches are often subject to mob violence. The Islamist jihadist group Ansar al-Islam has been reported as operating in North Cameroon.
In the northern regions, the locally dominant Fulani ethnic group is almost completely Muslim, but the overall population is fairly evenly divided among Muslims, Christians, and followers of indigenous religious beliefs (called Kirdi ("pagan") by the Fulani). The Bamum ethnic group of the West Region is largely Muslim. Native traditional religions are practised in rural areas throughout the country but rarely are practised publicly in cities, in part because many indigenous religious groups are intrinsically local in character.
Education and health
In 2013, the total adult literacy rate of Cameroon was estimated to be 71.3%. Among youths age 15–24 the literacy rate was 85.4% for males and 76.4% for females. Most children have access to state-run schools that are cheaper than private and religious facilities. The educational system is a mixture of British and French precedents with most instruction in English or French.
Cameroon has one of the highest school attendance rates in Africa. Girls attend school less regularly than boys do because of cultural attitudes, domestic duties, early marriage, pregnancy, and sexual harassment. Although attendance rates are higher in the south, a disproportionate number of teachers are stationed there, leaving northern schools chronically understaffed. In 2013, the primary school enrollment rate was 93.5%.
School attendance in Cameroon is also affected by child labour. Indeed, the United States Department of Labor Findings on the Worst Forms of Child Labor reported that 56% of children aged 5 to 14 were working children and that almost 53% of children aged 7 to 14 combined work and school. In December 2014, a List of Goods Produced by Child Labor or Forced Labor issued by the Bureau of International Labor Affairs mentioned Cameroon among the countries that resorted to child labor in the production of cocoa.
The quality of health care is generally low. Life expectancy at birth was estimated at 56 years in 2012, with 48 healthy life years expected. The fertility rate remains high, at an average of 4.8 births per woman, with an average age of 19.7 years at first birth. In Cameroon, there is only one doctor for every 5,000 people, according to the World Health Organization. In 2014, just 4.1% of total GDP expenditure was allocated to healthcare. Because of financial cuts in the health care system, there are few professionals; doctors and nurses trained in Cameroon often emigrate because pay is poor and the workload high. Nurses are unemployed even though their help is needed; some work voluntarily so that they will not lose their skills. Outside the major cities, facilities are often dirty and poorly equipped.
In 2012, the top three deadly diseases were HIV/AIDS, lower respiratory tract infection, and diarrheal diseases. Endemic diseases include dengue fever, filariasis, leishmaniasis, malaria, meningitis, schistosomiasis, and sleeping sickness. The HIV/AIDS prevalence rate in 2016 was estimated at 3.8% for those aged 15–49, although a strong stigma against the illness keeps the number of reported cases artificially low. 46,000 children under age 14 were estimated to be living with HIV in 2016. In Cameroon, 58% of those living with HIV know their status, and just 37% receive ARV treatment. In 2016, 29,000 deaths due to AIDS occurred in both adults and children.
Breast ironing, a traditional practice that is prevalent in Cameroon, may affect girls' health. Female genital mutilation (FGM), while not widespread, is practiced among some populations; according to a 2013 UNICEF report, 1% of women in Cameroon have undergone FGM. Also impacting women and girls' health, the contraceptive prevalence rate is estimated to be just 34.4% in 2014. Traditional healers remain a popular alternative to evidence-based medicine.
Culture
Music and dance
Music and dance are integral parts of Cameroonian ceremonies, festivals, social gatherings, and storytelling. Traditional dances are highly choreographed and separate men and women or forbid participation by one sex altogether. The dances' purposes range from pure entertainment to religious devotion. Traditionally, music is transmitted orally. In a typical performance, a chorus of singers echoes a soloist.
Musical accompaniment may be as simple as clapping hands and stamping feet, but traditional instruments include bells worn by dancers, clappers, drums and talking drums, flutes, horns, rattles, scrapers, stringed instruments, whistles, and xylophones; combinations of these vary by ethnic group and region. Some performers sing complete songs alone, accompanied by a harplike instrument.
Popular music styles include ambasse bey of the coast, assiko of the Bassa, mangambeu of the Bangangte, and tsamassi of the Bamileke. Nigerian music has influenced Anglophone Cameroonian performers, and Prince Nico Mbarga's highlife hit "Sweet Mother" is the top-selling African record in history.
The two most popular music styles are makossa and bikutsi. Makossa developed in Douala and mixes folk music, highlife, soul, and Congo music. Performers such as Manu Dibango, Francis Bebey, Moni Bilé, and Petit-Pays popularised the style worldwide in the 1970s and 1980s. Bikutsi originated as war music among the Ewondo. Artists such as Anne-Marie Nzié developed it into a popular dance music beginning in the 1940s, and performers such as Mama Ohandja and Les Têtes Brulées popularised it internationally during the 1960s, 1970s and 1980s.
Holidays
The most notable holiday associated with patriotism in Cameroon is National Day, also called Unity Day. Among the most notable religious holidays are Assumption Day, and Ascension Day, which is typically 39 days after Easter. In the Northwest and Southwest provinces, collectively called Ambazonia, October 1 is considered a national holiday, a date Ambazonians consider the day of their independence from Cameroon.
Cuisine
Cuisine varies by region, but a large, one-course, evening meal is common throughout the country. A typical dish is based on cocoyams, maize, cassava (manioc), millet, plantains, potatoes, rice, or yams, often pounded into dough-like fufu. This is served with a sauce, soup, or stew made from greens, groundnuts, palm oil, or other ingredients. Meat and fish are popular but expensive additions, with chicken often reserved for special occasions. Dishes are often quite spicy; seasonings include salt, red pepper sauce, and maggi.
Cutlery is common, but food is traditionally manipulated with the right hand. Breakfast consists of leftovers of bread and fruit with coffee or tea. Generally, breakfast is made from wheat flour in various different foods such as puff-puff (doughnuts), accra banana made from bananas and flour, bean cakes, and many more. Snacks are popular, especially in larger towns where they may be bought from street vendors.
Fashion
Cameroon's relatively large and diverse population is likewise diverse in its fashions. Climate, religious, ethnic and cultural beliefs, and the influences of colonialism, imperialism, and globalization are all factors in contemporary Cameroonian dress.
Notable articles of clothing include: Pagnes, sarongs worn by Cameroon women; Chechia, a traditional hat; kwa, a male handbag; and Gandura, male custom attire.
Wrappers and loincloths are used extensively by both women and men but their use varies by region, with influences from Fulani styles more present in the north and Igbo and Yoruba styles more often in the south and west.
Imane Ayissi is one of Cameroon's top fashion designers and has received international recognition.
Local arts and crafts
Traditional arts and crafts are practiced throughout the country for commercial, decorative, and religious purposes. Woodcarvings and sculptures are especially common. The high-quality clay of the western highlands is used for pottery and ceramics. Other crafts include basket weaving, beadworking, brass and bronze working, calabash carving and painting, embroidery, and leather working. Traditional housing styles use local materials and vary from temporary wood-and-leaf shelters of nomadic Mbororo to the rectangular mud-and-thatch homes of southern peoples. Dwellings of materials such as cement and tin are increasingly common. Contemporary art is mainly promoted by independent cultural organizations (Doual'art, Africréa) and artist-run initiatives (Art Wash, Atelier Viking, ArtBakery).
Literature
Cameroonian literature has concentrated on both European and African themes. Colonial-era writers such as Louis-Marie Pouka and Sankie Maimo were educated by European missionary societies and advocated assimilation into European culture to bring Cameroon into the modern world. After World War II, writers such as Mongo Beti and Ferdinand Oyono analysed and criticised colonialism and rejected assimilation.
Films and literature
Shortly after independence, filmmakers such as Jean-Paul Ngassa and Thérèse Sita-Bella explored similar themes. In the 1960s, Mongo Beti, Ferdinand Léopold Oyono and other writers explored postcolonialism, problems of African development, and the recovery of African identity. In the mid-1970s, filmmakers such as Jean-Pierre Dikongué Pipa and Daniel Kamwa dealt with the conflicts between traditional and postcolonial society. Literature and films during the next two decades focused more on wholly Cameroonian themes.
Sports
National policy strongly advocates sport in all forms. Traditional sports include canoe racing and wrestling, and several hundred runners participate in the Mount Cameroon Race of Hope each year. Cameroon is one of the few tropical countries to have competed in the Winter Olympics.
Sport in Cameroon is dominated by football. Amateur football clubs abound, organised along ethnic lines or under corporate sponsors. The national team has been one of the most successful in Africa since its strong showing in the 1982 and 1990 FIFA World Cups. Cameroon has won five African Cup of Nations titles and the gold medal at the 2000 Olympics.
Cameroon was the host country of the Women's Africa Cup of Nations in November–December 2016, the 2020 African Nations Championship and the 2021 Africa Cup of Nations. The women's football team is known as the "Indomitable Lionesses" and, like its men's counterpart, has been successful on the international stage, although it has not won any major trophy.
Cricket is also an emerging sport in Cameroon, with the Cameroon Cricket Federation participating in international matches.
Cameroon has produced multiple National Basketball Association players including Pascal Siakam, Joel Embiid, D. J. Strawberry, Ruben Boumtje-Boumtje, Christian Koloko, and Luc Mbah a Moute.
The former UFC Heavyweight Champion Francis Ngannou hails from Cameroon.
See also
Index of Cameroon-related articles
Outline of Cameroon
Telephone numbers in Cameroon
Notes
References
Citations
Sources
Further reading
Reporters without Borders. Retrieved 6 April 2007.
Human Development Report 2006. United Nations Development Programme. Retrieved 6 April 2007.
Fonge, Fuabeh P. (1997). Modernization without Development in Africa: Patterns of Change and Continuity in Post-Independence Cameroonian Public Service. Trenton, New Jersey: Africa World Press, Inc.
MacDonald, Brian S. (1997). "Case Study 4: Cameroon", Military Spending in Developing Countries: How Much Is Too Much? McGill-Queen's University Press.
Njeuma, Dorothy L. (no date). "Country Profiles: Cameroon". The Boston College Center for International Higher Education. Retrieved 11 April 2008.
Rechniewski, Elizabeth. "1947: Decolonisation in the Shadow of the Cold War: the Case of French Cameroon." Australian & New Zealand Journal of European Studies 9.3 (2017). online
Sa'ah, Randy Joe (23 June 2006). "Cameroon girls battle 'breast ironing'". BBC News. Retrieved 6 April 2007.
Wright, Susannah, ed. (2006). Cameroon. Madrid: MTH Multimedia S.L.
"World Economic and Financial Surveys". World Economic Outlook Database, International Monetary Fund. September 2006. Retrieved 6 April 2007.
External links
Cameroon. The World Factbook. Central Intelligence Agency.
Cameroon Corruption Profile from Business Anti-Corruption Portal
Cameroon from UCB Libraries GovPubs
Cameroon profile from the BBC News
Key Development Forecasts for Cameroon from International Futures
Government
Presidency of the Republic of Cameroon
Prime Minister's Office
National Assembly of Cameroon
Global Integrity Report: Cameroon, with reporting on anti-corruption in Cameroon
Chief of State and Cabinet Members
Trade
Summary Trade Statistics from World Bank
1960 establishments in Cameroon
Central African countries
Countries in Africa
Countries and territories where English is an official language
French-speaking countries and territories
Member states of the African Union
Member states of the Commonwealth of Nations
Member states of the Organisation internationale de la Francophonie
Member states of the Organisation of Islamic Cooperation
Member states of the United Nations
Republics in the Commonwealth of Nations
States and territories established in 1960
1960 establishments in Africa |
5448 | https://en.wikipedia.org/wiki/History%20of%20Cameroon | History of Cameroon | At the crossroads of West Africa and Central Africa, the territory of what is now Cameroon has seen human habitation since some time in the Middle Paleolithic, likely no later than 130,000 years ago. The earliest discovered archaeological evidence of humans dates from around 30,000 years ago at Shum Laka. The Bamenda highlands in western Cameroon near the border with Nigeria are the most likely origin for the Bantu peoples, whose language and culture came to dominate most of central and southern Africa between 1000 BCE and 1000 CE.
European traders arrived in the fifteenth century and Cameroon was the exonym given by the Portuguese to the Wouri river, which they called Rio dos Camarões—"river of shrimps" or "shrimp river", referring to the then-abundant Cameroon ghost shrimp. Cameroon was a source of slaves for the slave trade. While the northern part of Cameroon was subject to influence from the Islamic kingdoms in the Chad basin and the Sahel, the south was largely ruled by small kings, chieftains, and fons.
Cameroon as a political entity emerged from the colonization of Africa by Europeans. From 1884, Cameroon was a German colony, German Kamerun, with its borders drawn through negotiations between the Germans, British, and French. After the First World War, the League of Nations mandated France to administer most of the territory, with the United Kingdom administering a small portion in the west. Following World War II, the League of Nations' successor, the United Nations, instituted a Trusteeship system, leaving France and Britain in control of their respective regions, French Cameroon and British Cameroon. In 1960, Cameroon became independent with part of British Cameroons voting to join former French Cameroon. Cameroon has had only two presidents since independence and while opposition parties were legalized in 1990 only one party has ever governed. Cameroon has maintained close relations with France and allied itself largely with Western political and economic interests throughout the Cold War and into the twenty-first century. This consistency gave Cameroon a reputation as one of the most stable countries in the region. In 2017, tensions between Anglophone Cameroonians in former British territory and the Francophone-dominated government led to an ongoing civil war known as the Anglophone Crisis in the west of the country, while Islamist insurgents Boko Haram continue to carry out military and terror attacks in the north of the country.
Pre-colonial history
Prehistory
Archaeological research has been relatively scarce in Cameroon due to a lack of resources and transportation infrastructure. Historically, the warm, wet climate in many parts of the country was thought to be inhospitable to the preservation of remains, but recent finds and the introduction of new techniques have challenged that assumption. Evidence from digs at Shum Laka in the Northwest Region shows human occupation dating back 30,000 years, while in the dense forests of the south the oldest evidence of occupation is around 7,000 years old. Recent research in southern Cameroon indicates that the Iron Age may have started there as early as 1000 BCE and was certainly well established by 100 BCE at the latest.
Linguistic analysis, supported by archaeological and genetic research, has shown that the Bantu expansion, a series of migrations that spread Bantu culture across much of Sub-Saharan Africa, most likely originated in the highlands on the Nigeria-Cameroon border around 1000 BCE. Bantu languages spread with these people, along with agricultural methods and possibly iron tools, first east and then south, forming one of the largest language families in Africa. In Cameroon, Bantu people largely displaced Central African Pygmies such as the Baka, who were hunter-gatherers and who now survive in much smaller numbers in the heavily forested southeast. Although Cameroon was the original homeland of the Bantu people, the great medieval Bantu-speaking kingdoms arose elsewhere, in what is now Kenya, Congo, Angola, and South Africa.
Northern Cameroon
The earliest known civilisation to have left clear traces of its presence in the territory of modern Cameroon is the Sao civilisation. The Sao are known for their elaborate terracotta and bronze artwork and their round, walled settlements in the Lake Chad Basin, but little else is known with any certainty due to the lack of historical records. The culture possibly arose as early as the fourth century BC; certainly by the end of the first millennium BC its presence was well established around Lake Chad and near the Chari River. The city-states of the Sao reached their apex sometime between the ninth and fifteenth centuries AD. The Sao were displaced or assimilated by the sixteenth century.
After the Muslim conquest of North Africa in 709, Islam's influence began to spread south with the growth of trans-Saharan trade, including in what is now northern Cameroon. The Kanem Empire arose in what is now Chad in the eighth century, likely came into conflict with the Sao, and gradually extended its influence northward into Libya and southward into Nigeria and Cameroon. Slaves from raids in the south were its principal trade good, along with mined salt. The empire was Muslim from at least the eleventh century and reached its first peak in the thirteenth, controlling most of what is now Chad and smaller regions in surrounding countries. After a period of internal instability, the center of power shifted to Bornu, with its capital at Ngazargamu in what is now northeastern Nigeria; lost territory was gradually reconquered and new territory in present-day Niger also conquered. The Kanem-Bornu Empire began to decline in the seventeenth century, though it continued to control much of northern Cameroon.
From 1804 to 1808, the Fulani War saw the Bornu pushed north out of Cameroon as the Sokoto Caliphate took control of the region, along with most of northern Nigeria and large swathes of Niger and Mali. The Caliphate was a feudal empire in which local rulers pledged allegiance and paid tribute to the Caliph; northern Cameroon was likely part of its Adamawa Emirate. This structure proved susceptible to exploitation by colonial powers, who from the 1870s sought to undermine local rulers' ties to the Caliphate.
Southern Regions
The Muslim empires of the Sahara and Sahel never reached further south than the highlands of the Cameroon Line. Further south, there is little archaeological evidence of large empires or kingdoms and no historical record due to the lack of writing in the region. When the Portuguese arrived in the region in the late fifteenth century, a large number of kings, chiefs, and fons ruled small territories. Many ethnic groups, particularly speakers of the Grassfields languages in the west, have oral histories of migrating south to flee Muslim invaders, likely a reference to the Fulani War and subsequent conflicts in Nigeria and northern Cameroon.
Malaria prevented significant European settlement or exploration until the late 1870s, when large supplies of the malaria suppressant quinine became available. The early European presence in Cameroon was primarily devoted to coastal trade and the acquisition of slaves. The Cameroon coast was a major hub for the purchase of slaves, who were taken across the Atlantic to Brazil, the United States, and the Caribbean. In 1807, Britain abolished the slave trade throughout its empire and began military efforts to suppress it, particularly in West Africa. Combined with the end of legal slave imports into the United States the same year, this caused the international slave trade in Cameroon to decline sharply. Christian missionaries established a presence in the late nineteenth century. Around this time, the Aro Confederacy was expanding its economic and political influence from southeastern Nigeria into western Cameroon. However, the arrival of British and German colonizers cut short its growth and influence.
Colonial Period
Scramble for Africa and German Kamerun (1884-1918)
The Scramble for Africa, beginning in the late 1870s, saw European powers seek to establish formal control over the parts of Africa not yet colonized. The Cameroon coast was of interest both to the British, already established in what is now Nigeria and with missionary outposts in several towns, and to the Germans, who had extensive trading relationships and plantations in the Douala region. On July 5, 1884, German explorer and administrator Gustav Nachtigal began signing agreements with Duala leaders establishing a German protectorate in the region. A brief conflict with rival Duala chiefs ensued, which Germany and its allies won, leaving the British with little choice but to acknowledge Germany's claim to the region. The borders of modern Cameroon were established through a series of negotiations with the British and French. Germany established an administration for the colony, with a capital first at Buea and later at Yaoundé, and continued to explore the interior and co-opt or subjugate local rulers. The largest conflicts were the Bafut Wars and the Adamawa Wars, which ended by 1907 with German victories.
Germany was particularly interested in Cameroon's agricultural potential and entrusted large firms with the task of exploiting and exporting it. German Chancellor Otto von Bismarck defined the order of priorities as follows: "first the merchant, then the soldier". It was under the influence of the businessman Adolph Woermann, whose company had set up a trading house in Douala, that Bismarck, initially skeptical of the colonial project's value, was won over. Large German trading companies (Woermann, Jantzen & Thormählen) and concession companies (Südkamerun Gesellschaft, Nord-West Kamerun Gesellschaft) established a massive presence in the colony. The administration largely let the big companies impose their own order, simply supporting and protecting them and working to put down indigenous rebellions.
The Imperial German government made substantial investments in the infrastructure of Cameroon, including extensive railways and structures such as the 160-metre single-span railway bridge on the southern branch of the Sanaga River. However, the indigenous peoples proved reluctant to work on these projects, so the Germans instigated a harsh and unpopular system of forced labour. Jesko von Puttkamer was eventually relieved of duty as governor of the colony due to his untoward actions toward the native Cameroonians. In 1911, under the Treaty of Fez that followed the Agadir Crisis, France ceded a nearly 300,000 km2 portion of French Equatorial Africa to Kamerun, which became Neukamerun (New Cameroon), while Germany ceded a smaller area in the north, in present-day Chad, to France.
Shortly after the outbreak of World War I in 1914, the British invaded Cameroon from Nigeria and the French from French Equatorial Africa in the Kamerun campaign. The last German fort in the country surrendered in February 1916. After the Allied victory, the territory was partitioned between the United Kingdom and France, an arrangement formalized on June 28, 1919, with League of Nations mandates (Class B). France gained the larger geographical share, transferred Neukamerun back to neighboring French colonies, and ruled the rest from Yaoundé as Cameroun (French Cameroons). Britain's territory, a strip bordering Nigeria from the sea to Lake Chad with a roughly equal population, was ruled from Lagos as part of Nigeria and known as the Cameroons (British Cameroons).
French Cameroon (1918-1960)
League of Nations Mandate, Free France, and UN Trust Territory
The French administration declined to return much of the property in Cameroon to its prior German owners, reassigning much of it to French companies. This was particularly the case for the Société financière des Caoutchoucs, which obtained plantations put into operation during the German period and became the largest company in French Cameroon. Roads and other infrastructure projects were undertaken with native labor, often in extremely harsh conditions. The Douala-Yaoundé railway line, begun under the German regime, was completed; thousands of workers were forcibly brought to the site and made to work fifty-four hours a week. Workers also suffered from a lack of food and from the massive presence of mosquitoes and the illnesses they carried. In 1925, the mortality rate on the site was 61.7%. The other sites were not as deadly, although working conditions were generally very harsh.
French Cameroon joined Free France in August 1940. The system established by Free France was essentially a military dictatorship: Philippe Leclerc de Hauteclocque established a state of siege throughout the country and abolished almost all public freedoms. The objective was to neutralize any potential feelings of independence or sympathy for the former German colonizer, and indigenous people known for their Germanophilia were executed in public places. In 1945, the country was placed under the supervision of the United Nations, as successor to the League of Nations, which left Cameroon under French control as a UN Trust Territory.
Independence Movement
In 1948, the Union des populations du Cameroun (UPC), a nationalist movement, was founded, with Ruben Um Nyobe as its leader. In May 1955, the arrests of independence activists were followed by riots in several cities across the country. The repression caused from several dozen to several hundred deaths: the French administration officially listed twenty-two, although secret reports acknowledged many more. The UPC was banned and nearly 800 of its activists were arrested, many of whom would be beaten in prison. Because they were wanted by the police, UPC activists took refuge in the forests, where they formed guerilla bands, and in neighboring British Cameroon. The French authorities responded with repression and arbitrary arrests. The party received the support of figures such as Gamal Abdel Nasser and Kwame Nkrumah, and France's actions were denounced at the UN by representatives of countries such as India, Syria, and the Soviet Union.
An insurrection broke out among the Bassa people on 18–19 December 1956. Several dozen anti-UPC figures were murdered or kidnapped, and bridges, telephone lines, and other infrastructure were sabotaged. The French military and native security forces violently repressed these uprisings, which led many native Cameroonians to join the cause of independence and fed a long-running guerilla war. Several UPC militias were formed, though their access to weapons was very limited. Though the UPC was a multi-ethnic movement, the pro-independence cause was seen as particularly strong among the Bamileke and Bassa peoples, and both were targeted by the French for severe repression, including the razing of villages, forced relocations, and indiscriminate killings, in what was sometimes called the Bamileke War or the Cameroon Independence War. Though the uprising was suppressed, guerilla violence and reprisals continued even after independence.
Legislative elections were held on 23 December 1956, and the resulting Assembly passed a decree on 16 April 1957 which made French Cameroon a state. It resumed its former status of associated territory as a member of the French Union. Its inhabitants became Cameroonian citizens, and Cameroonian institutions were created under a parliamentary democracy. On 12 June 1958, the Legislative Assembly of French Cameroon asked the French government to: "Accord independence to the State of Cameroon at the end of the trusteeship. Transfer every competence related to the running of internal affairs of Cameroon to Cameroonians". On 19 October 1958, France recognized the right of its United Nations trust territory to choose independence. On 24 October 1958, the Legislative Assembly of French Cameroon solemnly proclaimed the desire of Cameroonians to see their country accede to full independence on 1 January 1960, and enjoined the government of French Cameroon to ask France to inform the General Assembly of the United Nations and to abrogate the trusteeship accord concomitant with the independence of French Cameroon.
On 12 November 1958, France asked the United Nations to grant French Cameroon independence and end the Trusteeship. On 5 December 1958, the United Nations’ General Assembly took note of the French government's declaration according to which French Cameroon would gain independence on 1 January 1960. On 13 March 1959, the United Nations’ General Assembly resolved that the UN Trusteeship Agreement with France for French Cameroon would end when French Cameroon became independent on 1 January 1960.
British Cameroons (1918-1961)
Nigerian Administration
The British territory was administered as two areas, Northern Cameroons and Southern Cameroons. Northern Cameroons consisted of two non-contiguous sections, divided at a point where the Nigerian and Cameroonian borders met, and was governed as part of the Northern Region of Nigeria. Southern Cameroons was administered as a province of Eastern Nigeria. In British Cameroons, many German administrators were allowed to run the plantations of the southern coastal area after World War I. A British parliamentary publication, Report on the British Sphere of the Cameroons (May 1922, p. 62-8), reported that the German plantations there were "as a whole . . . wonderful examples of industry, based on solid scientific knowledge. The natives have been taught discipline and have come to realize what can be achieved by industry. Large numbers who return to their villages take up cocoa or other cultivation on their own account, thus increasing the general prosperity of the country." In the 1930s, most of the white population still consisted of Germans, most of whom were interned in British camps starting in June 1940. The native population showed little interest in volunteering for the British forces during World War II; only 3,500 men did so.
When the League of Nations ceased to exist in 1946, British Cameroons was reclassified as a UN trust territory, administered through the UN Trusteeship Council, but remained under British control. The United Nations approved the Trusteeship Agreements for British Cameroons to be governed by Britain on June 12, 1946.
Plebiscite and Independence
French Cameroun became independent, as Cameroun or Cameroon, in January 1960, and Nigeria was scheduled for independence later that same year, which raised the question of what to do with the British territory. After some discussion (which had been going on since 1959), a plebiscite was agreed to and held on 11 February 1961. The Muslim-majority Northern area opted for union with Nigeria, and the Southern area voted to join Cameroon.
Independence and the Ahidjo era (1960-1982)
French Cameroon achieved independence on January 1, 1960. After Guinea, it was the second of France's colonies in Sub-Saharan Africa to become independent. On 21 February 1960, the new nation held a constitutional referendum, approving a new constitution. On 5 May 1960, Ahmadou Ahidjo became president. Ahidjo aligned himself closely with France and allowed many French advisers and administrators to stay on as well as leaving most of the country's assets in the hands of French companies.
Union with Southern Cameroons
On 12 February 1961, the results of the Southern Cameroons plebiscite were announced, showing that Southern Cameroons had voted for unification with the Republic of Cameroon, sometimes called "reunification" since both regions had been part of German Kamerun. To negotiate the terms of this union, the Foumban Conference was held on 16–21 July 1961. John Ngu Foncha, leader of the Kamerun National Democratic Party and of the elected government of Southern Cameroons, represented Southern Cameroons, while Ahidjo represented Cameroon. The agreement reached was a new constitution, based heavily on the version adopted in Cameroon earlier that year, but with a federal structure granting the former British Cameroons, now West Cameroon, jurisdiction over certain issues and procedural rights. Buea became the capital of West Cameroon, while Yaoundé doubled as the federal capital and the capital of East Cameroon. Neither side was particularly satisfied, as Ahidjo had wanted a unitary or more centralized state while the West Cameroonians had wanted more explicit protections. On 14 August 1961, the federal constitution was adopted, with Ahidjo as president. Foncha became the prime minister of West Cameroon and vice president of the Federal Republic of Cameroon.
Civil War and repression
The UPC, which had demanded a full break with France and many of whose members espoused Marxist or other leftist ideologies, was not satisfied with Ahidjo's rule and his close cooperation with the French. Its fighters did not lay down their arms at independence and sought to overthrow Ahidjo's regime, which they viewed as too subservient to France. Ahidjo requested continued French assistance in suppressing the UPC rebels in what became known as the Bamileke War, after the region where much of the fighting took place. The UPC was ultimately defeated, with government forces capturing the last important rebel leader in 1970. During the intervening years, Ahidjo used emergency powers granted due to the war, and the fear of further ethnic conflict, to centralize power in himself. He implemented a highly centralized and authoritarian government that used arbitrary police custody, prohibition of meetings and rallies, prior censorship of publications, restriction of freedom of movement through passes and curfews, and a prohibition on trade unions to prevent opposition. Anyone accused of "compromising public safety" was dealt with outside the ordinary criminal process, without the right to a lawyer or any appeal. Sentences of life imprisonment at hard labor or death were numerous, and executions were often public.
In 1966, opposition parties were banned and Cameroon became a one-party state. On 28 March 1970, Ahidjo was re-elected as president with 100% of the vote and 99.4% turnout. Solomon Tandeng Muna became vice president. In 1972, a referendum was held on a new constitution, which replaced the federation between East and West with a unitary state called the United Republic of Cameroon and further expanded the power of the president. Official results claimed 98.2% turnout and 99.99% of votes in favor of the new constitution. Although Ahidjo's rule was authoritarian, he was seen as noticeably lacking in charisma in comparison to many post-colonial African leaders. He did not follow the anti-Western policies pursued by many of those leaders, which helped Cameroon achieve a degree of comparative political stability, retain Western investment, and see fairly steady economic growth.
Discovery of oil
Cameroon became an oil-producing country in 1977. The accounting of oil revenues was totally opaque, and many Cameroonians have since felt that the money was mismanaged or embezzled. Oil remains a primary driver of the economy, though the country is not as oil-dependent as many other producers in the region.
Biya Era (1982-)
On 30 June 1975, Paul Biya, a long-serving bureaucrat and administrator in the Ahidjo government, was appointed Prime Minister. On November 4, 1982, Ahidjo resigned as president, and Biya, as his legal successor, took office. Many observers were surprised, both because Biya is a Christian from the south while Ahidjo was a Muslim from the north, and because Ahidjo was only 59 years old. However, Ahidjo did not resign his role as leader of the governing party, and many speculated that he hoped Biya would be a figurehead, or perhaps even a temporary caretaker, as Ahidjo was rumored to be ill and receiving medical care in France.
Rift and coup attempt
Despite previously good relations, in 1983 a rift became apparent between Biya and Ahidjo. Ahidjo left for France and publicly accused Biya of abuse of power. Ahidjo sought to use his continuing control over the party apparatus to sideline Biya by having the party, not the president, set the government's agenda. However, at the party conference in September, Biya was elected to lead the party and Ahidjo resigned. In January 1984, Biya was elected president of the country, running unopposed. In February, two senior officials were arrested and put on trial, along with Ahidjo, who was tried in absentia.
On April 6, 1984, supporters of Ahidjo attempted a coup d'état, led by the Republican Guard, an elite force recruited by Ahidjo mainly from the north. The Republican Guard under Colonel Saleh Ibrahim took control of the Yaoundé airport, the national radio station, and other key points around the capital. However, Biya held out in the presidential palace with his bodyguard until troops from outside the capital retook control within two days. Ahidjo denied any knowledge of or responsibility for the coup attempt but was widely viewed as being behind it.
Limnic eruptions
On August 15, 1984, Lake Monoun exploded in a limnic eruption that released enormous amounts of carbon dioxide, suffocating 37 people. On August 21, 1986, another limnic eruption at Lake Nyos killed as many as 1,800 people and 3,500 head of livestock. The two disasters are the only recorded instances of limnic eruptions, though geologic and sedimentary evidence indicates that such eruptions may have caused large localized die-offs before historical records began.
Brief political loosening
Biya had initially seemed supportive of loosening restrictions on civil society, but the coup attempt ended any sign of opening up. However, by 1990, pressure from Western governments was mounting as the end of the Cold War made them less tolerant of authoritarian allies. In December 1990, opposition parties were legalized for the first time since 1966. The first multiparty elections were held in 1992 and were hotly contested. Biya won with 40% of the vote against 36% for his closest competitor and 19% for another opposition party. In Parliament, Biya's ruling party won a plurality with 45% of the votes but failed to obtain a majority. The competitiveness of the election was not to Biya's liking, and subsequent elections have been widely criticized by opposition parties and international observers as rigged and suffering from numerous and widespread irregularities. The ruling party has since had no trouble gaining large majorities.
Pressure from Anglophone groups in former British Cameroons resulted in changes to the constitution in 1996, which purported to decentralize power but fell short of Anglophone demands to reestablish the federal structure. As a result of continued opposition, many of the changes adopted in 1996 have never been fully implemented and power remains highly centralized in the President.
Bakassi border conflict
Bakassi is a peninsula on the Gulf of Guinea between the Cross River estuary and the Rio del Rey estuary to the east. The area was administered by Nigeria through the colonial era. However, after independence, efforts to demarcate the border revealed that a 1913 agreement between Britain and Germany had placed Bakassi in German Cameroon, and that the peninsula should accordingly belong to Cameroon. Nigeria pointed to other colonial-era documents and agreements and to its long history of administering the area to object to this conclusion. The competing claims grew contentious after oil was discovered in the region. An agreement between the two countries in 1975 was derailed by a coup in Nigeria. In 1981, clashes between Nigerian and Cameroonian forces resulted in several deaths and nearly led to war between the two nations. The border saw further clashes several times throughout the 1980s. In 1993, the situation worsened, with both countries sending large military contingents to the region and numerous reports of skirmishes and attacks against civilians. On 29 March 1994, Cameroon referred the matter to the International Court of Justice (ICJ).
In October 2002, the International Court of Justice ruled in favor of Cameroon. However, the ruling was resisted by Nigeria. Pressure from the UN and international community and the threat of withdrawal of foreign aid ultimately forced Nigeria to acquiesce and in 2006 the Greentree Agreement laid out a plan for the transfer of administration over two years. The transfer was successfully accomplished but many inhabitants of the peninsula retained their Nigerian citizenship and remain dissatisfied with the transition. Low-level violence continued until it was subsumed in the Anglophone Crisis in 2017.
2008 protests
In February 2008, Cameroon experienced widespread violent unrest as a strike by transport workers opposing high fuel prices and poor working conditions coincided with President Paul Biya's announcement that he wanted the constitution to be amended to remove term limits. Biya had been scheduled to leave power at the end of his term in 2011. After several days of widespread rioting, looting, and reports of gunfire in all the major cities, calm was eventually restored by a crackdown in which thousands were arrested and at least several dozen killed. The government announced lower fuel prices, increased wages for the military and civil servants, and decreased duties on key foodstuffs and construction materials. Many opposition groups reported additional harassment and restrictions on speech, gatherings, and political activity in the wake of the protests. Ultimately, the constitutional term limits were revoked, and Biya was reelected in 2011 in an election criticized by the opposition and international observers as plagued by irregularities and low turnout.
Contemporary issues
Boko Haram
In 2014, the Boko Haram insurgency spread into Cameroon from Nigeria. In May 2014, in the wake of the Chibok schoolgirl kidnapping, Presidents Paul Biya of Cameroon and Idriss Déby of Chad announced they were waging war on Boko Haram, and deployed troops to the Northern Nigerian border. Cameroon announced in September 2018 that Boko Haram had been repelled, but the conflict persists in the northern border areas nonetheless.
Anglophone Crisis
In November 2016, major protests broke out in the Anglophone regions of Cameroon. In September 2017, the protests and the government's response to them escalated into an armed conflict, with separatists declaring the independence of Ambazonia and starting a guerilla war against the Cameroonian Army.
Football
Cameroon has received some international attention following the relative success of its national football team. The team has qualified for the FIFA World Cup eight times, more than any other African team. However, it has only made it out of the group stage once, in 1990, when it became the first African team to reach the quarter-finals of the World Cup. Cameroon has also won the Africa Cup of Nations five times.
See also
Ambazonia
History of Africa
Politics of Cameroon
List of heads of government of Cameroon
List of heads of state of Cameroon
Douala history and timeline
Yaoundé history and timeline
References
Background Note: Cameroon from the U.S. Department of State.
Bullock, A. L. C. (1939). Germany's Colonial Demands, Oxford University Press.
DeLancey, Mark W., and DeLancey, Mark Dike (2000): Historical Dictionary of the Republic of Cameroon (3rd ed.). Lanham, Maryland: The Scarecrow Press.
Schnee, Heinrich (1926). German Colonization, Past and Future: The Truth about the German Colonies. London: George Allen & Unwin.
Notes
Works cited
Further reading
Ardener, Edwin. Kingdom on Mount Cameroon: Studies in the history of the Cameroon Coast, 1500-1970 (Berghahn Books, 1996) online.
Arnold, Stephen. "Preface to a history of Cameroon literature in English." Research in African Literatures 14.4 (1983): 498-515 online
Awasom, Nicodemus Fru. "The reunification question in Cameroon history: was the bride an enthusiastic or a reluctant one?" Africa Today (2000): 91-119. excerpt
DeLancey, Mark Dike, Mark W. DeLancey, and Rebecca Neh Mbuh. Historical dictionary of the Republic of Cameroon (Rowman & Littlefield, 2019). online
Diduk, Susan. "European alcohol, history, and the state in Cameroon." African Studies Review 36.1 (1993): 1-42. doi.org/10.2307/525506
Dupraz, Yannick. "French and British colonial legacies in education: Evidence from the partition of Cameroon." Journal of Economic History 79.3 (2019): 628-668. online
Dze-Ngwa, Willibroad. "The First World War and its aftermath in Cameroon: A historical evaluation of a centenary, 1914-2014." International Journal of Liberal Arts and Social Science 3.2 (2015): 78-90. online
Fowler, Ian. "Kingdoms of the Cameroon Grassfields." Reviews in Anthropology 40.4 (2011): 292-311.
Fowler, Ian. "Tribal and palatine arts of the Cameroon grassfields: elements for a ‘traditional’ regional identity." in Contesting Art (Routledge, 2020) pp. 63-84.
Fowler, Ian, and Verkijika G. Fanso, eds. Encounter, transformation and identity: peoples of the western Cameroon borderlands, 1891-2000 (Berghahn Books, 2009) online.
Fowler, Ian, and David Zeitlyn, eds. African Crossroads: intersections between history and anthropology in Cameroon (Berghahn Books, 1996) online
Geschiere, Peter. "Chiefs and colonial rule in Cameroon: Inventing chieftaincy, French and British style." Africa 63.2 (1993): 151-175.
Mengang, Joseph Mewondo. "Evolution of natural resource policy in Cameroon." Yale F&ES Bulletin 102 (1998): 239-248. online
Njung, George N. "The British Cameroons mandate regime: The roots of the twenty-first-century political crisis in Cameroon." American Historical Review 124.5 (2019): 1715-1722.
External links
Cameroon |
5465 | https://en.wikipedia.org/wiki/Transport%20in%20Cape%20Verde | Transport in Cape Verde | Most transportation in Cape Verde is done by air. There are regular flights between the major islands (Santiago, Sal and São Vicente), with less frequent flights to the other islands. Boat transportation is available, though not widely used nor dependable. In the major cities, public bus transport runs periodically and taxis are common. In smaller towns, there are mostly hiaces and/or taxis.
Types of transport
Railways:
0 km - There are no railways in Cape Verde. There was a short overhead conveyor system for salt from the open salt lake on Sal to the port at Pedra de Lume, and a short rail track to the pier at Santa Maria for similar purposes. Both are now disused.
Roadways:
total:
10,000 km, including unpaved tracks accessible only to four-wheel-drive vehicles
asphalt:
360 km
cobbled:
5,000 km (2007 estimates)
The majority of Cape Verdean roads are paved with cobblestones cut from local basalt. Recent international aid has allowed the asphalting of many roads, including the entire highways between Praia and Tarrafal, between Praia and Cidade Velha, and between Praia, Pedra Badejo, and Calheta de São Miguel on Santiago, as well as the dual carriageway between Santa Maria and Espargos on Sal. A new ring road has been built from Praia International Airport around the city of Praia.
The primary method of intercity and inter-village transport for Cape Verdeans is by aluguer shared taxis, commonly called yasis, a name derived from HiAce, because the Toyota HiAce is the most common shared-taxi model. Few Cape Verdeans own cars, but ownership is rising rapidly with increasing prosperity, particularly on Santiago Island.
Ports and harbours:
Mindelo on São Vicente is the main port for cruise liners and the terminus for the ferry service to Santo Antão. A marina for yachts is undergoing enlargement (2007). Praia on Santiago is a main hub for ferry service to other islands. Palmeira on Sal supplies fuel for the main airport on the island, Amílcar Cabral International Airport, and is important for hotel construction on the island. Porto Novo on Santo Antão is the only source for imports and exports of produce from the island as well as passenger traffic since the closure of the airstrip at Ponta do Sol. There are smaller harbours, essentially single jetties at Tarrafal on São Nicolau, Sal Rei on Boa Vista, Vila do Maio (Porto Inglês) on Maio, São Filipe on Fogo and Furna on Brava. These are terminals for inter island ferry service carrying freight and passengers. There are small harbours, with protective breakwaters, used by fishing boats at Tarrafal on Santiago, Pedra de Lume on Sal and Ponta do Sol on Santo Antão. Some offer suitable protection for small yachts. The pier at Santa Maria on Sal used by both fishing and dive boats has been rehabilitated.
Merchant marine:
total:
10
ships by type:
chemical tanker 1, trawler/cargo ship 5, passenger/cargo 5
foreign-owned: 2 (Spain 1, UK 1) (2008)
Airports
7 operational in 2014: 4 international and 3 domestic.
2 non-operational, one on Brava and the other on Santo Antão, closed for safety reasons.
Over 3,047 m: 1
1,524 to 2,437 m: 3
914 to 1,400 m: 3
International Airports:
Amílcar Cabral International Airport, Sal Island. Opened and began operating international flights in 1939. Named Sal International Airport until 1975.
Nelson Mandela International Airport, Santiago Island. Opened and began operating international flights in 2005. Named Praia International Airport from 2005 until 2013. It replaced the Francisco Mendes International Airport, which served the island from 1961 to 2005 and is now closed.
Aristides Pereira International Airport, Boa Vista Island. The airport was paved and began operating international traffic in 2007. Named Rabil Airport until 2011.
Cesária Évora Airport, São Vicente Island. Opened in 1960 and became an international airport in 2009. Named São Pedro Airport until 2011.
International passenger traffic is forecast to exceed 250,000 passengers for 2007. Annual growth, mostly of tourists from Europe, is anticipated to continue at just under 20%. (Source: ASA, the Cape Verde airport authority)
Main Airlines serving the country:
TACV Cabo Verde Airlines
Cabo Verde Express (Cape Verde Express)
Halcyonair Cabo Verde Airways - dissolved in 2013
TAP Portugal
TACV flies daily international flights from Lisbon to Sal or Praia and once a week from Amsterdam, Munich, Paris, Las Palmas, Fortaleza and Boston to one or other of the international airports. It operates inter-island flights, at frequencies varying from daily to thrice weekly, to each of the seven islands with operational airports and also to Dakar. Its fleet consists of two Boeing 757s and three ATR 42s, the latter since replaced by ATR 72s. It is currently (2010) undergoing privatization at the insistence of the World Bank.
Road network
The road network of Cape Verde is managed by the national government (Instituto de Estradas) and by the municipalities. The total length of the road network is 1,650 km, of which 1,113 km are national roads and 537 km are municipal roads. Of the national roads, 36% are asphalted.
Air Services
TACV Cabo Verde Airlines, the national airline, flies weekly from Boston Logan International Airport to Praia International Airport at Praia, on Santiago island. Currently (2007) these flights are on Wednesdays, but schedules vary and are subject to change. It also has flights four times weekly from Lisbon to Francisco Mendes (the recently opened airport at Praia on Santiago island) and four times weekly from Lisbon to Amílcar Cabral International Airport on Sal island. There is a flight on Mondays from Paris-Charles de Gaulle Airport to Sal and on Thursdays from Amsterdam Schiphol Airport via Munich-Riem Airport to Sal. Return flights are just after midnight on the same day.
From Las Palmas in the Canary Islands, Spain there are night flights on Mondays and Thursdays, with departures just after midnight. Return flights are the previous day. There is a service from Praia to Fortaleza, Brazil on Mondays and Thursdays departing early evening and returning at night. All international flights are operated by Boeing 757 aircraft. Most international flights are subject to delay and sometimes cancellation.
TAP Air Portugal, the Portuguese national carrier, operates a daily service from Lisbon to Sal with late evening departures, returning after midnight and reaching Lisbon in the early morning. Most flights are delayed, and onward connections from Lisbon can be missed as a result. TAP and other European carriers provide connections with most European capitals, enabling same-day through flights.
From the UK, direct routes by Astraeus from London Gatwick and Manchester to Sal ceased in April 2008; their website has not taken reservations since May 2008. TACV Cabo Verde Airlines opened a route from London Stansted in October 2008, though it was rumoured that flights were being cancelled due to minimal take-up; with effect from May 2008, TACV ceased flights from London Gatwick. Bookings can be made via the Fly TACV website or through the UK TACV office on 0870 774 7338.
Thomson Airways have opened additional routes from London Gatwick and Manchester on Mondays and Fridays. Various options and bookings can be made via Thomsonfly to both Sal and Boa Vista.
Hamburg International provides a charter service from Hamburg via Düsseldorf on Thursdays and Condor operates from Frankfurt Rhein Main on Tuesdays returning on Wednesday.
Neos operates charter flights from Milan Malpensa, Rome-Fiumicino and Bologna on Wednesdays.
TACV Cabo Verde Airlines, the national airline, has been a monopoly carrier within the island archipelago (2007). It operates services from the main hub airports at Sal and Santiago to Boa Vista, Fogo, Maio, São Nicolau and São Vicente at frequencies ranging from thrice weekly to thrice daily. The airstrips on the remaining islands of Brava and Santo Antão are closed (2007), so those islands can only be reached by ferry services from other islands.
TACV does not publish timetables; flight times are listed on departure boards. Tickets can be bought at the TACV shop at each airport by queuing and paying in cash (euros or escudos). Flights are often delayed and sometimes cancelled due to weather or operational conditions. Services are operated by ATR 42 turboprop aircraft, which are being replaced (2007) by the enlarged ATR 72. Inter-island tariffs vary depending on the distance but are generally around €180 return. Air passes for multiple flights are obtainable when buying an international ticket on TACV.
Halcyonair, a private carrier with Portuguese and Cape Verdean shareholders, commenced operations on inter-island flights during 2007. It has obtained the necessary licensing from the Cape Verde Government.
Travel within the islands
The frequency and regularity of publicly accessible ground transportation services vary between the islands and municipalities, but there are some common features found throughout Cape Verde. The primary mode of transportation between municipalities is via shared minibuses commonly referred to as "yasis", after the Toyota HiAce models that make up the majority of the minibuses in service. While 12- to 14-passenger "yasi" class minibuses connect the major municipalities at their end points, modified pickup trucks with partially covered cabs and benches installed in the back transport passengers along shorter distances through minor municipalities and the rural areas in between. These modified pickup trucks are referred to as "hilux" after the Toyota Hilux, the common model adapted. Notably, both "yasi" and "hilux" transportation will stop and pick up any passenger who hails them, as well as drop off any passenger who requests to disembark at any point. Intermunicipality transportation licenses are granted on an individual basis to each vehicle, in the name of the owner, by the Direcção Geral dos Transportes e Rodoviários (General Directorate of Transport and Roads).
With the exception of the Praia ⇄ Assomada route on Santiago, all yasi and hilux class vehicles licensed to carry passengers act as individual freelancers, not collectively. As such, they do not adhere to scheduling, and have no obligation to provide service. This includes many vehicles running the same route, owned by the same person.
Brava
Hiluxes and yasis connect Furna and Nova Sintra, mostly timed around boat arrivals; the same vehicles also connect other parts of the island.
Fogo
Fogo has many yasis running the routes between São Filipe and Mosteiros and between São Filipe and Chã das Caldeiras. Unlike on many other islands, these buses depart at roughly the same time every day, and despite the presence of multiple vehicles running each route, passengers can find themselves stranded if they do not board a vehicle during the limited departure window. Yasis tend to depart Mosteiros for São Filipe around 6am, and tend to depart São Filipe for Chã around noon.
São Vicente
Mindelo has a municipal bus service run by the company Transcor. Yasi and hilux transportation connects Mindelo with other parts of the island. Other transportation companies, especially minibus operators, include Transporte Morabeza, Transporte Alegría, Amizade, Sotral, and Automindelo.
Santiago
In Praia, Maura Company and Sol Atlántico are the only two companies that have been granted municipal bus service licenses. Over the past decade, Maura Company, which had previously been the dominant bus company, has retired the majority of its buses, while many that continue to run are in a state of disrepair due to financial difficulties. Sol Atlántico, in contrast, has greatly increased its fleet of buses, adding several new high-capacity buses in 2015. Municipal bus prices are regulated at 44 escudos per ride. Transfers are not allowed. Bus schedules do not exist, but buses start running around 6am and stop around 9pm. Bus stops exist, and are frequently infiltrated by minibuses (also called "yasis") and by both licensed and unlicensed "clan" taxis illegally running municipal bus routes without a municipal license. No other city on Santiago has a municipal bus service. The government of Assomada has solicited requests for a bus service, but so far none has been approved, and there are no short-term plans for any bus company to enter the municipal market.
Transportation between the municipalities and rural areas is handled predominantly by yasi and hilux transportation. Rates are not fixed and range from 20 escudos for short trips between rural areas up to 500 escudos for Praia ⇄ Tarrafal. Some commonly accepted prices charged between municipalities are 100 escudos for Praia ⇄ São Domingos, 150 escudos for Praia ⇄ Orgãos, and 250 escudos for Praia ⇄ Assomada. Some of the yasis start collecting passengers before dawn to transport between Praia and Assomada and Praia and Pedra Badejo, and the last departures usually occur between 7 and 8pm. These vehicles do not maintain a schedule (with the exception of two early morning vehicles departing Assomada at 5:40 and 6:20 headed to Praia), instead choosing to drive around in circles within the urban centers of Praia, Assomada, and Pedra Badejo to pick up passengers until they are full, or over capacity (14 passengers is the legal limit for an actual Toyota HiAce), at which point they depart. Yasi drivers employ helpers to hawk out the window the destination of the yasi, as well as the obligatory "cheio", meaning full, with little regard for the number of people aboard. Helpers and drivers sometimes use shills (fake passengers) to overcome the common chicken and egg problem wherein passengers will not board an empty (or low passenger) minibus in an urban center because they know it will not depart until it is full. They will board a nearly-full (or over capacity) bus because they know it is likely to depart soon.
In 2015, a project called EcobusCV started running a fleet of dual-fuel (waste vegetable oil/diesel) modified Toyota HiAce minibuses using a scheduled service model between Praia and Assomada. Buses depart once per hour, on the hour, from designated bus stops in Praia (at Igreja Nova Apostólica in Fazenda) and in Assomada (in front of the courthouse). The current departure schedule as of September 15 is one departure per hour starting at 7am, with the last departure at 6pm. EcobusCV plans to expand to departures at 30-minute intervals before the end of 2015. EcobusCV has instituted aggressive, transparent pricing that undercuts the informal, generally accepted prices between municipalities, which has started to cause freelance yasis to alter their pricing.
Taxis are common in Praia and Assomada. Taxis based in Praia are painted beige, while taxis based in Assomada are painted white. They can carry passengers between municipalities, but they are prohibited from circulating and picking up passengers outside of their base city, though they will usually pick up passengers who hail them on their way back to their home city. Taximeters are installed in most legal taxis, but many are not functional and they are almost never used, because the generally accepted rates are cheaper than what the taximeter would usually show. In Praia there is a large number of "clan", or clandestine, taxis that operate without paying for a license. Most people identify Toyota Corolla hatchbacks as clans, and they are frequently hailed. While the minimum taximeter price is officially 80 escudos, in practice 100 escudos is the minimum a person pays to board a taxi. Taxi rates in Praia generally go up to 250 escudos from the furthest points of the city to Plateau, and cross-town fares cap out at 400 escudos during the day. Rates generally go up by 50 escudos after 10pm, though for longer distances some drivers will try to charge an extra 100. An exception to this rule is the airport: airport rates generally range from 500 to 1,000 escudos depending on the starting place or destination, and can go up by several hundred at night.
Sal
Sal has unscheduled yasi service between Espargos and Santa Maria, with frequent departures in the morning from Espargos, where most locals live, to Santa Maria, where most locals work, and vice versa in the afternoon.
Inter-Island ferries in Cape Verde
Several ferries operate between the islands with much lower fares than the airlines. These are provided by various independent shipping companies, and their conditions and seaworthiness vary. Many services depart from Praia at about midnight, arriving in the outlying islands at breakfast time. Return trips often depart around mid-day. Service schedules are approximate, and delays or cancellations are common. Conditions can be very crowded, and it is advisable to pre-book a cabin for all but the shortest of trips. Passages can be very rough in winter.
Departure days vary according to the season and are frequently altered. Enquire at the shipping offices in Praia and other Cape Verdean ports.
In early 2011, the Kriola, the first of a proposed fleet of ferryboats belonging to the company Cabo Verde Fast Ferry (CVFF), arrived in Praia directly from Singapore. It was custom-built there by the Dutch shipbuilding company Damen Group. The Kriola operates regular service among the Sotavento islands of Brava, Fogo, and Santiago.
Ferry routes
Boa Vista (Sal Rei)–Maio (Cidade do Maio)
Fogo (São Filipe-Vale de Cavaleiros)–Brava (Furna)
Maio (Cidade do Maio)–Santiago (Porto Praia)
Sal (Palmeira)–Boa Vista (Sal Rei)
Santiago (Porto Praia)–Fogo (São Filipe-Vale de Cavaleiros)
Santiago (Porto Praia)–São Vicente (Mindelo-Porto Grande) - longest ferry route
Santiago (Porto Praia)–Brava (Furna)
Santo Antão (Porto Novo)–São Vicente (Mindelo-Porto Grande)
São Nicolau (Tarrafal de São Nicolau)–Sal (Palmeira)
São Nicolau (Preguiça)–Sal (Palmeira)
São Vicente (Porto Grande)–São Nicolau (Tarrafal de São Nicolau)
São Vicente (Porto Grande)–São Nicolau (Preguiça)
Lesser ferry routes:
Within Santo Antão: Tarrafal de Monte Trigo–Monte Trigo (45 min) - shortest ferry route
Within São Nicolau: Preguiça–Carriçal
References
Cape Verde Info UK 2007
External links
Cape Verde Information Cape Verde Expatriate Residents Collaborative Pages
Cape Verde Information UK
Cape Verde Information - Cape Verde Transport Information
Cape Verde Travel Information - Cape Verde Travel |
5468 | https://en.wikipedia.org/wiki/Cayman%20Islands | Cayman Islands | The Cayman Islands is a self-governing British Overseas Territory, and the largest by population. The territory comprises the three islands of Grand Cayman, Cayman Brac and Little Cayman, which are located south of Cuba and north-east of Honduras, between Jamaica and Mexico's Yucatán Peninsula. The capital city is George Town on Grand Cayman, which is the most populous of the three islands.
The Cayman Islands is considered to be part of the geographic Western Caribbean zone as well as the Greater Antilles. The territory is a major offshore financial centre for international businesses and wealthy individuals, largely as a result of the state not charging taxes on any income earned or stored.
With a GDP per capita of $91,392, the Cayman Islands has the highest standard of living in the Caribbean, and one of the highest in the world. Immigrants from over 130 countries and territories reside in the Cayman Islands.
History
It is likely that the Cayman Islands were first visited by the Amerindians, the Indigenous peoples of the Caribbean. The Cayman Islands got their name from the word for crocodile (caiman) in the language of the Arawak-Taíno people. It is believed that the first European to sight the islands was Christopher Columbus, on 10 May 1503, during his final voyage to the Americas. He named them "Las Tortugas", after the large number of turtles found there (which were soon hunted to near-extinction).
However, in succeeding decades, the islands began to be referred to as "Caimanas" or "Caymanes", after the caimans present there. No immediate colonisation followed Columbus's sighting, but a variety of settlers from various backgrounds eventually arrived, including pirates, shipwrecked sailors, and deserters from Oliver Cromwell's army in Jamaica. Sir Francis Drake briefly visited the islands in 1586.
The first recorded permanent inhabitant, Isaac Bodden, was born on Grand Cayman around 1661. He was the grandson of an original settler named Bodden, probably one of Oliver Cromwell's soldiers involved in the capture of Jamaica from Spain in 1655.
England took formal control of the Cayman Islands, along with Jamaica, as a result of the Treaty of Madrid of 1670. That same year saw an attack on a turtle fishing settlement on Little Cayman by the Spanish under Manuel Ribeiro Pardal. Following several unsuccessful attempts at settlement in what had by then become a haven for pirates, a permanent English-speaking population in the islands dates from the 1730s. With settlement, after the first royal land grant by the Governor of Jamaica in 1734, came the introduction of slaves. Many were purchased and brought to the islands from Africa. That has resulted in the majority of native Caymanians being of African or English descent. The first census taken in the islands, in 1802, showed the population on Grand Cayman to be 933, with 545 of those inhabitants being slaves. Slavery was abolished in the Cayman Islands in 1833, following the passing of the Slavery Abolition Act by the British Parliament. At the time of abolition, there were over 950 slaves of African ancestry, owned by 116 families.
On 22 June 1863, the Cayman Islands was officially declared and administered as a dependency of the Crown Colony of Jamaica. The islands continued to be governed as part of the Colony of Jamaica until 1962, when they became a separate Crown colony, after Jamaica became an independent Commonwealth realm.
On 8 February 1794, the Caymanians rescued the crews of a group of ten merchant ships, including HMS Convert, an incident that has since become known as the Wreck of the Ten Sail. The ships had struck a reef and run aground during rough seas. Legend has it that King George III rewarded the islanders for their generosity with a promise never to introduce taxes, because one of the ships carried a member of the King's family. Despite the legend, the story is not true.
In the 1950s, tourism began to flourish, following the opening of Owen Roberts International Airport (ORIA), along with a bank and several hotels, as well as the introduction of a number of scheduled flights and cruise stop-overs. Politically, the Cayman Islands were an internally self-governing territory of Jamaica from 1958 to 1962, but they reverted to direct British rule following the independence of Jamaica in 1962. In 1972, a large degree of internal autonomy was granted by a new constitution, with further revisions being made in 1994. The Cayman Islands government focused on boosting the territory's economy via tourism and the attraction of off-shore finance, both of which mushroomed from the 1970s onwards. Historically, the Cayman Islands has been a tax-exempt destination, and the government has always relied on indirect and not direct taxes. The territory has never levied income tax, capital gains tax, or any wealth tax, making it a popular tax haven.
In April 1986, the first marine protected areas were designated in the Cayman Islands, making them the first islands in the Caribbean to protect their fragile marine life.
The constitution was further modified in 2001 and 2009, codifying various aspects of human rights legislation.
On 11 September 2004, the island of Grand Cayman, which lies largely unprotected at sea level, was battered by Hurricane Ivan, the worst hurricane to hit the islands in 86 years. It created a storm surge which flooded many areas of Grand Cayman. An estimated 83% of the dwellings on the island were damaged, with 4% requiring complete reconstruction. A reported 70% of all dwellings suffered severe damage from flooding or wind. Another 26% sustained minor damage from partial roof removal, low levels of flooding, or impact with floating or wind-driven hurricane debris. Power, water and communications were disrupted for months in some areas. Within two years, a major rebuilding program on Grand Cayman brought its infrastructure almost back to its pre-hurricane condition. Due to the tropical location of the islands, more hurricanes or tropical systems have affected the Cayman Islands than any other region in the Atlantic basin; on average, the territory has been brushed or directly hit every 2.23 years.
Geography
The islands are in the western Caribbean Sea and are the peaks of an undersea mountain range called the Cayman Ridge (or Cayman Rise). This ridge flanks the Cayman Trough, which lies to the south and is the deepest part of the Caribbean Sea. The islands lie in the northwest of the Caribbean Sea, east of Quintana Roo and Yucatán State in Mexico, northeast of Costa Rica, north of Panama, south of Cuba and west of Jamaica; they are situated roughly south of Miami, east of Mexico, south of Cuba, and northwest of Jamaica. Grand Cayman is by far the largest of the three islands. Its two "sister islands", Cayman Brac and Little Cayman, lie to the east-northeast of Grand Cayman and are considerably smaller. The nearest land mass from Grand Cayman is the Canarreos Archipelago (about 240 km or 150 miles away), whereas the nearest from the easternmost island, Cayman Brac, is the Jardines de la Reina archipelago (about 160 km or 100 miles away) – both of which are part of Cuba.
All three islands were formed by large coral heads covering submerged ice-age peaks of western extensions of the Cuban Sierra Maestra range and are mostly flat. One notable exception is The Bluff on the eastern part of Cayman Brac, which rises to about 43 m (141 ft) above sea level, the highest point on the islands.
The terrain is mostly a low-lying limestone base surrounded by coral reefs. The portions of prehistoric coral reef that line the coastline and protrude from the water are referred to as ironshore.
Fauna
The mammalian species in the Cayman Islands include the introduced Central American agouti and eight species of bats. At least three now-extinct native rodent species were present until the discovery of the islands by Europeans. Marine life around the island of Grand Cayman includes tarpon, silversides (Atheriniformes), French angelfish (Pomacanthus paru), and giant barrel sponges. A number of cetaceans are found in offshore waters, including the goose-beaked whale (Ziphius cavirostris), Blainville's beaked whale (Mesoplodon densirostris) and the sperm whale (Physeter macrocephalus).
Cayman avian fauna includes two endemic subspecies of Amazona parrots: Amazona leucocephala hesterna, or Cuban amazon, presently restricted to the island of Cayman Brac but formerly also found on Little Cayman, and Amazona leucocephala caymanensis, or Grand Cayman parrot, which is native to the Cayman Islands, forested areas of Cuba, and the Isla de la Juventud. Little Cayman and Cayman Brac are also home to red-footed and brown boobies. Although the barn owl (Tyto alba) occurs on all three of the islands, it is not commonplace. The Cayman Islands also possess five endemic subspecies of butterflies, which can be seen at the Queen Elizabeth II Botanic Park on Grand Cayman.
Among other notable fauna at the Queen Elizabeth II Botanic Park is the critically endangered blue iguana, also known as the Grand Cayman iguana (Cyclura lewisi). The blue iguana is endemic to Grand Cayman, where rocky, sunlit, open areas near the island's shores provide favourable sites for laying eggs. Nevertheless, habitat destruction and invasive mammalian predators remain the primary reasons that blue iguana hatchlings do not survive naturally.
The Cuban crocodile (Crocodylus rhombifer) once inhabited the islands. The name "Cayman" is derived from a Carib word for various crocodilians.
Climate
The Cayman Islands has a tropical wet and dry climate, with a wet season from May to October, and a dry season that runs from November to April. Seasonally, there is little temperature change.
A major natural hazard is the tropical cyclones that form during the Atlantic hurricane season from June to November.
On 11 and 12 September 2004, Hurricane Ivan struck the Cayman Islands. The storm resulted in two deaths and caused significant damage to the infrastructure on the islands. The total economic impact of the storm was estimated to be $3.4 billion.
Demographics
Demographics & immigration
While there are a large number of generational Caymanians, many Caymanians today have roots in almost every part of the world. Like countries such as the United States, the Cayman Islands is a melting pot, with citizens of every background: 52.5% of the population is non-Caymanian, while 47.5% is Caymanian.
According to the Economics and Statistics Office of the Government of the Cayman Islands, the Cayman Islands had a population of 71,432 at the census of 10 October 2021, but was estimated by the same office to have risen to 81,546 as of December 2022, making it the most populous British Overseas Territory. The 2021 census revealed that 56% of the workforce is non-Caymanian, the first time in the territory's history that the number of working immigrants has overtaken the number of working Caymanians. Most Caymanians are of mixed African and European ancestry. Slavery was not common throughout the islands, and once it was abolished, black and white communities appeared to integrate more readily than in other Caribbean nations and territories, resulting in a more mixed-race population.
The country's demographics are changing rapidly. Immigration plays a huge role; however, the changing age demographics have sounded alarm bells in the most recent census. Compared with the 2010 census, the 2021 census showed that 36% of Cayman's population growth was among seniors over the age of 65, while only 8% growth was recorded in groups under 15 years of age. This is due to extremely low birth rates among Caymanians, which effectively forces the government to seek workers from overseas to sustain the country's economic success. This has raised concerns among many young Caymanians, however, who worry about the workforce becoming increasingly competitive with the influx of workers, as well as rent and property prices going up.
Because the population has skyrocketed over the last decade, the Premier of the Cayman Islands, Wayne Panton, has stressed that the islands need more careful and managed growth. Many have worried that the country's infrastructure and services cannot cope with the surging population. It is believed that given current trends, the population will reach 100,000 before 2030.
District populations
According to the Economics and Statistics Office, the final result of the 10 October 2021 census was 71,432; however, according to a late 2022 population report by the same body, the estimated population at the end of 2022 was 81,546.
Religion
The predominant religion in the Cayman Islands is Christianity (66.9%, down from over 80% in 2010). Denominations practised include the United Church, Church of God, Anglican Church, Baptist Church, Catholic Church, Seventh-day Adventist Church, and Pentecostal Church. Roman Catholic churches include St. Ignatius Church, George Town; Christ the Redeemer Church, West Bay; and Stella Maris Church, Cayman Brac. Many citizens are deeply religious and regularly attend church; however, the share of residents identifying with no religion has risen since 2000, reaching 16.7% in the 2021 census. Ports are closed on Sundays and Christian holidays. There is also an active synagogue and Jewish community on the island, as well as places of worship in George Town for Jehovah's Witnesses and followers of the Bahá'í faith.
In 2020, there were an estimated 121 Muslims in the Cayman Islands.
Languages
The official language of the Cayman Islands is English (90%). Islanders' accents retain elements passed down from English, Scottish, and Welsh settlers (among others) in a language variety known as Cayman Creole. Caymanians of Jamaican origin speak their own vernacular (see Jamaican Creole and Jamaican English). It is also quite common to hear residents converse in Spanish, as many have relocated from Latin America to work and live on Grand Cayman; the Latin American nations with the greatest representation are Honduras, Cuba, Colombia, Nicaragua, and the Dominican Republic. Spanish speakers comprise approximately 10–12% of the population and predominantly speak Caribbean varieties of Spanish. Tagalog is spoken by about 8% of inhabitants, most of whom are Filipino residents on work permits.
Economy
According to Forbes, the Cayman Islands has the 7th strongest currency in the world (the CI dollar or KYD), with US$1.00 equivalent to CI$0.80.
The economy of the Cayman Islands is dominated by financial services and tourism, together accounting for 50–60% of Gross Domestic Product. The nation's low tax rates have led to it being used as a tax haven for corporations; there are 100,000 companies registered in the Cayman Islands, more than the population itself. The Cayman Islands have come under criticism for allegations of money laundering and other financial crimes, including a 2016 statement by then US president Barack Obama that described a particular building which was the registered address of over 12,000 corporations as a "tax scam".
The Cayman Islands has a relatively low unemployment rate of about 4.24% as of 2015, down from the 4.7% recorded in 2014.
With an average income of US$71,549, Caymanians have the highest standard of living in the Caribbean. According to the CIA World Factbook, the Cayman Islands' real GDP per capita is the tenth highest in the world, although the CIA's data for the Cayman Islands dates to 2018 and is likely lower than present-day values. The territory prints its own currency, the Cayman Islands dollar (KYD), which is pegged to the US dollar at US$1.227 to CI$1.00. However, in many retail stores throughout the islands, the KYD is typically traded at US$1.25.
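As a rough illustration of how the two quoted rates differ in practice, the minimal sketch below converts a US-dollar amount into KYD at both the official peg and the typical retail rate; the US$100 purchase amount is a hypothetical example, not a figure from the text.

```python
# Illustrative only: converts a hypothetical US$100 purchase into Cayman Islands
# dollars (KYD) under the official peg (US$1.227 per CI$1.00) and under the
# US$1.25 rate commonly applied in retail stores, as quoted above.

OFFICIAL_PEG_USD_PER_KYD = 1.227   # official pegged rate (USD per CI$1.00)
RETAIL_USD_PER_KYD = 1.25          # typical rate used in retail stores

def usd_to_kyd(amount_usd: float, usd_per_kyd: float) -> float:
    """Convert a US-dollar amount into KYD at the given USD-per-KYD rate."""
    return amount_usd / usd_per_kyd

purchase_usd = 100.00  # hypothetical purchase amount
print(f"CI${usd_to_kyd(purchase_usd, OFFICIAL_PEG_USD_PER_KYD):.2f} at the official peg")
print(f"CI${usd_to_kyd(purchase_usd, RETAIL_USD_PER_KYD):.2f} at the typical retail rate")
```

At the official peg a US$100 purchase works out to roughly CI$81.50, while at the retail convention it works out to CI$80.00, which is why prices quoted in the two currencies do not always convert identically from shop to shop.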
The Cayman Islands have a high cost of living, even when compared with the UK and US. For example, a loaf of multigrain bread costs $5.49 (KYD), while a similar loaf sells for $2.47 (KYD) in the US and $1.36 (KYD) in the UK.
The minimum wage (as of February 2021) is $6 KYD for standard positions, and $4.50 for workers in the service industry, where tips supplement income. This contributes to wealth disparity. A small segment of the population lives in condemned properties lacking power and running water.
The government has established a Needs Assessment Unit to relieve poverty in the islands. Local charities, including Cayman's Acts of Random Kindness (ARK) also provide assistance.
The government's primary source of income is indirect taxation: there is no income tax, capital gains tax, or corporation tax. An import duty of 5% to 22% (automobiles 29.5% to 100%) is levied against goods imported into the islands. Few goods are exempt; notable exemptions include books, cameras, and perfume.
Tourism
One of Grand Cayman's main attractions is Seven Mile Beach, the site of a number of the island's hotels and resorts. Named one of the "Ultimate Beaches" by Caribbean Travel and Life, Seven Mile Beach is a public beach on the western shore of Grand Cayman; due to erosion over the years, its actual length has decreased to about 5.5 miles. Historical sites in Grand Cayman, such as Pedro St. James Castle in Savannah, also attract visitors.
All three islands offer scuba diving, and the Cayman Islands are home to several snorkelling locations where tourists can swim with stingrays. The most popular place to do this is Stingray City, Grand Cayman. Stingray City, a top attraction on Grand Cayman, originated in the 1980s when divers began feeding squid to stingrays; the stingrays came to associate the sound of boat motors with food and now visit the area year-round.
There are two shipwrecks off the shores of Cayman Brac, including the MV Captain Keith Tibbetts; Grand Cayman also has several shipwrecks off its shores, including one deliberately sunk as an artificial reef. On 30 September 1994, the USS Kittiwake was decommissioned and struck from the Naval Vessel Register. In November 2008 her ownership was transferred for an undisclosed amount to the government of the Cayman Islands, which had decided to sink the Kittiwake in June 2009 to form a new artificial reef off Seven Mile Beach, Grand Cayman. Following several delays, the ship was finally scuttled according to plan on 5 January 2011. The Kittiwake has since become a thriving environment for marine life. Visitors are not permitted to remove anything from the wreck, but there is much to see: each of the ship's five decks offers squirrelfish, rare sponges, Goliath groupers, urchins, and more, and both experienced and beginner divers are invited to swim around the Kittiwake.

Pirates Week is an annual 11-day November festival started in 1977 by the then-Minister of Tourism Jim Bodden to boost tourism during the country's slow tourism season.
Other Grand Cayman tourist attractions include the ironshore landscape of Hell; the marine theme park "Cayman Turtle Centre: Island Wildlife Encounter", previously known as "Boatswain's Beach"; the production of gourmet sea salt; and the Mastic Trail, a hiking trail through the forests in the centre of the island. The National Trust for the Cayman Islands provides guided tours weekly on the Mastic Trail and other locations.
Another attraction to visit on Grand Cayman is the Observation Tower, located in Camana Bay. The Observation Tower is 75 feet tall and provides 360-degree views across Seven Mile Beach, George Town, the North Sound, and beyond. It is free to the public, and climbing the tower has become a popular activity in the Cayman Islands.
Points of interest include the East End Light (sometimes called Gorling Bluff Light), a lighthouse at the east end of Grand Cayman island. The lighthouse is the centrepiece of East End Lighthouse Park, managed by the National Trust for the Cayman Islands; the first navigational aid on the site was also the first lighthouse in the Cayman Islands.
Shipping
Some 360 commercial vessels and 1,674 pleasure craft were registered in the Cayman Islands, totalling 4.3 million GT.
Labour
The Cayman Islands has a population of about 69,656 and therefore a limited workforce, so work permits may be granted to foreigners. On average, more than 24,000 foreigners have held valid work permits.
Work permits for non-citizens
To work in the Cayman Islands as a non-citizen, a work permit is required. This involves passing a police background check and a health check. A prospective immigrant worker will not be granted a permit unless certain medical conditions are met, including testing negative for syphilis and HIV. A permit may also be granted to individuals for special categories of work.
A foreigner must first have a job to move to the Cayman Islands. The employer applies and pays for the work permit. Work permits are not granted to foreigners who are in the Cayman Islands (unless it is a renewal). The Cayman Islands Immigration Department requires foreigners to remain out of the country until their work permit has been approved.
The Cayman Islands presently imposes a controversial "rollover" policy on expatriate workers who require a work permit. Non-Caymanians are only permitted to reside and work within the territory for a maximum of nine years unless they satisfy the criteria for key employees. Non-Caymanians who are "rolled over" may return to work for additional nine-year periods, subject to a one-year gap between their periods of work. The policy has been the subject of some controversy in the press. Law firms have been particularly upset by the recruitment difficulties it has caused, and other, less well-remunerated employment sectors have been affected as well. Diving instructors have expressed concerns about safety, and realtors have also voiced concerns. Others support the rollover as necessary to protect Caymanian identity in the face of immigration of large numbers of expatriate workers.
Concerns have been expressed that, in the long term, the policy may damage the preeminence of the Cayman Islands as an offshore financial centre by making it difficult to recruit and retain experienced staff from onshore financial centres. Government employees are no longer exempt from this "rollover" policy, according to a report in a local newspaper. The governor has used his constitutional powers, which give him absolute control over the disposition of civil service employees, to determine which expatriate civil servants are dismissed after seven years' service and which are not.
This policy is incorporated in the Immigration Law (2003 revision), written by the United Democratic Party government, and subsequently enforced by the People's Progressive Movement Party government. Both governments agree to the term limits on foreign workers, and the majority of Caymanians also agree it is necessary to protect local culture and heritage from being eroded by a large number of foreigners gaining residency and citizenship.
CARICOM Single Market Economy
In recognition of the CARICOM (Free Movement) Skilled Persons Act, which came into effect in July 1997 in some CARICOM countries such as Jamaica and has since been adopted in others such as Trinidad and Tobago, CARICOM nationals who hold a "Certificate of Recognition of Caribbean Community Skilled Person" may be allowed to work in the Cayman Islands under normal working conditions.
Government
The Cayman Islands are a British overseas territory, listed by the UN Special Committee of 24 as one of the 16 non-self-governing territories. The current Constitution, incorporating a Bill of Rights, was ordained by a statutory instrument of the United Kingdom in 2009. A 19-seat Parliament (plus two non-voting members appointed by the Governor, bringing the total to 21) is elected by the people every four years to handle domestic affairs. Of the elected Members of Parliament (MPs), seven are chosen to serve as government Ministers in a Cabinet headed by the Governor. The Premier is appointed by the Governor.
A Governor is appointed by the King of the United Kingdom on the advice of the British Government to represent the monarch. Governors can exercise complete legislative and executive authority if they wish through blanket powers reserved to them in the constitution. Bills which have passed the Parliament require royal assent before becoming effective. The Constitution empowers the Governor to withhold royal assent in cases where the legislation appears to him or her to be repugnant to or inconsistent with the Constitution or affects the rights and privileges of the Parliament or the Royal Prerogative, or matters reserved to the Governor by article 55. The executive authority of the Cayman Islands is vested in the King and is exercised by the Government, consisting of the Governor and the Cabinet. There is an office of the Deputy Governor, who must be a Caymanian and have served in a senior public office. The Deputy Governor is the acting Governor when the office of Governor is vacant, or the Governor is not able to discharge his or her duties or is absent from the Cayman Islands. The current Governor of the Cayman Islands is Jane Owen.
The Cabinet is composed of two official members and seven elected members, called Ministers, one of whom is designated Premier. The Premier can serve for two consecutive terms, after which he or she is barred from holding the office again. Although an MP can only be Premier twice, any person who meets the qualifications and requirements for a seat in the Parliament can be elected to the Parliament indefinitely.
There are two official members of the Parliament: the Deputy Governor and the Attorney General. They are appointed by the Governor in accordance with His Majesty's instructions and, although they have seats in the Parliament, under the 2009 Constitution they do not vote. They serve in a professional and advisory role to the MPs: the Deputy Governor represents the Governor, who is a representative of the King and the British Government, while the Attorney General advises on legal matters, has special responsibilities in Parliament, and is generally responsible for changes to the Penal Code.
The seven Ministers are voted into office by the 19 elected members of the Parliament of the Cayman Islands. One of the Ministers, the leader of the majority political party, is appointed Premier by the Governor.
After consulting the Premier, the Governor allocates a portfolio of responsibilities to each Cabinet Minister. Under the principle of collective responsibility, all Ministers are obliged to support in the Parliament any measures approved by Cabinet.
Almost 80 departments, sections and units carry out the business of government, joined by a number of statutory boards and authorities set up for specific purposes, such as the Port Authority, the Civil Aviation Authority, the Immigration Board, the Water Authority, the University College Board of Governors, the National Pensions Board and the Health Insurance Commission.
Since 2000, there have been two official major political parties: the Cayman Democratic Party (CDP) and the People's Progressive Movement (PPM). While there has been a shift towards political parties, many candidates contending for office still run as independents. The two parties are notably similar: though they consider each other rivals in most cases, their differences are generally in personality and implementation rather than actual policy. Beyond these two parties, the Cayman Islands otherwise largely lacks organised political parties. As of the May 2017 general election, members of the PPM and CDP have joined with three independent members to form a government coalition, despite many years of enmity.
Police
Policing in the country is provided chiefly by the Royal Cayman Islands Police Service (RCIPS) and the Cayman Islands Customs & Border Control (CICBC). These two agencies co-operate in aspects of law enforcement, including their joint marine unit.
Military and defence
The defence of the Cayman Islands is the responsibility of the United Kingdom. The Royal Navy maintains a ship on permanent station in the Caribbean (HMS Medway (P223)) and additionally sends another Royal Navy or Royal Fleet Auxiliary ship as a part of Atlantic Patrol (NORTH) tasking. These ships' main mission in the region is to maintain British sovereignty for the overseas territories, provide humanitarian aid and disaster relief during disasters such as hurricanes, which are common in the area, and to conduct counter-narcotic operations.
Cayman Islands Regiment
On 12 October 2019, the government announced the formation of the Cayman Islands Regiment, a new British Armed Forces unit. The Regiment became fully operational in 2020 with an initial 35–50 personnel, mostly reservists. Between 2020 and 2021 the Regiment grew to over a hundred personnel, and it is expected to grow to several hundred over the next several years.
In mid-December 2019, recruitment for commanding officers and junior officers began, with the commanding officers expected to begin work in January 2020 and the junior officers expected to begin in February 2020.
In January 2020, the first officers were chosen for the Cayman Islands Regiment.
Since its formation, the Regiment has been deployed on a few operational tours providing humanitarian aid and disaster relief (HADR), as well as assisting with the response to the COVID-19 pandemic.
Cadet Corps
The Cayman Islands Cadet Corps was formed in March 2001 and carries out military-type training with teenage citizens of the country.
Coast Guard
In 2018, the PPM-led coalition government pledged to form a coast guard to protect the interests of the Cayman Islands, especially with regard to illegal immigration and illegal drug importation, as well as search and rescue. In mid-2018, the first commander and second-in-command of the Cayman Islands Coast Guard were appointed: Commander Robert Scotland as the first commanding officer and Lieutenant Commander Leo Anglin as second-in-command.
In mid-2019, the commander and second-in-command took part in international joint operations, called Operation Riptide, with the United States Coast Guard and the Jamaica Defence Force Coast Guard. This was the first deployment for the Cayman Islands Coast Guard and the first time in ten years that any Cayman representative had been aboard a foreign military ship for a counter-narcotics operation.
In late November 2019, it was announced that the Cayman Islands Coast Guard would become operational in January 2020, with an initial total of 21 coast guardsmen, half of whom would come from the joint marine unit, with further recruitment in the new year. One of the many taskings of the Coast Guard will be enforcement of all laws that apply to the designated Wildlife Interaction Zone.
On 5 October 2021, the Cayman Islands Parliament passed the Cayman Islands Coast Guard Act thus establishing the Cayman Islands Coast Guard as a uniformed and disciplined department of Government.
Taxation
No direct taxation is imposed on residents or Cayman Islands companies. The government receives the majority of its income from indirect taxation. Duty, typically in the range of 22% to 25%, is levied on most imported goods. Some items, such as baby formula, books, cameras and electric vehicles, are exempted, and certain other items are taxed at 5%. Duty on automobiles depends on their value: it can amount to 29.5% for vehicles up to $20,000.00 KYD CIF (cost, insurance and freight) and up to 42% for expensive models over $30,000.00 KYD CIF. The government charges flat licensing fees on financial institutions that operate in the islands, and there are work permit fees on foreign labour. A 13% government tax is placed on all tourist accommodation, in addition to a US$37.50 airport departure tax which is built into the cost of an airline ticket. There is a 7.5% duty on the sale of property, payable by the purchaser. There are no taxes on corporate profits, capital gains, or personal income, and no estate or inheritance taxes are payable on Cayman Islands real estate or other assets held in the Cayman Islands.
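As a rough, hedged sketch of how a tiered automobile duty of this kind might be computed, the example below applies the two rates quoted above as flat (non-marginal) rates on the full CIF value. The 35% rate used for vehicles between CI$20,000 and CI$30,000 is a placeholder assumption, since the text does not specify the rate for that band, and the real duty schedule is more detailed than described here.

```python
# Illustrative sketch of a tiered import duty on automobiles, based on the two
# rates quoted above (29.5% up to CI$20,000 CIF and 42% above CI$30,000 CIF).
# The 35% rate for the CI$20,000-30,000 band is a PLACEHOLDER ASSUMPTION:
# the actual schedule for that range is not given in the text.

def auto_import_duty(cif_value_kyd: float) -> float:
    """Return the duty payable on a vehicle, given its CIF value in KYD."""
    if cif_value_kyd <= 20_000:
        rate = 0.295   # rate quoted for vehicles up to CI$20,000 CIF
    elif cif_value_kyd <= 30_000:
        rate = 0.35    # placeholder assumption for the unspecified middle band
    else:
        rate = 0.42    # rate quoted for expensive models over CI$30,000 CIF
    return cif_value_kyd * rate

# Hypothetical vehicles, used only to show the arithmetic
print(f"Duty on CI$15,000 CIF: CI${auto_import_duty(15_000):,.2f}")   # CI$4,425.00
print(f"Duty on CI$45,000 CIF: CI${auto_import_duty(45_000):,.2f}")   # CI$18,900.00
```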
The legend behind the lack of taxation comes from the Wreck of the Ten Sail, when multiple ships ran aground on the reef off the north coast of Grand Cayman. Local fishermen are said to have then sailed out to rescue the crews and salvage goods from the wrecks. It is said that out of gratitude, and because of the islands' small size, King George III then issued an edict that the citizens of the Cayman Islands would never pay tax. There is, however, no documented evidence for this story besides oral tradition.
Foreign relations
Foreign policy is controlled by the United Kingdom, as the islands remain an overseas territory of the United Kingdom. Although in its early days, the Cayman Islands' most important relationships were with Britain and Jamaica, in recent years, as a result of economic dependence, a relationship with the United States has developed.
Though the Cayman Islands are involved in no major international disputes, they have come under some criticism due to the use of their territory for narcotics trafficking and money laundering. In an attempt to address this, the government entered into the Narcotics Agreement of 1984 and the Mutual Legal Assistance Treaty of 1986 with the United States, to reduce the use of their facilities associated with these activities. In more recent years, they have stepped up the fight against money laundering by limiting banking secrecy, introducing requirements for customer identification and record keeping, and requiring banks to co-operate with foreign investigators.
Due to their status as an overseas territory of the UK, the Cayman Islands has no separate representation either in the United Nations or in most other international organisations. However, the Cayman Islands still participates in some international organisations, being an associate member of Caricom and UNESCO, and a member of a sub-bureau of Interpol.
Emergency services
Access to emergency services is available using 9-1-1, the emergency telephone number, the same number as is used in Canada and the United States. The Cayman Islands Department of Public Safety's Communications Centre processes 9-1-1 and non-emergency police assistance, ambulance service, fire service and search and rescue calls for all three islands. The Communications Centre dispatches RCIP and EMS units directly; the Cayman Islands Fire Service maintains their own dispatch room at the airport fire station.
The police services are handled by the Royal Cayman Islands Police Service and the fire services by the Cayman Islands Fire Service. There are four main hospitals in the Cayman Islands, spanning both the private and public health sectors, along with various localised health clinics around the islands.
Infrastructure
Ports
George Town is the port capital of Grand Cayman. There are no berthing facilities for cruise ships, but up to four cruise ships can anchor in designated anchorages. There are three cruise terminals in George Town, the North, South, and Royal Watler Terminals. The ride from the ship to the terminal is about 5 minutes.
Airports and airlines
There are three airports which serve the Cayman Islands. The islands' national flag carrier is Cayman Airways, with Owen Roberts International Airport hosting the airline as its hub.
• Owen Roberts International Airport
• Charles Kirkconnell International Airport
• Edward Bodden Airfield
Main highways
There are three highways, as well as crucial feeder roads, that serve the Cayman Islands' capital, George Town. Residents in the east rely on the East-West Arterial Bypass to travel into George Town, as well as Shamrock Road coming from Bodden Town and the eastern districts.
Other main highways and carriageways include:
• Linford Pierson Highway (most popular roadway into George Town from the east)
• Esterly Tibbetts Highway (serves commuters to the north of the city and West Bay)
• North Sound Road (main road for Central George Town)
• South Sound Road (used by commuters to the south of the city)
• Crewe Road (alternative to taking Linford Pierson Highway)
Education
Primary and secondary schools
The Cayman Islands Education Department operates state schools. Caymanian children are entitled to free primary and secondary education. There are two public high schools on Grand Cayman, John Gray High School and Clifton Hunter High School, and one on Cayman Brac, Layman E. Scott High School. Various churches and private foundations operate several private schools.
Colleges and universities
The University College of the Cayman Islands has campuses on Grand Cayman and Cayman Brac and is the only government-run university on the Cayman Islands.
The International College of the Cayman Islands is a private college in Grand Cayman. The college was established in 1970 and offers associate's, bachelor's and master's degree programmes. Grand Cayman is also home to St. Matthew's University, which includes a medical school and a school of veterinary medicine. The Cayman Islands Law School, a branch of the University of Liverpool, is based on Grand Cayman.
The Cayman Islands Civil Service College, a unit of the Cayman Islands government organised under the Portfolio of the Civil Service, is in Grand Cayman. Co-located with the University College of the Cayman Islands, it offers both degree programs and continuing education units of various sorts. The college opened in 2007 and is also used as a government research centre.
There is a University of the West Indies Open campus in the territory.
Sports
Truman Bodden Sports Complex is a multi-use complex in George Town. The complex comprises an outdoor six-lane swimming pool, a full-purpose track and field facility, and basketball/netball courts. The field surrounded by the track is used for association football matches as well as other field sports. The track stadium holds 3,000 people.
Association football is the national and most popular sport, with the Cayman Islands national football team representing the Cayman Islands in FIFA.
The Cayman Islands Basketball Federation joined the international basketball governing body FIBA in 1976. The country's national team attended the Caribbean Basketball Championship for the first time in 2011. The Cayman Islands men's national team won back-to-back gold medals at the 2017 and 2019 NatWest Island Games.
Rugby union is a developing sport, and has its own national men's team, women's team, and Sevens team. The Cayman Men's Rugby 7s team is second in the region after the 2011 NACRA 7s Championship.
The Cayman Islands are a member of FIFA, the International Olympic Committee and the Pan American Sports Organisation, and also competes in the biennial Island Games.
The Cayman Islands are a member of the International Cricket Council which they joined in 1997 as an Affiliate, before becoming an Associate member in 2002. The Cayman Islands national cricket team represents the islands in international cricket. The team has previously played the sport at first-class, List A and Twenty20 level. It competes in Division Five of the World Cricket League.
Squash is popular in the Cayman Islands, with a vibrant community of mostly expats playing out of the seven-court South Sound Squash Club. In addition, the women's professional squash association hosts one of its major events each year on an all-glass court set up in Camana Bay. In December 2012, the former Cayman Open was replaced by the Women's World Championship, the largest tournament in the world. The top Cayman men's player, Cameron Stafford, is No. 2 in the Caribbean and ranked in the top 200 on the men's professional circuit.
Flag football (CIFFA) has men's, women's, and mixed-gender leagues.
Other organised sports leagues include softball, beach volleyball, Gaelic football and ultimate frisbee.
The Cayman Islands Olympic Committee was founded in 1973 and was recognised by the IOC (International Olympic Committee) in 1976.
In April 2005 Black Pearl Skate Park was opened in Grand Cayman by Tony Hawk. At the time the park was the largest in the Western Hemisphere.
In February 2010, the first purpose-built track for kart racing in the Cayman Islands was opened. Corporate karting leagues at the track have attracted widespread participation, with 20 local companies and 227 drivers taking part in the 2010 Summer Corporate Karting League.
Cydonie Mothersille was the first track and field athlete from the country to make an Olympic final at the 2008 Olympic Games. She also won a bronze medal in the 200m at the 2001 World Championships in Athletics and gold in the same event at the 2010 Commonwealth Games.
Arts and culture
Music
The Cayman National Cultural Foundation manages the F.J. Harquail Cultural Centre and the US$4 million Harquail Theatre. The Cayman National Cultural Foundation, established in 1984, helps to preserve and promote Cayman folk music, including the organisation of festivals such as the Cayman Islands International Storytelling Festival, the Cayman JazzFest, Seafarers Festival and Cayfest. The jazz, calypso and reggae genres of music styles feature prominently in Cayman music as celebrated cultural influences.
Art
The National Gallery of the Cayman Islands is an art museum in George Town. Founded in 1996, NGCI is an arts organisation that seeks to fulfil its mission through exhibitions, artist residencies, education/outreach programmes and research projects in the Cayman Islands. The NGCI is a non-profit institution, part of the Ministry of Health and Culture.
Media
There are two print newspapers currently in circulation throughout the islands: the Cayman Compass and The Caymanian Times. Online news services include Cayman Compass, Cayman News Service, Cayman Marl Road, The Caymanian Times and Real Cayman News. Olive Hilda Miller was the first paid reporter to work for a Cayman Islands newspaper, beginning her career on the Tradewinds newspaper, which her work helped to establish.
Local radio stations are broadcast throughout the islands.
Feature films that have been filmed in the Cayman Islands include: The Firm, Haven, Cayman Went and Zombie Driftwood.
Television in the Cayman Islands consists of three over-the-air broadcast stations: Trinity Broadcasting Network, CIGTV (the government-owned channel), and the Seventh-day Adventist Network. Cable television is available through three providers: C3 Pure Fibre, FLOW TV, and Logic TV. Satellite television is provided by Dish Direct TV.
Broadband is widely available on the Cayman Islands, with Digicel, C3 Pure Fibre, FLOW and Logic all providing super fast fibre broadband to the islands.
Notable Caymanians
See also
Outline of the Cayman Islands
Index of Cayman Islands–related articles
References
Further reading
Originally from the CIA World Factbook 2000.
External links
Cayman Islands Government
Cayman Islands Department of Tourism
Cayman Islands Film Commission (archived 22 July 2011)
Cayman Islands. The World Factbook. Central Intelligence Agency.
Cayman Islands from UCB Libraries GovPubs (archived 7 April 2008)
Cayman National Cultural Foundation
5469 | https://en.wikipedia.org/wiki/History%20of%20the%20Cayman%20Islands | History of the Cayman Islands | The Cayman Islands are a British overseas territory located in the Caribbean that have been under various governments since their discovery by Europeans. Christopher Columbus sighted the Cayman Islands on May 10, 1503, and named them Las Tortugas after the numerous sea turtles seen swimming in the surrounding waters. Columbus had found the two smaller sister islands (Cayman Brac and Little Cayman) and it was these two islands that he named "Las Tortugas".
The 1523 "Turin map" of the islands was the first to refer to them as Los Lagartos, meaning alligators or large lizards. By 1530 they were known as the Caymanes, after the Carib word caimán for the marine crocodile (either the American or the Cuban crocodile, Crocodylus acutus or C. rhombifer), which also lived there. Recent sub-fossil findings suggest that C. rhombifer, a freshwater species, was prevalent until the 20th century.
Settlement
Archaeological studies of Grand Cayman have found no evidence that humans occupied the islands prior to the sixteenth century.
The first recorded English visitor was Sir Francis Drake in 1586, who reported that the caymanas were edible, but it was the turtles that attracted ships in search of fresh meat for their crews. Turtles were the mainstay of the islands' economy, and overfishing nearly eliminated them from local waters. In 1787, Captain Hull of HMS Camilla estimated that between 1,200 and 1,400 turtles were captured and sold at seaports in Jamaica per year. According to historian Edward Long, the principal occupation of the inhabitants of Grand Cayman was the turtle fishery. Once Caymanian turtlers had greatly reduced the turtle population around the islands, they journeyed to the waters of other islands in order to maintain their livelihood.
Caymanian folklore explains that the island's first inhabitants were Ebanks and his companion named Bawden (or Bodden), who first arrived in Cayman in 1658 after serving in Oliver Cromwell's army in Jamaica. The first recorded permanent inhabitant of the Cayman Islands, Isaac Bodden, was born on Grand Cayman around 1700. He was the grandson of the original settler named Bodden.
Most, if not all, early settlers were people who came from outside of the Cayman Islands and were on the fringes of society. Due to this, the Cayman Islands have often been described as "a total colonial frontier society": effectively lawless during the early settlement years. The Cayman Islands remained a frontier society until well into the twentieth century. The year 1734 marked the rough beginning period of permanent settlement in Grand Cayman. Cayman Brac and Little Cayman were not permanently settled until 1833. A variety of people settled on the islands: pirates, refugees from the Spanish Inquisition, shipwrecked sailors, and slaves. The majority of Caymanians are of African, Welsh, Scottish or English descent, with considerable interracial mixing.
During the early years, settlements on the north and west sides of Grand Cayman were often subject to raids by Spanish forces coming from Cuba. On 14 April 1669, the Spanish Privateer Rivero Pardal completed a successful raid on the village of Little Cayman. In the process of the raid, the forces burned twenty dwellings to the ground.
Those living on the islands often partook in what is called "wrecking". Caymanians enticed passing ships toward shore with lures that piqued sailors' interest, although these lures often did not resemble other vessels. Caymanians tied lanterns to mules or donkeys walking along the beaches, or lit large bonfires, to attract sailors. Having very little knowledge of the area, sailors often became stuck on the reefs while trying to come within communicating distance of those on the island. Once the ships were stuck on the reefs, islanders took canoes out to plunder and salvage them under the false pretense of providing assistance.
British control
England took formal control of Cayman, along with Jamaica, under the Treaty of Madrid in 1670, after the first settlers came from Jamaica in 1661–71 to Little Cayman and Cayman Brac. These first settlements were abandoned after attacks by Spanish privateers, but English privateers often used the Cayman Islands as a base, and in the 18th century they became an increasingly popular hideout for pirates, even after the end of legitimate privateering in 1713. Following several unsuccessful attempts, permanent settlement of the islands began in the 1730s. In the early morning hours of February 8, 1794, ten vessels that were part of a convoy escorted by HMS Convert were wrecked on the reef in Gun Bay, on the east end of Grand Cayman. Despite the darkness and pounding surf on the reef, local settlers braved the conditions to rescue the passengers and crew of the stricken fleet. There are conflicting reports, but it is believed that between six and eight people died that night, among them the captain of the Britannia. However, the overwhelming majority, more than 450 people, were successfully rescued. The incident is now remembered as the Wreck of the Ten Sail. Legend has it that a member of the British Royal Family, most often said to be a nephew of King George III, was aboard one of the ships. To reward the bravery of the islands' inhabitants, King George III reportedly issued a decree that Caymanians should never be conscripted for war service and should never be subject to taxation. However, no official documentation of this decree has been found, and all evidence for it being the origin of their tax-free status is purely anecdotal. Regardless, the Cayman Islands' status as a tax-free British overseas territory remains to this day.
From 1670, the Cayman Islands were effectively dependencies of Jamaica, although there was considerable self-government. In 1831, a legislative assembly was established by local consent at a meeting of principal inhabitants held at Pedro St. James Castle on December 5 of that year. Elections were held on December 10 and the fledgling legislature passed its first local legislation on December 31, 1831. Subsequently, the Jamaican governor ratified a legislature consisting of eight magistrates appointed by the Governor of Jamaica and 10 (later increased to 27) elected representatives.
The collapse of the Federation of the West Indies began a period of decolonization in the English-speaking Caribbean. Of the six dependent territories, the Cayman Islands were the most opposed to independence, because they lacked the natural resources needed to sustain it. This opposition also stemmed from the fear that independence might end the special United States visas that aided Caymanian sailors working on American ships and elsewhere in the United States, and from concerns about the islands' economic viability as an independent country. The Cayman Islands were not the only smaller British territory reluctant to pursue independence. The United Kingdom authorities established a new constitutional framework for the reluctant territories: in place of the Federation of the West Indies, constitutions were created that allowed for the continuation of formal ties with London. In the Cayman Islands, the Governor's only obligation to the British Crown is that of keeping the Executive Council informed.
Slavery
Grand Cayman was the only one of the three islands that had institutionalized slavery. Although slavery was instituted, Grand Cayman did not experience violent slave revolts. While scholars tend to agree that a slave society existed to some extent on at least Grand Cayman, they debate how important slavery was to the society as a whole. The slave period in the Cayman Islands lasted from 1734 to 1834. In 1774, George Gauld estimated that approximately four hundred people lived on Grand Cayman, half of them free and the other half enslaved. By 1802, 545 of the 933 inhabitants were enslaved. An April 1834 census recorded a population of 1,800, with roughly 46 percent considered free Caymanians. By the time of emancipation, enslaved people outnumbered free people on Grand Cayman. In 1835, Governor Sligo arrived in Cayman from Jamaica to declare all enslaved people free in accordance with the British Slavery Abolition Act 1833.
Caymanian settlers resented their administrative association with Jamaica, which caused them to seize every opportunity to undermine the authorities. This problematic relationship reached its peak during the period leading up to emancipation in 1835. Caymanian slave owners who did not want to give up the free labour they extracted from their human chattel refused to obey changes in British legislation outlawing slavery. In response to the Slave Trade Act 1807, the Slave Trade Felony Act 1811, and the Emancipation Act 1834, slave owners organized resistance efforts against the authorities in Jamaica.
Local White residents of the Cayman Islands also resisted the stationing of troops of the West India Regiment. This animosity stemmed from the fact that the West India Regiment enlisted Black men, which the White establishment opposed because they were 'insulted' at the idea of Black soldiers defending their settlements.
Dependency of Jamaica
The Cayman Islands were officially declared and administered as a dependency of Jamaica from 1863 but were rather like a parish of Jamaica with the nominated justices of the peace and elected vestrymen in their Legislature. From 1750 to 1898 the Chief Magistrate was the administrating official for the dependency, appointed by the Jamaican governor. In 1898 the Governor of Jamaica began appointing a Commissioner for the Islands. The first Commissioner was Frederick Sanguinetti. In 1959, upon the formation of the Federation of the West Indies the dependency status with regards to Jamaica ceased officially although the Governor of Jamaica remained the Governor of the Cayman Islands and had reserve powers over the islands. Starting in 1959 the chief official overseeing the day-to-day affairs of the islands (for the Governor) was the Administrator. Upon Jamaica's independence in 1962, the Cayman Islands broke its administrative links with Jamaica and opted to become a direct dependency of the British Crown, with the chief official of the islands being the Administrator.
In 1953 the first airfield in the Cayman Islands was opened as well as the George Town Public hospital. Barclays ushered in the age of formalised commerce by opening the first commercial bank.
Governmental changes
Following a two-year campaign by women to change their circumstances, in 1959 Cayman received its first written constitution which, for the first time, allowed women to vote. Cayman ceased to be a dependency of Jamaica.
During 1966, legislation was passed to enable and encourage the banking industry in Cayman.
In 1971, the governmental structure of the islands was again changed, with a governor now running the Cayman Islands. Athel Long CMG, CBE was the last administrator and the first governor of the Cayman Islands.
In 1991, a review of the 1972 constitution recommended several constitutional changes to be debated by the Legislative Assembly. The post of chief secretary was reinstated in 1992 after having been abolished in 1986. The establishment of the post of chief minister was also proposed. However, in November 1992 elections were held for an enlarged Legislative Assembly and the Government was soundly defeated, casting doubt on constitutional reform. The "National Team" of government critics won 12 (later reduced to 11) of the 15 seats, and independents won the other three, after a campaign opposing the appointment of a
chief minister and advocating spending cuts. The unofficial leader of the team, Thomas Jefferson, had been the appointed financial secretary until March 1992, when he resigned over public spending disputes to fight the election. After the elections, Mr. Jefferson was appointed minister and leader of government business; he also held the portfolios of Tourism, Aviation and Commerce in the executive council. Three teams with a total of 44 candidates contested the general election held on November 20, 1996: the governing National Team, Team Cayman and the Democratic Alliance Group. The National Team were returned to office but with a reduced majority, winning 9 seats. The Democratic Alliance won 2 seats in George Town, Team Cayman won one in Bodden Town and independents won seats in George Town, Cayman Brac and Little Cayman.
Although all administrative links with Jamaica were broken in 1962, the Cayman Islands and Jamaica continue to share many links, including a common united church (the United Church in Jamaica and the Cayman Islands) and Anglican diocese (although there is debate about this). They also shared a common currency until 1972. In 1999, 38–40% of the expatriate population of the Cayman Islands was of Jamaican origin, and in 2004/2005 a little over 50% of the expatriates working in the Cayman Islands (i.e. 8,000) were Jamaicans (with the next largest expatriate communities coming from the United States, United Kingdom and Canada).
Hurricane Ivan
In September 2004, the Cayman Islands were hit by Hurricane Ivan, causing mass devastation, loss of animal life (both wild and domestic/livestock) and flooding; however, there was no loss of human life. Some accounts reported that the majority of Grand Cayman had been underwater, with the lower floors of some buildings completely flooded in excess of 8 ft. An Ivan Flood Map indicating afflicted areas and their corresponding flood levels is available from the Lands & Survey Department of the Cayman Islands. This natural disaster also led to the bankruptcy of a heavily invested insurance company called Doyle. The company had released estimates covering 20% damage to be re-insured at minimal fees, when in fact the damage was over 65% and every claim ran into the millions. The company simply could not keep paying out, and the adjusters could not lower the payments because of the high building code to which the islands adhere.
Much controversy surrounded the devastation that Hurricane Ivan had caused, as the Leader of Government Business, Mr. McKeeva Bush, decided to close the islands to all reporters and aid, and denied permission to land to any aircraft except Cayman Airways. The line of people wishing to leave, but unable to do so, extended from the airport to the post office each day, as thousands who were left stranded with no shelter, food, or fresh water hoped for a chance to evacuate. As a result, most evacuations, and the mass exodus which ensued in the aftermath, were carried out by private charter at personal expense, with or without official permission. It was also a collective decision within the government at that time to turn away two British warships that had arrived the day after the storm with supplies. This decision was met with outrage from the islanders, who thought that it should have been their decision to make. Power and water were cut off due to damaged pipes and destroyed utility poles, with all utilities restored to various areas over the course of the next three months. Fortis Inc., a Canadian-owned utility company, sent a team down to Grand Cayman to assist the local power company, CUC, with restoration. The official account of the extent of damage, duration and recovery efforts, in the words of Mr. Bush himself, was first recorded a month later in written evidence to the Select Committee on Foreign Affairs, in a letter from the Cayman Islands Government Office in the United Kingdom, 8 October 2004.
"Hurricane Ivan weakened to a category four hurricane as it moved over Grand Cayman. It is the most powerful hurricane ever to hit the cayman islands. The eye of the storm passed within eight to 15 miles of Grand Cayman. It struck on Sunday 12 September, bringing with it sustained winds of 155 miles per hour, gusts of up to 217 mph, and a storm surge of sea water of eight to 10 feet, which covered most of the Island. A quarter of Grand Cayman remained submerged by flood waters two days later. Both Cayman Brac and Little Cayman suffered damage, although not to the same extent as Grand Cayman.
Damage on Grand Cayman has been extensive. "I include with this letter, for your reference, a detailed briefing about the damage and the recovery effort, and some photographs of the devastation. 95% of our housing stock has sustained damage, with around 25% destroyed or damaged beyond repair. We currently have 6,000 homes that are uninhabitable-these are homes that house teachers, nurses, manual and other workers. Thankfully, loss of life in Cayman has been limited, relative to the impact of the storm."- Honourable McKeeva Bush, OBE, JP.
While visible signs of damage remain in the vegetation and in destruction to buildings, particularly along the southern and eastern coastal regions, the island took considerable time to become suitable as a bustling financial and tourism destination again. Housing issues remained for many residents as of late 2005, with some buildings still lying derelict as of 2013 due to insurance claims, feasibility, and new regulations and building codes. Many residents were simply unable to rebuild and abandoned the damaged structures.
References
External links |
5478 | https://en.wikipedia.org/wiki/Central%20African%20Republic | Central African Republic | The Central African Republic (CAR), formerly known as Ubangi-Shari, is a landlocked country in Central Africa. It is bordered by Chad to the north, Sudan to the northeast, South Sudan to the east, the Democratic Republic of the Congo to the south, the Republic of the Congo to the southwest, and Cameroon to the west. The Central African Republic covers a land area of about 620,000 square kilometres and has an estimated population of around five million. The country is the scene of a civil war that has been ongoing since 2012.
Most of the Central African Republic consists of Sudano-Guinean savannas, but the country also includes a Sahelo-Sudanian zone in the north and an equatorial forest zone in the south. Two-thirds of the country is within the Ubangi River basin (which flows into the Congo), while the remaining third lies in the basin of the Chari, which flows into Lake Chad.
What is today the Central African Republic has been inhabited since at least 8,000 BCE. The country's borders were established by France, which ruled the country as a colony starting in the late 19th century. After gaining independence from France in 1960, the Central African Republic was ruled by a series of autocratic leaders, including an abortive attempt at a monarchy.
By the 1990s, calls for democracy led to the first multi-party democratic elections in 1993. Ange-Félix Patassé became president, but was later removed by General François Bozizé in the 2003 coup. The Central African Republic Bush War began in 2004 and, despite a peace treaty in 2007 and another in 2011, civil war resumed in 2012. The civil war perpetuated the country's poor human rights record: it was characterized by widespread and increasing abuses by various participating armed groups, such as arbitrary imprisonment, torture, and restrictions on freedom of the press and freedom of movement.
Despite its significant mineral deposits and other resources, such as uranium reserves, crude oil, gold, diamonds, cobalt, lumber, and hydropower, as well as significant quantities of arable land, the Central African Republic is among the ten poorest countries in the world, with the lowest GDP per capita at purchasing power parity in the world as of 2017. According to the Human Development Index (HDI), the country had the fourth-lowest level of human development, ranking 188th out of 191 countries. The country had the lowest inequality-adjusted Human Development Index (IHDI), ranking 156th out of 156 countries. The Central African Republic is also estimated to be the unhealthiest country as well as the worst country in which to be young.
The Central African Republic is a member of the United Nations, the African Union, the Economic Community of Central African States, the Organisation internationale de la Francophonie and the Non-Aligned Movement.
Etymology
The name of the Central African Republic is derived from the country's geographical location in the central region of Africa and its republican form of government. From 1976 to 1979, the country was known as the Central African Empire.
During the colonial era, the country's name was Ubangi-Shari (), a name derived from the Ubangi River and the Chari River. Barthélemy Boganda, the country's first prime minister, favored the name "Central African Republic" over Ubangi-Shari, reportedly because he envisioned a larger union of countries in Central Africa.
History
Early history
Approximately 10,000 years ago, desertification forced hunter-gatherer societies south into the Sahel regions of northern Central Africa, where some groups settled. Farming began as part of the Neolithic Revolution. Initial farming of white yam progressed into millet and sorghum, and before 3000 BCE the domestication of African oil palm improved the groups' nutrition and allowed for expansion of the local populations. This Agricultural Revolution, combined with a "Fish-stew Revolution", in which fishing began to take place, and the use of boats, allowed for the transportation of goods. Products were often moved in ceramic pots, which are the first known examples of artistic expression from the region's inhabitants.
The Bouar Megaliths in the western region of the country indicate an advanced level of habitation dating back to the very late Neolithic Era (c. 3500–2700 BCE). Ironworking developed in the region around 1000 BCE.
The Ubangian people settled along the Ubangi River in what is today the central and eastern Central African Republic, while some Bantu peoples migrated into the southwest from Cameroon.
Bananas arrived in the region during the first millennium BCE and added an important source of carbohydrates to the diet; they were also used in the production of alcoholic beverages. Production of copper, salt, dried fish, and textiles dominated the economic trade in the Central African region.
16th–19th century
In the 16th and 17th centuries, slave traders began to raid the region as part of the expansion of the Saharan and Nile River slave routes. Their captives were enslaved and shipped to the Mediterranean coast, Europe, Arabia, the Western Hemisphere, or to the slave ports and factories along the coasts of West and North Africa, or south along the Ubangi and Congo rivers. In the mid-19th century, the Bobangi people became major slave traders and sold their captives to the Americas using the Ubangi River to reach the coast. During the 18th century, Bandia-Nzakara and Azande peoples established the Bangassou Kingdom along the Ubangi River. In 1875, the Sudanese sultan Rabih az-Zubayr governed Upper-Oubangui, which included present-day Central African Republic.
French colonial period
The European invasion of Central African territory began in the late 19th century during the Scramble for Africa. Europeans, primarily the French, Germans, and Belgians, arrived in the area in 1885. France seized and colonized Ubangi-Shari territory in 1894. In 1911, under the Treaty of Fez, France ceded a nearly 300,000 km² portion of the Sangha and Lobaye basins to the German Empire, which in turn ceded a smaller area (in present-day Chad) to France. After World War I, France again annexed the territory. In a system modeled on King Leopold's Congo Free State, concessions were doled out to private companies that endeavored to strip the region's assets as quickly and cheaply as possible before depositing a percentage of their profits into the French treasury. The concessionary companies forced local people to harvest rubber, coffee, and other commodities without pay and held their families hostage until they met their quotas.
In 1920, French Equatorial Africa was established and Ubangi-Shari was administered from Brazzaville. During the 1920s and 1930s the French introduced a policy of mandatory cotton cultivation, a network of roads was built, attempts were made to combat sleeping sickness, and Protestant missions were established to spread Christianity. New forms of forced labour were also introduced and a large number of Ubangians were sent to work on the Congo-Ocean Railway. Through the period of construction until 1934 there was a continual heavy cost in human lives, with total deaths among all workers along the railway estimated at more than 17,000, from a combination of industrial accidents and diseases including malaria. In 1928, a major insurrection, the Kongo-Wara rebellion or 'war of the hoe handle', broke out in Western Ubangi-Shari and continued for several years. The extent of this insurrection, which was perhaps the largest anti-colonial rebellion in Africa during the interwar years, was carefully hidden from the French public because it provided evidence of strong opposition to French colonial rule and forced labour.
French colonization of Oubangui-Chari is considered to have been the most brutal in the French colonial empire.
In September 1940, during the Second World War, pro-Gaullist French officers took control of Ubangi-Shari and General Leclerc established his headquarters for the Free French Forces in Bangui. In 1946 Barthélemy Boganda was elected with 9,000 votes to the French National Assembly, becoming the first representative of the Central African Republic in the French government. Boganda maintained a political stance against racism and the colonial regime but gradually became disheartened with the French political system and returned to the Central African Republic to establish the Movement for the Social Evolution of Black Africa (Mouvement pour l'évolution sociale de l'Afrique noire, MESAN) in 1950.
Since independence (1960–present)
In the Ubangi-Shari Territorial Assembly election in 1957, MESAN captured 347,000 out of the total 356,000 votes and won every legislative seat, which led to Boganda being elected president of the Grand Council of French Equatorial Africa and vice-president of the Ubangi-Shari Government Council. Within a year, he declared the establishment of the Central African Republic and served as the country's first prime minister. MESAN continued to exist, but its role was limited. The Central African Republic was granted autonomy within the French Community on 1 December 1958, a status which meant it was still counted as part of the French Empire in Africa.
After Boganda's death in a plane crash on 29 March 1959, his cousin, David Dacko, took control of MESAN. Dacko became the country's first president when the Central African Republic formally received independence from France at midnight on 13 August 1960, a date celebrated by the country's Independence Day holiday. Dacko threw out his political rivals, including Abel Goumba, former Prime Minister and leader of Mouvement d'évolution démocratique de l'Afrique centrale (MEDAC), whom he forced into exile in France. With all opposition parties suppressed by November 1962, Dacko declared MESAN as the official party of the state.
Bokassa and the Central African Empire (1965–1979)
On 31 December 1965, Dacko was overthrown in the Saint-Sylvestre coup d'état by Colonel Jean-Bédel Bokassa, who suspended the constitution and dissolved the National Assembly. President Bokassa declared himself President for Life in 1972 and named himself Emperor Bokassa I of the Central African Empire (as the country was renamed) on 4 December 1976. A year later, Emperor Bokassa crowned himself in a lavish and expensive ceremony that was ridiculed by much of the world.
In April 1979, young students protested against Bokassa's decree that all school pupils were required to buy uniforms from a company owned by one of his wives. The government violently suppressed the protests, killing 100 children and teenagers. Bokassa might have been personally involved in some of the killings. In September 1979, France overthrew Bokassa and restored Dacko to power (subsequently restoring the official name of the country and the original government to the Central African Republic). Dacko, in turn, was again overthrown in a coup by General André Kolingba on 1 September 1981.
Central African Republic under Kolingba
Kolingba suspended the constitution and ruled with a military junta until 1985. He introduced a new constitution in 1986 which was adopted by a nationwide referendum. Membership in his new party, the Rassemblement Démocratique Centrafricain (RDC), was voluntary. In 1987 and 1988, semi-free elections to parliament were held, but Kolingba's two major political opponents, Abel Goumba and Ange-Félix Patassé, were not allowed to participate.
By 1990, inspired by the fall of the Berlin Wall, a pro-democracy movement arose. Pressure from the United States, France, and from a group of locally represented countries and agencies called GIBAFOR (France, the US, Germany, Japan, the EU, the World Bank, and the UN) finally led Kolingba to agree, in principle, to hold free elections in October 1992 with help from the UN Office of Electoral Affairs. After suspending the election results on the pretext of alleged irregularities in order to hold on to power, President Kolingba came under intense pressure from GIBAFOR to establish a "Conseil National Politique Provisoire de la République" (Provisional National Political Council, CNPPR) and to set up a "Mixed Electoral Commission", which included representatives from all political parties.
When a second round of elections was finally held in 1993, again with the help of the international community coordinated by GIBAFOR, Ange-Félix Patassé won with 53% of the vote while Goumba won 45.6%. Patassé's party, the Mouvement pour la Libération du Peuple Centrafricain (MLPC) or Movement for the Liberation of the Central African People, gained a plurality (relative majority) but not an absolute majority of seats in parliament, which meant Patassé's party required coalition partners.
Patassé government (1993–2003)
Patassé purged many of the Kolingba elements from the government, and Kolingba supporters accused Patassé's government of conducting a "witch hunt" against the Yakoma. A new constitution was approved on 28 December 1994 but had little impact on the country's politics. In 1996–1997, reflecting steadily decreasing public confidence in the government's erratic behavior, three mutinies against Patassé's administration were accompanied by widespread destruction of property and heightened ethnic tension. During this time (1996), the Peace Corps evacuated all its volunteers to neighboring Cameroon. To date, the Peace Corps has not returned to the Central African Republic. The Bangui Agreements, signed in January 1997, provided for the deployment of an inter-African military mission to the Central African Republic and the re-entry of ex-mutineers into the government on 7 April 1997. The inter-African military mission was later replaced by a U.N. peacekeeping force (MINURCA). Since 1997, the country has hosted almost a dozen peacekeeping interventions, earning it the title of "world champion of peacekeeping".
In 1998, parliamentary elections resulted in Kolingba's RDC winning 20 out of 109 seats. The next year, however, in spite of widespread public anger in urban centers over his corrupt rule, Patassé won a second term in the presidential election.
On 28 May 2001, rebels stormed strategic buildings in Bangui in an unsuccessful coup attempt. The army chief of staff, Abel Abrou, and General François N'Djadder Bedaya were killed, but Patassé regained the upper hand by bringing in at least 300 troops of the Congolese rebel leader Jean-Pierre Bemba and Libyan soldiers.
In the aftermath of the failed coup, militias loyal to Patassé sought revenge against rebels in many neighborhoods of Bangui and incited unrest including the murder of many political opponents. Eventually, Patassé came to suspect that General François Bozizé was involved in another coup attempt against him, which led Bozizé to flee with loyal troops to Chad. In March 2003, Bozizé launched a surprise attack against Patassé, who was out of the country. Libyan troops and some 1,000 soldiers of Bemba's Congolese rebel organization failed to stop the rebels and Bozizé's forces succeeded in overthrowing Patassé.
Civil wars
François Bozizé suspended the constitution and named a new cabinet, which included most opposition parties. Abel Goumba was named vice-president, which gave Bozizé's new government a positive image. Bozizé established a broad-based National Transition Council to draft a new constitution, and announced that he would step down and run for office once the new constitution was approved.
In 2004, the Central African Republic Bush War began, as forces opposed to Bozizé took up arms against his government. In May 2005, Bozizé won the presidential election, which excluded Patassé, and in 2006 fighting continued between the government and the rebels. In November 2006, Bozizé's government requested French military support to help them repel rebels who had taken control of towns in the country's northern regions.
Though the initial public details of the agreement pertained to logistics and intelligence, by December the French assistance included airstrikes by Dassault Mirage 2000 fighters against rebel positions.
The Syrte Agreement in February and the Birao Peace Agreement in April 2007 called for a cessation of hostilities, the billeting of FDPC fighters and their integration with FACA, the liberation of political prisoners, integration of FDPC into government, an amnesty for the UFDR, its recognition as a political party, and the integration of its fighters into the national army. Several groups continued to fight but other groups signed on to the agreement, or similar agreements with the government (e.g. UFR on 15 December 2008). The only major group not to sign an agreement at the time was the CPJP, which continued its activities and signed a peace agreement with the government on 25 August 2012.
In 2011, Bozizé was reelected in an election which was widely considered fraudulent.
In November 2012, Séléka, a coalition of rebel groups, took over towns in the northern and central regions of the country. These groups eventually reached a peace deal with Bozizé's government in January 2013 involving a power-sharing government, but the deal broke down; the rebels seized the capital in March 2013, and Bozizé fled the country.
Michel Djotodia took over as president. Prime Minister Nicolas Tiangaye requested a UN peacekeeping force from the UN Security Council and on 31 May former President Bozizé was indicted for crimes against humanity and incitement of genocide.
By the end of the year there were international warnings of a "genocide", and fighting consisted largely of reprisal attacks on civilians by Séléka's predominantly Muslim fighters and by Christian militias called "anti-balaka". By August 2013, there were reports of over 200,000 internally displaced persons (IDPs).
French President François Hollande called on the UN Security Council and African Union to increase their efforts to stabilize the country. On 18 February 2014, United Nations Secretary-General Ban Ki-moon called on the UN Security Council to immediately deploy 3,000 troops to the country, bolstering the 6,000 African Union soldiers and 2,000 French troops already in the country, to combat civilians being murdered in large numbers. The Séléka government was said to be divided, and in September 2013, Djotodia officially disbanded Seleka, but many rebels refused to disarm, becoming known as ex-Seleka, and veered further out of government control. It is argued that the focus of the initial disarmament efforts exclusively on the Seleka inadvertently handed the anti-Balaka the upper hand, leading to the forced displacement of Muslim civilians by anti-Balaka in Bangui and western Central African Republic.
On 11 January 2014, Michel Djotodia and Nicolas Tiangaye resigned as part of a deal negotiated at a regional summit in neighboring Chad. Catherine Samba-Panza was elected as interim president by the National Transitional Council, becoming the first ever female Central African president. On 23 July 2014, following Congolese mediation efforts, Séléka and anti-balaka representatives signed a ceasefire agreement in Brazzaville. By the end of 2014, the country was de facto partitioned, with the anti-balaka in the southwest and ex-Séléka in the northeast. In March 2015, Samantha Power, the U.S. ambassador to the United Nations, said 417 of the country's 436 mosques had been destroyed, and Muslim women were so scared of going out in public they were giving birth in their homes instead of going to the hospital. On 14 December 2015, Séléka rebel leaders declared an independent Republic of Logone.
Touadéra government (2016–present)
Presidential elections were held in December 2015. As no candidate received more than 50% of the vote, a second round of elections was held on 14 February 2016 with run-offs on 31 March 2016. In the second round of voting, former Prime Minister Faustin-Archange Touadéra was declared the winner with 63% of the vote, defeating Union for Central African Renewal candidate Anicet-Georges Dologuélé, another former Prime Minister. While the elections suffered from many potential voters being absent as they had taken refuge in other countries, the fears of widespread violence were ultimately unfounded and the African Union regarded the elections as successful.
Touadéra was sworn in on 30 March 2016. No representatives of the Seleka rebel group or the "anti-balaka" militias were included in the subsequently formed government.
After the end of Touadéra's first term, presidential elections were held on 27 December 2020 with a possible second round planned for 14 February 2021. Former president François Bozizé announced his candidacy on 25 July 2020 but was rejected by the Constitutional Court of the country, which held that Bozizé did not satisfy the "good morality" requirement for candidates because of an international warrant and United Nations sanctions against him for alleged assassinations, torture and other crimes.
As large parts of the country were at the time controlled by armed groups, the election could not be conducted in many areas of the country. Some 800 of the country's polling stations, 14% of the total, were closed due to violence. Three Burundian peacekeepers were killed and an additional two were wounded during the run-up to the election. President Faustin-Archange Touadéra was reelected in the first round of the election in December 2020. Russian mercenaries from the Wagner Group have supported President Faustin-Archange Touadéra in the fight against rebels. Russia's Wagner group has been accused of harassing and intimidating civilians. In December 2022 Roger Cohen wrote in The New York Times "Wagner shock troops form a Praetorian Guard for Mr. Touadéra, who is also protected by Rwandan forces, in return for an untaxed license to exploit and export the Central African Republic's resources" and "one Western ambassador called the Central African Republic...a 'vassal state' of the Kremlin."
On 8 September 2023, a United Nations-backed court in the Central African Republic charged ex-rebel leader Abdoulaye Hissène for alleged crimes against humanity and war crimes.
Geography
The Central African Republic is a landlocked nation within the interior of the African continent. It is bordered by Cameroon, Chad, Sudan, South Sudan, the Democratic Republic of the Congo, and the Republic of the Congo. The country lies between latitudes 2° and 11°N, and longitudes 14° and 28°E.
Much of the country consists of flat or rolling plateau savanna approximately above sea level. In addition to the Fertit Hills in the northeast of the Central African Republic, there are scattered hills in the southwest regions. In the northwest is the Yade Massif, a granite plateau with an altitude of . The Central African Republic contains six terrestrial ecoregions: Northeastern Congolian lowland forests, Northwestern Congolian lowland forests, Western Congolian swamp forests, East Sudanian savanna, Northern Congolian forest-savanna mosaic, and Sahelian Acacia savanna.
At , the Central African Republic is the world's 44th-largest country; it is comparable in size to Ukraine.
Much of the southern border is formed by tributaries of the Congo River; the Mbomou River in the east merges with the Uele River to form the Ubangi River, which also comprises portions of the southern border. The Sangha River flows through some of the western regions of the country, while the eastern border lies along the edge of the Nile River watershed.
It has been estimated that up to 8% of the country is covered by forest, with the densest parts generally located in the southern regions. The forests are highly diverse and include commercially important species of Ayous, Sapelli and Sipo. The deforestation rate is about 0.4% per annum, and lumber poaching is commonplace. The Central African Republic had a 2018 Forest Landscape Integrity Index mean score of 9.28/10, ranking it seventh globally out of 172 countries.
In 2008, the Central African Republic was the country least affected by light pollution in the world.
The Central African Republic is the focal point of the Bangui Magnetic Anomaly, one of the largest magnetic anomalies on Earth.
Wildlife
In the southwest, the Dzanga-Sangha National Park is located in a rain forest area. The country is noted for its population of forest elephants and western lowland gorillas. In the north, the Manovo-Gounda St Floris National Park is well-populated with wildlife, including leopards, lions, cheetahs and rhinos, and the Bamingui-Bangoran National Park is located in the northeast of the Central African Republic. The parks have been seriously affected by the activities of poachers, particularly those from Sudan, over the past two decades.
In 2021, the rate of deforestation in the Central African Republic increased by 71%.
Climate
The climate of the Central African Republic is generally tropical, with a wet season that lasts from June to September in the northern regions of the country, and from May to October in the south. During the wet season, rainstorms are an almost daily occurrence, and early morning fog is commonplace. Maximum annual precipitation is approximately in the upper Ubangi region.
The northern areas are hot and humid from February to May, but can be subject to the hot, dry, and dusty trade wind known as the Harmattan. The southern regions have a more equatorial climate, but they are subject to desertification, while the extreme northeast regions of the country are a steppe.
Prefectures and sub-prefectures
The Central African Republic is divided into 16 administrative prefectures (préfectures), two of which are economic prefectures (préfectures economiques), and one an autonomous commune; the prefectures are further divided into 71 sub-prefectures (sous-préfectures).
The prefectures are Bamingui-Bangoran, Basse-Kotto, Haute-Kotto, Haut-Mbomou, Kémo, Lobaye, Mambéré-Kadéï, Mbomou, Nana-Mambéré, Ombella-M'Poko, Ouaka, Ouham, Ouham-Pendé and Vakaga. The economic prefectures are Nana-Grébizi and Sangha-Mbaéré, while the commune is the capital city of Bangui.
Politics and government
Politics in the Central African Republic formally take place in a framework of a presidential republic. In this system, the President is the head of state, with a Prime Minister as head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament.
Changes in government have occurred in recent years by three methods: violence, negotiations, and elections. A new constitution was approved by voters in a referendum held on 5 December 2004. The government was rated 'Partly Free' from 1991 to 2001 and from 2004 to 2013.
Executive branch
The president is elected by popular vote for a six-year term, and the prime minister is appointed by the president. The president also appoints and presides over the Council of Ministers, which initiates laws and oversees government operations. However, as of 2018 the official government is not in control of large parts of the country, which are governed by rebel groups.
Faustin-Archange Touadéra has been president since April 2016, succeeding the interim government under president Catherine Samba-Panza and interim prime minister André Nzapayeké.
Legislative branch
The National Assembly (Assemblée Nationale) has 140 members, elected for a five-year term using the two-round (or Run-off) system.
Judicial branch
As in many other former French colonies, the Central African Republic's legal system is based on French law. The Supreme Court, or Cour Supreme, is made up of judges appointed by the president. There is also a Constitutional Court, and its judges are also appointed by the president.
Foreign relations
Foreign aid and UN Involvement
The Central African Republic is heavily dependent upon foreign aid and numerous NGOs provide services that the government does not provide. In 2019, over US$100 million in foreign aid was spent in the country, mostly on humanitarian assistance.
In 2006, due to ongoing violence, over 50,000 people in the country's northwest were at risk of starvation, but this was averted due to assistance from the United Nations. On 8 January 2008, the UN Secretary-General Ban Ki-Moon declared that the Central African Republic was eligible to receive assistance from the Peacebuilding Fund. Three priority areas were identified: first, the reform of the security sector; second, the promotion of good governance and the rule of law; and third, the revitalization of communities affected by conflicts. On 12 June 2008, the Central African Republic requested assistance from the UN Peacebuilding Commission, which was set up in 2005 to help countries emerging from conflict avoid devolving back into war or chaos.
In response to concerns of a potential genocide, a peacekeeping force – the International Support Mission to the Central African Republic (MISCA) – was authorized in December 2013. This African Union force of 6,000 personnel was accompanied by the French Operation Sangaris.
In 2017, the Central African Republic signed the UN Treaty on the Prohibition of Nuclear Weapons.
Human rights
The 2009 Human Rights Report by the United States Department of State noted that human rights in the Central African Republic were poor and expressed concerns over numerous government abuses. The U.S. State Department alleged that major human rights abuses such as extrajudicial executions by security forces, torture, beatings and rape of suspects and prisoners occurred with impunity. It also alleged harsh and life-threatening conditions in prisons and detention centers, arbitrary arrest, prolonged pretrial detention and denial of a fair trial, restrictions on freedom of movement, official corruption, and restrictions on workers' rights.
The State Department report also cites widespread mob violence, the prevalence of female genital mutilation, discrimination against women and Pygmies, human trafficking, forced labor, and child labor. Freedom of movement is limited in the northern part of the country "because of actions by state security forces, armed bandits, and other non-state armed entities", and due to fighting between government and anti-government forces, many persons have been internally displaced.
Violence against children and women in relation to accusations of witchcraft has also been cited as a serious problem in the country. Witchcraft is a criminal offense under the penal code.
Freedom of speech is addressed in the country's constitution, but there have been incidents of government intimidation of the media. A report by the International Research & Exchanges Board's media sustainability index noted that "the country minimally met objectives, with segments of the legal system and government opposed to a free media system".
Approximately 68% of girls are married before they turn 18, and the United Nations' Human Development Index ranked the country 188 out of 188 countries surveyed. The Bureau of International Labor Affairs has also mentioned it in its last edition of the List of Goods Produced by Child Labor or Forced Labor.
Demographics
The population of the Central African Republic has almost quadrupled since independence. In 1960, the population was 1,232,000; as of a UN estimate, it is approximately .
The United Nations estimates that approximately 4% of the population aged between 15 and 49 is HIV positive. Only 3% of the country has antiretroviral therapy available, compared to a 17% coverage in the neighboring countries of Chad and the Republic of the Congo.
The nation is divided into over 80 ethnic groups, each having its own language. The largest ethnic groups are the Baggara Arabs, Baka, Banda, Bayaka, Fula, Gbaya, Kara, Kresh, Mbaka, Mandja, Ngbandi, Sara, Vidiri, Wodaabe, Yakoma, Yulu, Zande, with others including Europeans of mostly French descent.
Religion
According to the 2003 national census, 80.3% of the population was Christian (51.4% Protestant and 28.9% Roman Catholic), 10% was Muslim and 4.5 percent other religious groups, with 5.5 percent having no religious beliefs. More recent work from the Pew Research Center estimated that, as of 2010, Christians constituted 89.8% of the population (60.7% Protestant and 28.5% Catholic) while Muslims made up 8.9%. The Catholic Church claims over 1.5 million adherents, approximately one-third of the population. Indigenous belief (animism) is also practiced, and many indigenous beliefs are incorporated into Christian and Islamic practice. A UN director described religious tensions between Muslims and Christians as being high.
There are many missionary groups operating in the country, including Lutherans, Baptists, Catholics, Grace Brethren, and Jehovah's Witnesses. While these missionaries are predominantly from the United States, France, Italy, and Spain, many are also from Nigeria, the Democratic Republic of the Congo, and other African countries. Large numbers of missionaries left the country when fighting broke out between rebel and government forces in 2002–3, but many of them have now returned to continue their work.
According to Overseas Development Institute research, during the crisis ongoing since 2012, religious leaders have mediated between communities and armed groups; they also provided refuge for people seeking shelter.
Languages
The Central African Republic's two official languages are French and Sango (also spelled Sangho), a creole developed as an inter-ethnic lingua franca based on the local Ngbandi language. The Central African Republic is one of the few African countries to have granted official status to an African language.
Healthcare
The largest hospitals in the country are located in the Bangui district. As a member of the World Health Organization, the Central African Republic receives vaccination assistance, such as a 2014 intervention for the prevention of a measles epidemic. In 2007, female life expectancy at birth was 48.2 years and male life expectancy at birth was 45.1 years.
Women's health is poor in the Central African Republic, which has had the fourth-highest maternal mortality rate in the world.
The total fertility rate in 2014 was estimated at 4.46 children born/woman. Approximately 25% of women had undergone female genital mutilation. Many births in the country are guided by traditional birth attendants, who often have little or no formal training.
Malaria is endemic in the Central African Republic and one of the leading causes of death.
According to 2009 estimates, the HIV/AIDS prevalence rate is about 4.7% of the adult population (ages 15–49). This is in general agreement with the 2016 United Nations estimate of approximately 4%. Government expenditure on health was US$20 (PPP) per person in 2006 and 10.9% of total government expenditure in 2006. There was only around 1 physician for every 20,000 persons in 2009.
Education
Public education in the Central African Republic is free and is compulsory from ages 6 to 14. However, approximately half of the adult population of the country is illiterate. The two institutions of higher education in the Central African Republic are the University of Bangui, a public university located in Bangui, which includes a medical school; and Euclid University, an international university.
Economy
The per capita income of the Republic is often listed as being approximately $400 a year, one of the lowest in the world, but this figure is based mostly on reported sales of exports and largely ignores the unregistered sale of foods, locally produced alcoholic beverages, diamonds, ivory, bushmeat, and traditional medicine.
The currency of the Central African Republic is the Central African CFA franc, which is shared with several other Central African states and trades at a fixed rate to the euro. Diamonds constitute the country's most important export, accounting for 40–55% of export revenues, but it is estimated that between 30% and 50% of those produced each year leave the country clandestinely.
On 27 April 2022, Bitcoin (BTC) was adopted as an additional legal tender. Lawmakers unanimously adopted a bill that made Bitcoin legal tender alongside the CFA franc and legalized the use of cryptocurrencies. According to his chief of staff, Obed Namsio, President Faustin-Archange Touadéra signed the measure into law.
Agriculture is dominated by the cultivation and sale of food crops such as cassava, peanuts, maize, sorghum, millet, sesame, and plantain. The annual growth rate of real GDP is slightly above 3%. The importance of food crops over exported cash crops is indicated by the fact that the total production of cassava, the staple food of most Central Africans, ranges between 200,000 and 300,000 tonnes a year, while the production of cotton, the principal exported cash crop, ranges from 25,000 to 45,000 tonnes a year. Food crops are not exported in large quantities, but still constitute the principal cash crops of the country because Central Africans derive far more income from the periodic sale of surplus food crops than from exported cash crops such as cotton or coffee. Much of the country is self-sufficient in food crops; however, livestock development is hindered by the presence of the tsetse fly.
The Republic's primary import partner is France (17.1%). Other imports come from the United States (12.3%), India (11.5%), and China (8.2%). Its largest export partner is France (31.2%), followed by Burundi (16.2%), China (12.5%), Cameroon (9.6%), and Austria (7.8%).
The Central African Republic is a member of the Organization for the Harmonization of Business Law in Africa (OHADA). In the 2009 World Bank Group's report Doing Business, it was ranked 183rd of 183 as regards 'ease of doing business', a composite index which takes into account regulations that enhance business activity and those that restrict it.
Infrastructure
Transportation
Bangui is the transport hub of the Central African Republic. As of 1999, eight roads connected the city to other main towns in the country, Cameroon, Chad and South Sudan; of these, only the toll roads are paved. During the rainy season from July to October, some roads are impassable.
River ferries sail from the river port at Bangui to Brazzaville and Zongo. The river can be navigated most of the year between Bangui and Brazzaville. From Brazzaville, goods are transported by rail to Pointe-Noire, Congo's Atlantic port. The river port handles the overwhelming majority of the country's international trade and has a cargo handling capacity of 350,000 tons; it has length of wharfs and of warehousing space.
Bangui M'Poko International Airport is Central African Republic's only international airport. As of June 2014 it had regularly scheduled direct flights to Brazzaville, Casablanca, Cotonou, Douala, Kinshasa, Lomé, Luanda, Malabo, N'Djamena, Paris, Pointe-Noire, and Yaoundé.
Since at least 2002 there have been plans to connect Bangui by rail to the Transcameroon Railway.
Energy
The Central African Republic primarily uses hydroelectricity, as there are few other low-cost resources for generating electricity. Access to electricity is very limited: 15.6% of the total population has access, with 34.6% coverage in urban areas and 1.5% in rural areas.
Communications
Presently, the Central African Republic has active television services, radio stations, internet service providers, and mobile phone carriers; Socatel is the leading provider for both internet and mobile phone access throughout the country. The primary governmental regulating body of telecommunications is the Ministère des Postes et Télécommunications et des Nouvelles Technologies. In addition, the Central African Republic receives international support on telecommunication-related operations from the ITU Telecommunication Development Sector (ITU-D) within the International Telecommunication Union to improve infrastructure.
Culture
Sports
Football is the country's most popular sport. The national football team is governed by the Central African Football Federation and stages matches at the Barthélemy Boganda Stadium.
Basketball also is popular and its national team won the African Championship twice and was the first Sub-Saharan African team to qualify for the Basketball World Cup, in 1974.
See also
Outline of the Central African Republic
List of Central African Republic–related topics
Notes
References
Citations
Bibliography
Balogh, Besenyo, Miletics, Vogel: La République Centrafricaine
Further reading
Doeden, Matt, Central African Republic in Pictures (Twentyfirst Century Books, 2009).
Petringa, Maria, Brazza, A Life for Africa (2006).
Titley, Brian, Dark Age: The Political Odyssey of Emperor Bokassa, 2002.
Woodfork, Jacqueline, Culture and Customs of the Central African Republic (Greenwood Press, 2006).
External links
Overviews
Country Profile from BBC News
Central African Republic. The World Factbook. Central Intelligence Agency.
Central African Republic from UCB Libraries GovPubs
Key Development Forecasts for the Central African Republic from International Futures
News
Central African Republic news headline links from AllAfrica.com
Other
Central African Republic at Humanitarian and Development Partnership Team (HDPT)
Johann Hari in Birao, Central African Republic. "Inside France's Secret War" from The Independent, 5 October 2007
French-speaking countries and territories
Landlocked countries
Least developed countries
Member states of the Organisation internationale de la Francophonie
Member states of the African Union
Member states of the United Nations
Republics
States and territories established in 1960
Central African countries
1960 establishments in the Central African Republic
Countries in Africa
Observer states of the Organisation of Islamic Cooperation |
5479 | https://en.wikipedia.org/wiki/History%20of%20the%20Central%20African%20Republic | History of the Central African Republic | The history of the Central African Republic is roughly composed of four distinct periods. The earliest period of settlement began around 10,000 years ago when nomadic people first began to settle, farm and fish in the region. The next period began several thousand years later, as new groups migrated into the region from other parts of the continent.
Early history
Approximately 10,000 years ago, desertification forced hunter-gatherer societies south into the Sahel regions of northern Central Africa, where some groups settled and began farming as part of the Neolithic Revolution. Initial farming of white yam progressed into millet and sorghum, and then later the domestication of African oil palm improved the groups' nutrition and allowed for expansion of the local populations. Bananas arrived in the region and added an important source of carbohydrates to the diet; they were also used in the production of alcohol. This Agricultural Revolution, combined with a "Fish-stew Revolution", in which fishing began to take place, and the use of boats, allowed for the transportation of goods. Products were often moved in ceramic pots, which are the first known examples of artistic expression from the region's inhabitants.
The Bouar Megaliths in the western region of the country indicate an advanced level of habitation dating back to the very late Neolithic Era (c. 3500–2700 BC). Ironworking arrived in the region by around 1000 BC, likely from early Bantu cultures in what is today southeast Nigeria and/or Cameroon. The site of Gbabiri (in the Central African Republic) has yielded evidence of iron metallurgy, from a reduction furnace and blacksmith workshop, with earliest dates of 896–773 BC and 907–796 BC respectively. Some earlier iron metallurgy dates of 2,000 BC from the site of Oboui (also in the Central African Republic) have also been proposed, but these are disputed by some archaeologists.
During the Bantu Migrations from about 1000 BC to AD 1000, Ubangian-speaking people spread eastward from Cameroon to Sudan, Bantu-speaking people settled in the southwestern regions of the CAR, and Central Sudanic-speaking people settled along the Ubangi River in what is today Central and East CAR.
Production of copper, salt, dried fish, and textiles dominated the economic trade in the Central African region.
The territory of the modern Central African Republic is known to have been settled from at least the 7th century on by overlapping empires, including the Kanem-Bornu, Ouaddai, Baguirmi, and Darfur groups based in the Lake Chad region and along the Upper Nile.
Early modern history
During the 16th and 17th centuries Muslim slave traders began to raid the region and their captives were shipped to the Mediterranean coast, Europe, Arabia, the Western Hemisphere, or to the slave ports and factories along the West African coast. The Bobangi people became major slave traders and sold their captives to the Americas using the Ubangi river to reach the coast. During the 18th century Bandia-Nzakara peoples established the Bangassou Kingdom along the Ubangi river.
Population migration in the 18th and 19th centuries brought new migrants into the area, including the Zande, Banda, and Baya-Mandjia.
Colonial period
The European occupation of Central African territory began in the late 19th century during the Scramble for Africa. Count Savorgnan de Brazza established the French Congo and sent expeditions up the Ubangi River from Brazzaville in an effort to expand France's claims to territory in Central Africa. Belgium, Germany, and the United Kingdom also competed to establish their claims to territory in the region. In 1875, the Sudanese sultan Rabih az-Zubayr governed Upper-Oubangui, which included present-day Central African Republic. Europeans, primarily the French, Germans, and Belgians, arrived in the area in 1885.
The French asserted their legal claim to the area through an 1887 convention with the Congo Free State (privately owned by Leopold II of Belgium), which accepted French possession of the right bank of the Oubangui River. In 1889, the French established a post on the Ubangi River at Bangui. In 1890–91, de Brazza sent expeditions up the Sangha River, in what is now south-western CAR, up the center of the Ubangi basin toward Lake Chad, and eastward along the Ubangi River toward the Nile, with the intention of expanding the borders of the French Congo to link up the other French territories in Africa. In 1894, the French Congo's borders with Leopold II of Belgium's Congo Free State and German Cameroon were fixed by diplomatic agreements, and France declared Ubangi-Shari to be a French territory.
Consolidation
In 1899, the French Congo's border with Sudan was fixed along the Congo-Nile divide. This situation left France without her much coveted outlet on the Nile.
In 1900, the French defeated the forces of Rabih at the Battle of Kousséri, but they did not consolidate their control over Ubangi-Shari until 1903, when they established colonial administration throughout the territory.
Once European negotiators had agreed upon the borders of the French Congo, France had to decide how to pay for the costly occupation, administration, and development of the territory it had acquired. The reported financial successes of Leopold II's concessionary companies in the Congo Free State convinced the French government to grant 17 private companies large concessions in the Ubangi-Shari region in 1899. In return for the right to exploit these lands by buying local products and selling European goods, the companies promised to pay rent to France and to promote the development of their concessions. The companies employed European and African agents who frequently used brutal methods to force the natives to labor.
At the same time, the French colonial administration began to force the local population to pay taxes and to provide the state with free labor. The companies and the French administration at times collaborated in forcing the Central Africans to work for them. Some French officials reported abuses committed by private company militias, and their own colonial colleagues and troops, but efforts to hold these people accountable almost invariably failed. When any news of atrocities committed against Central Africans reached France and caused an outcry, investigations were undertaken and some feeble attempts at reform were made, but the situation on the ground in Ubangi-Shari remained virtually unchanged.
In 1906, the Ubangi-Shari territory was united with the Chad colony; in 1910, it became one of the four territories of the Federation of French Equatorial Africa (AEF), along with Chad, Middle Congo, and Gabon.
During the first decade of French colonial rule, from about 1900 to 1910, the rulers of the Ubangi-Shari region increased both their slave-raiding activities and the selling of local produce to Europe. They took advantage of their treaties with the French to procure more weapons, which were used to capture more slaves: much of the eastern half of Ubangi-Shari was depopulated as a result of slave-trading by local rulers during the first decade of colonial rule. After the power of local African rulers was destroyed by the French, slave raiding greatly diminished.
In 1911, the Sangha and Lobaye basins were ceded to Germany as part of an agreement which gave France a free hand in Morocco. Western Ubangi-Shari remained under German rule until World War I, after which France again annexed the territory using Central African troops.
The next thirty years were marked by mostly small scale revolts against French rule and the development of a plantation-style economy. From 1920 to 1930, a network of roads was built, cash crops were promoted and mobile health services were formed to combat sleeping sickness. Protestant missions were established in different parts of the country. New forms of forced labor were also introduced, however, as the French conscripted large numbers of Ubangians to work on the Congo-Ocean Railway; many of these recruits died of exhaustion and illness as a result of the poor conditions.
In 1925, the French writer André Gide published Voyage au Congo, in which he described the alarming consequences of conscription for the Congo-Ocean railroad. He exposed the continuing atrocities committed against Central Africans in Western Ubangi-Shari by such employers as the Forestry Company of Sangha-Ubangi. In 1928, a major insurrection, the Kongo-Wara rebellion or 'war of the hoe handle', broke out in Western Ubangi-Shari and continued for several years. The extent of this insurrection, which was perhaps the largest anti-colonial rebellion in Africa during the interwar years, was carefully hidden from the French public because it provided evidence of strong opposition to French colonial rule and forced labor.
Resistance
Although there were numerous smaller revolts, the largest was the Kongo-Wara rebellion. Peaceful opposition to recruitment for railway construction and rubber tapping, and to mistreatment by European concessionary companies, began in the mid-1920s, but these efforts descended into violence in 1928, when over 350,000 natives rebelled against the colonial administration. Although the primary opposition leader, Karnou, was killed in December 1928, the rebellion was not fully suppressed until 1931.
Growing economy and World War II
During the 1930s, cotton, tea, and coffee emerged as important cash crops in Ubangi-Shari and the mining of diamonds and gold began in earnest. Several cotton companies were granted purchasing monopolies over large areas of cotton production and were able to fix the prices paid to cultivators, which assured profits for their shareholders.
In August 1940, during the Second World War, the territory responded, with the rest of the AEF, to the call from General Charles de Gaulle to fight for Free France, and in September 1940 pro-Gaullist French officers took control of Ubangi-Shari.
Post-war transition to independence
After World War II, the French Constitution of 1946 inaugurated the first of a series of reforms that led eventually to complete independence for all French territories in western and equatorial Africa. In 1946, all AEF inhabitants were granted French citizenship and allowed to establish local assemblies. The assembly in CAR was led by Barthélemy Boganda, a Catholic priest who also was known for his forthright statements in the French Assembly on the need for African emancipation. In 1956, French legislation eliminated certain voting inequalities and provided for the creation of some organs of self-government in each territory.
The French constitutional referendum of September 1958 dissolved the AEF, and on 1 December of the same year the Assembly declared the birth of the autonomous Central African Republic with Boganda as head of government. Boganda ruled until his death in a plane crash on 29 March 1959. His cousin, David Dacko, replaced him as head of Government. On 12 July 1960 France agreed to the Central African Republic becoming fully independent. On 13 August 1960 the Central African Republic became an independent country and David Dacko became its first President.
Independence
David Dacko began to consolidate his power soon after taking office in 1960. He amended the Constitution to transform his regime into a one-party state with a strong presidency elected for a term of seven years. On 5 January 1964, Dacko was elected in an election in which he ran alone.
During his first term as president, Dacko significantly increased diamond production in the Central African Republic by eliminating the monopoly on mining held by concessionary companies and decreeing that any Central African could dig for diamonds. He also succeeded in having a diamond-cutting factory built in Bangui. Dacko encouraged the rapid "Centralafricanization" of the country's administration, which was accompanied by growing corruption and inefficiency, and he expanded the number of civil servants, which greatly increased the portion of the national budget needed to pay salaries.
Dacko was torn between his need to retain the support of France and his need to show that he was not subservient to France. In order to cultivate alternative sources of support and display his independence in foreign policy, he cultivated closer relations with the People's Republic of China. By 1965, Dacko had lost the support of most Central Africans and may have been planning to resign from the presidency when he was overthrown.
Bokassa and the Central African Empire
On 1 January 1966, following a swift and almost bloodless overnight coup, Colonel Jean-Bédel Bokassa assumed power as president of the Republic. Bokassa abolished the constitution of 1959, dissolved the National Assembly, and issued a decree that placed all legislative and executive powers in the hands of the president. On 4 March 1972, Bokassa's presidency was extended to a life term. On 4 December 1976, the republic became a monarchy – the Central African Empire – with the promulgation of the imperial constitution and the coronation of the president as Emperor Bokassa I. His authoritarian regime was characterized by numerous human rights violations.
On 20 September 1979, Dacko, with French support, overthrew Bokassa in a bloodless coup.
Kolingba
Dacko's efforts to promote economic and political reforms proved ineffectual, and on 1 September 1981, he in turn was overthrown in a bloodless coup by General André Kolingba. Kolingba suspended the constitution and ruled with a military junta, the Military Committee for National Recovery (CMRN), for four years.
In 1985, the CMRN was dissolved, and Kolingba named a new cabinet with increased civilian participation, signaling the start of a return to civilian rule. The process of democratization quickened in 1986 with the creation of a new political party, the Rassemblement Démocratique Centrafricain (RDC), and the drafting of a new constitution that subsequently was ratified in a national referendum. General Kolingba was sworn in as constitutional President on 29 November 1986. The constitution established a National Assembly made up of 52 elected deputies, elected in July 1987. Municipal elections were held in 1988. Kolingba's two major political opponents, Abel Goumba and Ange-Félix Patassé, boycotted these elections because their parties were not allowed to participate.
By 1990, inspired by the fall of the Berlin Wall, a pro-democracy movement became very active. In May 1990, a letter signed by 253 prominent citizens asked for the convocation of a National Conference. Kolingba refused this request and instead detained several opponents. Pressure from a group of locally represented countries and agencies called GIBAFOR (Groupe informel des bailleurs de fonds et représentants résidents), comprising the United States, France, Japan, Germany, the EU, the World Bank and the UN, finally led Kolingba to agree, in principle, to hold free elections in October 1992.
Alleging irregularities, Kolingba opted to suspend the results of the elections and held on to power. GIBAFOR applied intense pressure on him to establish a Provisional National Political Council (Conseil National Politique Provisoire de la République / CNPPR) and to set up a "Mixed Electoral Commission", which included representatives from all political parties.
Patassé
When elections were finally held in 1993, again with the help of the international community and the UN Electoral Assistance Unit, Ange-Félix Patassé led in the first round and Kolingba came in fourth behind Abel Goumba and David Dacko. In the second round, Patassé won 53% of the vote while Goumba won 45.6%. Most of Patassé's support came from Gbaya, Kare, and Kaba voters in seven heavily populated prefectures in the northwest while Goumba's support came largely from ten less-populated prefectures in the south and east. Patassé's party, the Mouvement pour la Libération du Peuple Centrafricain (MLPC) or Movement for the Liberation of the Central African People, gained a plurality but not an absolute majority of seats in parliament, which meant it required coalition partners to rule effectively.
Patassé relieved former President Kolingba of his military rank of General in March 1994 and then charged several former ministers with various crimes. Patassé also removed many Yakoma from important, lucrative posts in the government. Two hundred predominantly Yakoma members of the presidential guard were also dismissed or reassigned to the army. Kolingba's RDC loudly proclaimed that Patassé's government was conducting a "witch hunt" against the Yakoma.
A new constitution was approved on 28 December 1994 and promulgated on 14 January 1995, but this constitution, like those before it, did not have much impact on the country's politics. In 1996–1997, reflecting steadily decreasing public confidence in the government's erratic behaviour, three mutinies against Patassé's administration were accompanied by widespread destruction of property and heightened ethnic tension.
On 25 January 1997, the Bangui Agreements, which provided for the deployment of an inter-African military mission, the Mission Interafricaine de Surveillance des Accords de Bangui (MISAB), were signed. Mali's former president, Amadou Touré, served as chief mediator and brokered the entry of ex-mutineers into the government on 7 April 1997. The MISAB mission was later replaced by a U.N. peacekeeping force, the Mission des Nations Unies en RCA (MINURCA).
In 1998, parliamentary elections resulted in Kolingba's RDC winning 20 out of 109 seats, constituting a significant political comeback. In 1999, however, Patassé won free elections to become president for a second term, despite widespread public anger in urban centres over his rule.
Bozizé
On 28 May 2001, rebels stormed strategic buildings in Bangui in an unsuccessful coup attempt. The army chief of staff, Abel Abrou, and General François N'Djadder Bedaya were killed, but Patassé retained power with the assistance of troops from Libya and rebel FLC soldiers from the DRC led by Jean-Pierre Bemba.
In the aftermath of the failed coup, militias loyal to Patassé sought revenge against rebels in many neighborhoods of the capital, Bangui. They incited unrest which resulted in the destruction of homes as well as the torture and murder of opponents.
Patassé came to suspect that General François Bozizé was involved in another coup attempt against him, which led Bozizé to flee with loyal troops to Chad. In March 2003, Bozizé launched a surprise attack against Patassé, who was out of the country. This time, Libyan troops and some 1,000 soldiers of Bemba's Congolese rebel organization failed to stop the rebels, who took control of the country and thus succeeded in overthrowing Patassé. On 15 March 2003, the rebels moved into Bangui and installed their leader, Bozizé, as president.
Patassé was found guilty of major crimes in Bangui. The CAR brought a case against him and Jean-Pierre Bemba before the International Criminal Court, accusing them both of multiple crimes in suppressing one of the mutinies against Patassé.
Bozizé won the 2005 presidential election, and his coalition came out ahead in the 2005 legislative election.
2003–2007: Bush War
After Bozizé seized power in 2003, the Central African Republic Bush War began with the rebellion by the Union of Democratic Forces for Unity (UFDR), led by Michel Djotodia. This quickly escalated into major fighting during 2004. The UFDR rebel forces were allied with several other groups: the Groupe d'action patriotique pour la liberation de Centrafrique (GAPLC), the Convention of Patriots for Justice and Peace (CPJP), the People's Army for the Restoration of Democracy (APRD), the Movement of Central African Liberators for Justice (MLCJ), and the Front démocratique Centrafricain (FDC).
In early 2006, Bozizé's government appeared stable.
On 13 April 2007, a peace agreement between the government and the UFDR was signed in Birao. The agreement provided for an amnesty for the UFDR, its recognition as a political party, and the integration of its fighters into the army. Further negotiations resulted in an agreement in 2008 for reconciliation, a unity government, and local elections in 2009 and parliamentary and presidential elections in 2010. The new unity government that resulted was formed in January 2009.
2012–2014: Civil War
In late 2012, a coalition of old rebel groups, under the new name of Séléka, renewed fighting. Two other, previously unknown groups, the Alliance for Revival and Rebuilding (A2R) and the Patriotic Convention for Saving the Country (CPSK), also joined the coalition, as did the Chadian group FPR.
On 27 December 2012, CAR President François Bozizé requested international assistance to help with the rebellion, in particular from France and the United States. French President François Hollande rejected the plea, saying that the 250 French troops stationed at Bangui M'Poko International Airport were there "in no way to intervene in the internal affairs".
On 11 January 2013, a ceasefire agreement was signed in Libreville, Gabon. The rebels dropped their demand for President François Bozizé to resign, but he had to appoint a new prime minister from the opposition party by 18 January 2013. On 13 January, Bozizé signed a decree that removed Prime Minister Faustin-Archange Touadéra from power, as part of the agreement with the rebel coalition. On 17 January, Nicolas Tiangaye was appointed Prime Minister.
On 24 March 2013, rebel forces heavily attacked the capital Bangui and took control of major structures, including the presidential palace. Bozizé's family fled across the river to the Democratic Republic of the Congo and then to Yaoundé, the capital of Cameroon, where he was granted temporary refuge.
Djotodia
Séléka leader Michel Djotodia declared himself President. Djotodia said that there would be a three-year transitional period and that Tiangaye would continue to serve as Prime Minister. Djotodia promptly suspended the constitution and dissolved the government, as well as the National Assembly. He then reappointed Tiangaye as Prime Minister on 27 March 2013. Top military and police officers met with Djotodia and recognized him as president on 28 March 2013. Catherine Samba-Panza assumed the office of interim president on 23 January 2014.
Peacekeeping largely transitioned from the Economic Community of Central African States-led MICOPAX to the African Union-led MISCA, which was deployed in December 2013. In September 2014, MISCA transferred its authority to the UN-led MINUSCA while the French peacekeeping mission was known as Operation Sangaris.
2015–present: Civil War
By 2015, there was little government control outside of the capital, Bangui. The dissolution of Séléka led to ex-Séléka fighters forming new militias that often fought each other.
Armed entrepreneurs had carved out personal fiefdoms in which they set up checkpoints, collected illegal taxes, and took in millions of dollars from the illicit coffee, mineral, and timber trades. Noureddine Adam, the leader of the rebel group Popular Front for the Rebirth of the Central African Republic (FPRC), declared the autonomous Republic of Logone on 14 December 2015. By 2017, more than 14 armed groups vied for territory, and about 60% of the country's territory was controlled by four notable factions led by ex-Séléka leaders, including the FPRC led by Adam, the Union pour la Paix en Centrafrique (UPC) led by Ali Darassa, and the Mouvement patriotique pour la Centrafrique (MPC) led by Mahamat Al-Khatim. The factions have been described as ethnic in nature, with the FPRC associated with the Gula and Runga people and the UPC associated with the Fulani. With the de facto partition of the country between ex-Séléka militias in the north and east and Anti-balaka militias in the south and west, hostilities between the two sides decreased, but sporadic fighting continued.
In February 2016, after a peaceful election, the former Prime Minister Faustin-Archange Touadéra was elected president. In October 2016, France announced that Operation Sangaris, its peacekeeping mission in the country, was a success and largely withdrew its troops.
In November 2016, tensions between ex-Séléka militias erupted into fighting over control of a goldmine, when a coalition formed by the MPC and the FPRC (incorporating elements of their former enemy, the Anti-balaka) attacked the UPC.
Conflict in Ouaka
Most of the fighting took place in the centrally located Ouaka prefecture, home to the country's second-largest city, Bambari, because of the prefecture's strategic location between the Muslim and Christian regions of the country and its wealth. The fight for Bambari in early 2017 displaced 20,000 people. MINUSCA made a robust deployment to prevent the FPRC from taking the city. In February 2017, Joseph Zoundeiko, the chief of staff of the FPRC, was killed by MINUSCA after crossing one of the red lines. At the same time, MINUSCA negotiated the removal of Darassa from the city. This led the UPC to seek new territory, spreading the fighting from urban areas to rural areas previously spared.
The thinly spread MINUSCA relied on Ugandan and American special forces, present as part of a campaign to eliminate the Lord's Resistance Army, to keep the peace in the southeast, but that campaign ended in April 2017. By the latter half of 2017, the fighting had largely shifted to the southeast, where the UPC reorganized and was pursued by the FPRC and Anti-balaka, with a level of violence matched only by the early stages of the war. About 15,000 people fled their homes in an attack in May, and six U.N. peacekeepers were killed, making it the deadliest month for the mission yet.
In June 2017, another ceasefire was signed in Rome by the government and 14 armed groups including FPRC but the next day fighting between an FPRC faction and Anti-balaka militias killed more than 100 people. In October 2017, another ceasefire was signed between the UPC, the FPRC, and Anti-balaka groups. The FPRC announced Ali Darassa as coalition vice-president but fighting continued afterward. By July 2018, the FPRC, now headed by Abdoulaye Hissène and based in the northeastern town of Ndélé, had troops threatening to move onto Bangui. Further clashes between the UPC and MINUSCA/government forces occurred early in 2019.
Conflicts in Western and Northwestern CAR
In Western CAR, a new rebel group called Return, Reclamation, Rehabilitation (3R), with no known links to Séléka or Anti-balaka, formed in 2015. Self-proclaimed General Sidiki Abass claimed 3R would protect Muslim Fulani people from an Anti-balaka militia led by Abbas Rafal. 3R is accused of displacing 17,000 people in November 2016 and at least 30,000 people in the Ouham-Pendé prefecture in December 2016.
For some time, Northwestern CAR, around Paoua, was divided between Revolution and Justice (RJ) and the Movement for the Liberation of the Central African Republic (MNLC), but fighting erupted after the killing of RJ leader Clément Bélanga in November 2017. The conflict has displaced 60,000 people since December 2017. The MNLC, founded in October 2017, was led by Ahamat Bahar, a former member and co-founder of the FPRC and MRC, and was allegedly backed by Fulani fighters from Chad. The Christian militant group RJ was formed in 2013, mostly by members of the presidential guard of former President Ange-Félix Patassé, and was composed mainly of ethnic Sara-Kaba.
2020s
In December 2020, President Faustin Archange Touadéra was reelected in the first round of the presidential election. The opposition did not accept the result because of allegations of fraud and irregularities.
Russian mercenaries from the Wagner Group have supported President Faustin-Archange Touadéra in the fight against rebels; the group has been accused of harassing and intimidating civilians.
See also
Brazzaville Conference
French Equatorial Africa
History of Central Africa
Ubangi-Shari
5486 | https://en.wikipedia.org/wiki/Central%20African%20Armed%20Forces | Central African Armed Forces | The Central African Armed Forces (Forces armées centrafricaines; FACA) are the armed forces of the Central African Republic and have been barely functional since the outbreak of the civil war in 2012. Today they are among the world's weakest armed forces, dependent on international support to provide security in the country. In recent years the government has struggled to form a unified national army. It consists of the Ground Force (which includes the air service), the gendarmerie, and the National Police.
The army's disloyalty to the president came to the fore during the mutinies of 1996–1997, and it has faced internal problems since then. It has been strongly criticised by human rights organisations for acts of terror, including killings, torture and sexual violence. In 2013, when militants of the Séléka rebel coalition seized power and overthrew President Bozizé, they executed many FACA troops.
History
Role of military in domestic politics
The military has played an important role in the history of the Central African Republic. The immediate former president, General François Bozizé, was a former army chief-of-staff, and his government included several high-level military officers. Among the country's five presidents since independence in 1960, three have been former army chiefs-of-staff who took power through coups d'état. No president with a military background has, however, ever been succeeded by a new military president.
The country's first president, David Dacko was overthrown by his army chief-of-staff, Jean-Bédel Bokassa in 1966. Following the ousting of Bokassa in 1979, David Dacko was restored to power, only to be overthrown once again in 1981 by his new army chief of staff, General André Kolingba.
In 1993, Ange-Félix Patassé became the Central African Republic's first elected president. He soon became unpopular within the army, resulting in violent mutinies in 1996–1997. In May 2001, there was an unsuccessful coup attempt by Kolingba and once again Patassé had to turn to friends abroad for support, this time Libya and DR Congo. Some months later, at the end of October, Patassé sacked his army chief-of-staff, François Bozizé, and attempted to arrest him. Bozizé then fled to Chad and gathered a group of rebels. In 2002, he seized Bangui for a short period, and in March 2003 took power in a coup d'état.
Importance of ethnicity
When General Kolingba became president in 1981, he implemented an ethnicity-based recruitment policy for the administration. Kolingba was a member of the Yakoma people from the south of the country, which made up approximately 5% of the total population. During his rule, members of Yakoma were granted all key positions in the administration and made up a majority of the military. This later had disastrous consequences when Kolingba was replaced by a member of a northerner tribe, Ange-Félix Patassé.
Army mutinies of 1996–1997
Soon after the 1993 election, Patassé became unpopular within the army, not least because of his inability to pay their wages (partly due to economic mismanagement and partly because France suddenly ended its economic support for the soldiers' wages). Another source of irritation was that most of FACA consisted of soldiers from Kolingba's ethnic group, the Yakoma. During Patassé's rule they had become increasingly marginalised, while he created militias favouring his own Gbaya tribe, as well as the neighbouring Sara and Kaba. This resulted in army mutinies in 1996–1997, in which factions of the military clashed with the presidential guard, the Unité de sécurité présidentielle (USP), and militias loyal to Patassé.
On April 18, 1996, between 200 and 300 soldiers mutinied, claiming that they had not received their wages since 1992–1993. The confrontations between the soldiers and the presidential guard resulted in 9 dead and 40 wounded. French forces provided support (Operation Almandin I) and acted as negotiators. The unrest ended when the soldiers were finally paid their wages by France and the President agreed not to start legal proceedings against them.
On May 18, 1996, a second mutiny was led by 500 soldiers who refused to be disarmed, denouncing the agreement reached in April. French forces were once again called to Bangui (Operation Almandin II), supported by the militaries of Chad and Gabon. 3,500 foreigners were evacuated during the unrest, which left 43 persons dead and 238 wounded.
On May 26, a peace agreement was signed between France and the mutineers. The latter were promised amnesty, and were allowed to retain their weapons. Their security was ensured by the French military.
On November 15, 1996, a third mutiny took place, and 1,500 French soldiers were flown in to ensure the safety of foreigners. The mutineers demanded the discharge of the president.
On 6 December, a negotiation process started, facilitated by Gabon, Burkina Faso, Chad and Mali. The military, supported by the opposition parties, insisted that Patassé had to resign. In January 1997, however, the Bangui Agreements were signed, and the French EFAO troops were replaced by the 1,350 soldiers of the Mission interafricaine de surveillance des Accords de Bangui (MISAB). In March, all mutineers were granted amnesty. Fighting between MISAB and the mutineers continued, with a large offensive in June resulting in up to 200 casualties. After this final clash, the mutineers calmed.
After the mutinies, President Patassé suffered from a typical "dictator's paranoia", resulting in a period of cruel terror executed by the presidential guard and various militias within the FACA loyal to the president, such as the Karako. The violence was directed against the Yakoma tribe, of which it is estimated that 20,000 persons fled during this period. The oppression also targeted other parts of the society. The president accused his former ally France of supporting his enemies and sought new international ties. When he strengthened his presidential guard (creating the FORSIDIR, see below), Libya sent him 300 additional soldiers for his own personal safety. When former President Kolingba attempted a coup d'état in 2001 (which was, according to Patassé, supported by France), the Movement for the Liberation of the Congo (MLC) of Jean-Pierre Bemba in DR Congo came to his rescue.
Crimes committed by Patassé's militias and Congolese soldiers during this period are now being investigated by the International Criminal Court, which wrote that "sexual violence appears to have been a central feature of the conflict", having identified more than 600 rape victims.
Present situation
The FACA has been dominated by soldiers from the Yakoma ethnic group since the time of Kolingba. It has hence been considered disloyal by the two northern presidents Patassé and Bozizé, both of whom have equipped and run their own militias outside FACA. The military also proved its disloyalty during the mutinies in 1996–1997. Although François Bozizé had a background in FACA himself (he was its chief-of-staff from 1997 to 2001), he was cautious, retaining the defence portfolio and appointing his son Jean-Francis Bozizé cabinet director in charge of running the Ministry of Defence. He kept his old friend General Antoine Gambi as Chief of Staff. After the failure to curb deepening unrest in the northern part of the country, Gambi was replaced in July 2006 with Bozizé's old friend from the military academy, Jules Bernard Ouandé.
Military's relations with the society
The forces assisting Bozizé in seizing power in 2003 were not paid what they were promised and started looting, terrorising and killing ordinary citizens. Summary executions took place with the implicit approval of the government. The situation has deteriorated since early 2006, and the regular army and the presidential guard regularly commit extortion, torture, killings and other human rights violations. The national judicial system has no capacity to investigate these cases. At the end of 2006, there were an estimated 150,000 internally displaced people in CAR. During a UN mission in the northern part of the country in November 2006, the mission met with a prefect who said that he could not maintain law and order over the military and the presidential guards. The FACA currently conducts summary executions and burns houses. On the route between Kaga-Bandoro and Ouandago some 2,000 houses have been burnt, leaving an estimated 10,000 persons homeless.
Reform of the army
Both the Multinational Force in the Central African Republic (FOMUC) and France are assisting in the current reform of the army. One of the key priorities of the reform is to make the military more ethnically diverse. It should also integrate Bozizé's own rebel group (mainly consisting of members of his own Gbaya tribe). Many of the Yakoma soldiers who left the country after the mutinies in 1996–1997 have now returned and must also be reintegrated into the army. At the same time, BONUCA holds seminars on topics such as the relationship between the military and the civil parts of society. In 2018 Russia sent mercenaries to help train and equip the CAR military, and by 2020 Russia had increased its influence in the region.
Army equipment
Most of the army's heavy weapons and equipment were destroyed or captured by Séléka militants during the 2012–2014 civil war. In the immediate aftermath of the war, the army was only in possession of 70 rifles. The majority of its arsenals were plundered during the fighting by the Séléka coalition and other armed groups. Thousands of the army's small arms were also distributed to civilian supporters of former President Bozizé in 2013. Prior to 2014, the army's stocks of arms and ammunition were primarily of French, Soviet, and Chinese origin.
In 2018, the army's equipment stockpiles were partly revitalized by a donation of 900 pistols, 5,200 rifles, and 270 unspecified rocket launchers from Russia.
Small arms
Anti-tank weapons
Mortars
Vehicles
Scout cars
Infantry fighting vehicles
Armored personnel carriers
Utility vehicles
Foreign military presence in support of the Government
Peacekeeping and peace enforcing forces
Since the mutinies, a number of peacekeeping and peace enforcing international missions have been present in the Central African Republic. There has been discussion of the deployment of a regional United Nations (UN) peacekeeping force in both Chad and the Central African Republic, in order to potentially shore up the ineffectual Darfur Peace Agreement. Several such missions have been deployed in the country during the last 10 years.
Chad
In addition to the multilateral forces, CAR has received bilateral support from other African countries, such as the Libyan and Congolese assistance to Patassé mentioned above. Bozizé is in many ways dependent on Chad's support. Chad has an interest in CAR, since it needs to ensure calm close to its oil fields and the pipeline leading to the Cameroonian coast, which lie near CAR's troubled northwest. Before seizing power, Bozizé built up his rebel force in Chad, trained and augmented by the Chadian military. Chadian President Déby assisted him actively in taking power in March 2003 (his rebel forces included 100 Chadian soldiers). After the coup, Chad provided another 400 soldiers. Current direct support includes 150 non-FOMUC Chadian troops that patrol the border area near Goré, a contingent of soldiers in Bangui, and troops within the presidential guard. The CEMAC Force includes 121 Chadian soldiers.
France
There has been an almost uninterrupted French military presence in Central African Republic since independence, regulated through agreements between the two Governments. French troops were allowed to be based in the country and to intervene in cases of destabilisation. This was particularly important during the cold war era, when Francophone Africa was regarded as a natural French sphere of influence.
Additionally, the strategic location of the country made it a more interesting location for military bases than its neighbours, and Bouar and Bangui were hence two of the most important French bases abroad.
However, in 1997, following Lionel Jospin's formulation "Neither interference nor indifference", France adopted new strategic principles for its presence in Africa. These included a reduced permanent presence on the continent and increased support for multilateral interventions. In the Central African Republic, the Bouar base and the Béal Camp (at that time home to 1,400 French soldiers) in Bangui were shut down, as France concentrated its African presence at Abidjan, Dakar, Djibouti, Libreville and N'Djamena and on the deployment of a Force d'action rapide based in France.
However, due to the situation in the country, France has retained a military presence. During the mutinies, 2,400 French soldiers patrolled the streets of Bangui. Their official task was to evacuate foreign citizens, but this did not prevent direct confrontations with the mutineers (resulting in French and mutineer casualties). The level of French involvement resulted in protests among the Central African population, since many sided with the mutineers and accused France of defending a dictator against the people's will. Criticism was also heard in France, where some blamed their country for its protection of a discredited ruler, totally incapable of exerting power and managing the country. After the mutinies in 1997, the MISAB became a multilateral force, but it was armed, equipped, trained and managed by France. The Chadian, Gabonese and Congolese troops of the current Force multinationale en Centrafrique (FOMUC) mission in the country also enjoy logistical support from French soldiers.
A study carried out by the US Congressional Research Service revealed that France has again increased its arms sales to Africa, and that during the 1998–2005 period it was the leading supplier of arms to the continent.
Components and units
Air Force
The Air Force is almost inoperable. Lack of funding has almost grounded it, apart from an AS 350 Écureuil delivered in 1987. Mirage F1 planes from the French Air Force regularly patrolled troubled regions of the country and also participated in direct confrontations until they were withdrawn and retired in 2014. According to some sources, Bozizé used the money he got from the mining concession in Bakouma to buy two old Mi-8 helicopters from Ukraine and one Lockheed C-130 Hercules, built in the 1950s, from the US. In late 2019 Serbia offered two new Soko J-22 Orao attack aircraft to the CAR Air Force, but it is unknown whether the offer was accepted. The air force otherwise operates seven light aircraft, including a single helicopter.
Garde républicaine (GR)
The Presidential Guard (garde présidentielle) or Republican Guard is officially part of FACA, but it is often regarded as a separate entity under the direct command of the President. Since 2010 the Guard has received training from South Africa and Sudan, with Belgium and Germany providing support. The GR consists of so-called patriots that fought for Bozizé when he seized power in 2003 (mainly from the Gbaya tribe), together with soldiers from Chad. They are guilty of numerous assaults on the civilian population, including terror, aggression and sexual violence. Only a couple of months after Bozizé's seizure of power, in May 2003, taxi and truck drivers went on strike against these outrages. However, post-civil war leaders have been cautious about attempting significant reform of the Republican Guard.
New amphibious force
Bozizé has created an amphibious force. It is called the Second Battalion of the Ground Forces and it patrols the Ubangi river. The staff of the sixth region in Bouali (mainly made up of members of the former president's guard) was transferred to the city of Mongoumba, located on the river. This city had previously been plundered by forces from the MLC that had crossed the CAR/Congo border. The riverine patrol force has approximately one hundred personnel and operates seven patrol boats.
Veteran Soldiers
A program for disarmament and reintegration of veteran soldiers is currently taking place. A national commission for the disarmament, demobilisation and reintegration was put in place in September 2004. The commission is in charge of implementing a program wherein approximately 7,500 veteran soldiers will be reintegrated in civil life and obtain education.
Discontinued groups and units that are no longer part of FACA
Séléka rebels: the French documentary Spécial investigation: Centrafrique, au cœur du chaos portrays Séléka rebels as mercenaries under the command of the president. In the documentary the Séléka fighters appear to use a large number of M16 rifles in their fight against the Anti-balaka forces.
FORSIDIR: The presidential guard, the Unité de sécurité présidentielle (USP), was transformed in March 1998 into the Force spéciale de défense des institutions républicaines (FORSIDIR). In contrast to the army, which consisted mainly of southern Yakoma members and was therefore considered unreliable by the northern president, this unit consisted of northerners loyal to the president. Before eventually being dissolved in January 2000, this highly controversial group became feared for its terror and troubled Patassé's relations with important international partners, such as France. Of its 1,400 staff, 800 were subsequently reintegrated into FACA, under the command of the chief-of-staff. The remaining 400 recreated the USP (once again under the command of the chief-of-staff).
Unité de sécurité présidentielle (USP): USP was Patassé's presidential guard before and after FORSIDIR. When he was overthrown by Bozizé in 2003, the USP was dissolved and while some of the soldiers have been absorbed by FACA, others are believed to have joined the pro-Patassé Democratic Front of the Central African People rebel group that is fighting FACA in the north of the country.
The Patriots or Liberators: Accompanied Bozizé when he seized power in March 2003. They are now a part of Bozizé's lifeguard, the Garde républicaine, together with soldiers from Chad.
Office central de répression du banditisme (OCRB): OCRB was a special unit within the police created to fight the looting after the army mutinies in 1996 and 1997. OCRB has committed numerous summary executions and arbitrary detentions, for which it has never been put on trial.
MLPC Militia: Le Mouvement de libération du peuple centrafricain (MLPC) was the armed component of former president Patassé's political party. The MLPC's militia was already active during the 1993 election, but was strengthened during the 1996 and 1997 mutinies, particularly through its Karako contingent. Its core consisted of Sara people from Chad and the Central African Republic, but during the mutinies it recruited many young people in Bangui.
RDC Militia: The Rassemblement démocratique centrafricain (RDC) is the militia of the party of General Kolingba, who led the country during the 1980s. The RDC's militia is said to have camps in Mobaye and to have bonds with former officials of Kolingba's "cousin" Mobutu Sese Seko in DR Congo.
External links
'France donates equipment to CAR,' Jane's Defence Weekly, 28 January 2004, p. 20. The first of three planned battalions of the new army completed training and graduated on 15 January [2004]. See also JDW 12 November 2003.
Africa Research Bulletin: Political, Social and Cultural Series, Volume 43 Issue 12, Pages 16909A – 16910A, Published Online: 26 January 2007: Operation Boali, French aid mission to FACA
CIA World Factbook
US Department of State – Bureau of African Affairs: Background note
"Spécial investigation: Centrafrique, au cœur du chaos" Giraf Prod 13 jan 2014
5488 | https://en.wikipedia.org/wiki/Chad | Chad | Chad, officially the Republic of Chad, is a landlocked country at the crossroads of North and Central Africa. It is bordered by Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon to the southwest, Nigeria to the southwest (at Lake Chad), and Niger to the west. Chad has a population of 16 million, of which 1.6 million live in the capital and largest city of N'Djamena.
Chad has several regions: a desert zone in the north, an arid Sahelian belt in the centre and a more fertile Sudanian Savanna zone in the south. Lake Chad, after which the country is named, is the second-largest wetland in Africa. Chad's official languages are Arabic and French. It is home to over 200 different ethnic and linguistic groups. Islam (55.1%) and Christianity (41.1%) are the main religions practiced in Chad.
Beginning in the 7th millennium BC, human populations moved into the Chadian basin in great numbers. By the end of the 1st millennium AD, a series of states and empires had risen and fallen in Chad's Sahelian strip, each focused on controlling the trans-Saharan trade routes that passed through the region. France conquered the territory by 1920 and incorporated it as part of French Equatorial Africa. In 1960, Chad obtained independence under the leadership of François Tombalbaye. Resentment towards his policies in the Muslim north culminated in the eruption of a long-lasting civil war in 1965. In 1979 the rebels conquered the capital and put an end to the South's hegemony. The rebel commanders then fought amongst themselves until Hissène Habré defeated his rivals. The Chadian–Libyan conflict began in 1978 with a Libyan invasion and ended in 1987 after a French military intervention (Operation Épervier). Hissène Habré was overthrown in turn in 1990 by his general Idriss Déby. With French support, a modernization of the Chad National Army was initiated in 1991. From 2003, the Darfur crisis in Sudan spilt over the border and destabilised the nation. Already poor, the nation and its people struggled to accommodate the hundreds of thousands of Sudanese refugees who live in and around camps in eastern Chad.
While many political parties participated in Chad's legislature, the National Assembly, power lay firmly in the hands of the Patriotic Salvation Movement during the presidency of Idriss Déby, whose rule was described as authoritarian. After President Déby was killed by FACT rebels in April 2021, the Transitional Military Council led by his son Mahamat Déby assumed control of the government and dissolved the Assembly. Chad remains plagued by political violence and recurrent attempted coups d'état.
Chad ranks second-lowest in the Human Development Index, with a score of 0.394 in 2021, placing it 190th, and is a least developed country, among the poorest and most corrupt in the world. Most of its inhabitants live in poverty as subsistence herders and farmers. Since 2003 crude oil has become the country's primary source of export earnings, superseding the traditional cotton industry. Chad has a poor human rights record, with frequent abuses such as arbitrary imprisonment, extrajudicial killings, and limits on civil liberties by both security forces and armed militias.
History
Early history
In the 7th millennium BC, ecological conditions in the northern half of Chadian territory favored human settlement, and its population increased considerably. Some of the most important African archaeological sites are found in Chad, mainly in the Borkou-Ennedi-Tibesti Region; some date to earlier than 2000 BC.
For more than 2,000 years, the Chadian Basin has been inhabited by agricultural and sedentary people. The region became a crossroads of civilizations. The earliest of these was the legendary Sao, known from artifacts and oral histories. The Sao fell to the Kanem Empire, the first and longest-lasting of the empires that developed in Chad's Sahelian strip by the end of the 1st millennium AD. Two other states in the region, Sultanate of Bagirmi and Wadai Empire, emerged in the 16th and 17th centuries. The power of Kanem and its successors was based on control of the trans-Saharan trade routes that passed through the region. These states, at least tacitly Muslim, never extended their control to the southern grasslands except to raid for slaves. In Kanem, about a third of the population were slaves.
French colonial period (1900–1960)
French colonial expansion led to the creation of the military territory of Chad in 1900. By 1920, France had secured full control of the colony and incorporated it as part of French Equatorial Africa. French rule in Chad was characterised by an absence of policies to unify the territory and sluggish modernisation compared to other French colonies.
The French primarily viewed the colony as an unimportant source of untrained labour and raw cotton; France introduced large-scale cotton production in 1929. The colonial administration in Chad was critically understaffed and had to rely on the dregs of the French civil service. Only the Sara of the south was governed effectively; French presence in the Islamic north and east was nominal. The educational system was affected by this neglect.
After World War II, France granted Chad the status of overseas territory and its inhabitants the right to elect representatives to the French National Assembly and a Chadian assembly. The largest political party was the Chadian Progressive Party (Parti progressiste tchadien, PPT), based in the southern half of the colony. Chad was granted independence on 11 August 1960 with the PPT's leader, François Tombalbaye, an ethnic Sara, as its first president.
Tombalbaye rule (1960–1979)
Two years later, Tombalbaye banned opposition parties and established a one-party system. Tombalbaye's autocratic rule and insensitive mismanagement exacerbated inter-ethnic tensions. In 1965, Muslims in the north, led by the National Liberation Front of Chad (Front de libération nationale du Tchad, FROLINAT), began a civil war. Tombalbaye was overthrown and killed in 1975, but the insurgency continued. In 1979 the rebel factions led by Hissène Habré took the capital, and all central authority in the country collapsed. Armed factions, many from the north's rebellion, contended for power.
Chad's first civil war (1979–1987)
The disintegration of Chad caused the collapse of France's position in the country. Libya moved to fill the power vacuum and became involved in Chad's civil war. Libya's adventure ended in disaster in 1987; the French-supported president, Hissène Habré, evoked a united response from Chadians of a kind never seen before and forced the Libyan army off Chadian soil.
Dictatorship of Habré (1987–1990)
Habré consolidated his dictatorship through a power system that relied on corruption and violence with thousands of people estimated to have been killed under his rule. The president favoured his own Toubou ethnic group and discriminated against his former allies, the Zaghawa. His general, Idriss Déby, overthrew him in 1990. Attempts to prosecute Habré led to his placement under house arrest in Senegal in 2005; in 2013, Habré was formally charged with war crimes committed during his rule. In May 2016, he was found guilty of human-rights abuses, including rape, sexual slavery, and ordering the killing of 40,000 people, and sentenced to life in prison.
Déby lineage & democracy with second Civil War (1990–present)
Déby attempted to reconcile the rebel groups and reintroduced multiparty politics. Chadians approved a new constitution by referendum, and in 1996, Déby easily won a competitive presidential election. He won a second term five years later. Oil exploitation began in Chad in 2003, bringing with it hopes that Chad would, at last, have some chances of peace and prosperity. Instead, internal dissent worsened, and a new civil war broke out. Déby unilaterally modified the constitution to remove the two-term limit on the presidency; this caused an uproar among the civil society and opposition parties.
In 2006 Déby won a third mandate in elections that the opposition boycotted. Ethnic violence in eastern Chad has increased; the United Nations High Commissioner for Refugees has warned that a genocide like that in Darfur may yet occur in Chad. In 2006 and in 2008 rebel forces attempted to take the capital by force, but failed on both occasions. An agreement for the restoration of harmony between Chad and Sudan, signed on 15 January 2010, marked the end of a five-year war. The improvement in relations led to the Chadian rebels from Sudan returning home, the opening of the border between the two countries after seven years of closure, and the deployment of a joint force to secure the border. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months.
Chad is currently one of the leading partners in a West African coalition in the fight against Boko Haram and other Islamist militants. Chad's army announced the death of Déby on 20 April 2021, following an incursion in the northern region by the FACT group, during which the president was killed amid fighting on the front lines. Déby's son, General Mahamat Idriss Déby, has been named interim president by a Transitional Council of military officers. That transitional council has replaced the Constitution with a new charter, granting Mahamat Déby the powers of the presidency and naming him head of the armed forces.
Geography
Chad is a large landlocked country spanning north-central Africa. It covers an area of 1,284,000 square kilometres, lying between latitudes 7° and 24°N, and 13° and 24°E, and is the twentieth-largest country in the world. Chad is, by size, slightly smaller than Peru and slightly larger than South Africa.
Chad is bounded to the north by Libya, to the east by Sudan, to the west by Niger, Nigeria and Cameroon, and to the south by the Central African Republic. The country's capital is more than 1,000 kilometres from the nearest seaport, Douala, Cameroon. Because of this distance from the sea and the country's largely desert climate, Chad is sometimes referred to as the "Dead Heart of Africa".
The dominant physical structure is a wide basin bounded to the north and east by the Ennedi Plateau and Tibesti Mountains, which include Emi Koussi, a dormant volcano that is the highest point in the Sahara. Lake Chad, after which the country is named (and which in turn takes its name from the Kanuri word for "lake"), is the remains of an immense lake that occupied much of the Chad Basin 7,000 years ago. Although in the 21st century it covers only a small fraction of that area, and its surface area is subject to heavy seasonal fluctuations, the lake is Africa's second largest wetland.
Chad is home to six terrestrial ecoregions: East Sudanian savanna, Sahelian Acacia savanna, Lake Chad flooded savanna, East Saharan montane xeric woodlands, South Saharan steppe and woodlands, and Tibesti-Jebel Uweinat montane xeric woodlands. The region's tall grasses and extensive marshes make it favourable for birds, reptiles, and large mammals. Chad's major rivers—the Chari, Logone and their tributaries—flow through the southern savannas from the southeast into Lake Chad.
Each year a tropical weather system known as the intertropical front crosses Chad from south to north, bringing a wet season that lasts from May to October in the south, and from June to September in the Sahel. Variations in local rainfall create three major geographical zones. The Sahara lies in the country's northern third. Yearly precipitation throughout this belt is very scarce; only occasional spontaneous palm groves survive, all of them south of the Tropic of Cancer.
The Sahara gives way to a Sahelian belt in Chad's centre, where precipitation is somewhat higher. In the Sahel, a steppe of thorny bushes (mostly acacias) gradually gives way to the south to East Sudanian savanna in Chad's Sudanese zone, where yearly rainfall is the highest in the country.
Wildlife
Chad's animal and plant life correspond to the three climatic zones. In the Saharan region, the only flora is the date-palm groves of the oases. Palms and acacia trees grow in the Sahelian region. The southern, or Sudanic, zone consists of broad grasslands or prairies suitable for grazing. At least 134 species of mammals, 509 species of birds (354 resident and 155 migrant species), and over 1,600 species of plants have been recorded throughout the country.
Elephants, lions, buffalo, hippopotamuses, rhinoceroses, giraffes, antelopes, leopards, cheetahs, hyenas, and many species of snakes are found here, although most large carnivore populations have been drastically reduced since the early 20th century. Elephant poaching, particularly in the south of the country in areas such as Zakouma National Park, is a severe problem. The small group of surviving West African crocodiles in the Ennedi Plateau represents one of the last colonies known in the Sahara today.
Chad had a 2018 Forest Landscape Integrity Index mean score of 6.18/10, ranking it 83rd globally out of 172 countries. Extensive deforestation has resulted in loss of trees such as acacias, baobab, dates and palm trees. This has also caused loss of natural habitat for wild animals; one of the main reasons for this is also hunting and livestock farming by increasing human settlements. Populations of animals like lions, leopards and rhino have fallen significantly.
Efforts have been made by the Food and Agriculture Organization to improve relations between farmers, agro-pastoralists and pastoralists in the Zakouma National Park (ZNP), Siniaka-Minia, and Aouk reserve in southeastern Chad to promote sustainable development. As part of the national conservation effort, more than 1.2 million trees have been replanted to check the advancement of the desert, which incidentally also helps the local economy by way of financial return from acacia trees, which produce gum arabic, and also from fruit trees.
Poaching is a serious problem in the country, particularly of elephants for the profitable ivory industry and a threat to lives of rangers even in the national parks such as Zakouma. Elephants are often massacred in herds in and around the parks by organized poaching. The problem is worsened by the fact that the parks are understaffed and that a number of wardens have been murdered by poachers.
Demographics
Chad's national statistical agency projected the country's 2015 population between 13,630,252 and 13,679,203, with 13,670,084 as its medium projection; based on the medium projection, 3,212,470 people lived in urban areas and 10,457,614 people lived in rural areas. The country's population is young: an estimated 47% is under 15. The birth rate is estimated at 42.35 births per 1,000 people, and the mortality rate at 16.69. The life expectancy is 52 years. The agency assessed the population as at mid 2017 at 15,775,400, of whom just over 1.5 million were in N'Djaména.
Chad's population is unevenly distributed. Density is extremely low in the Saharan Borkou-Ennedi-Tibesti Region but far higher in the Logone Occidental Region. In the capital, it is higher still. About half of the nation's population lives in the southern fifth of its territory, making this the most densely populated region.
Urban life is concentrated in the capital, whose population is mostly engaged in commerce. The other major towns are Sarh, Moundou, Abéché and Doba, which are considerably smaller but growing rapidly in population and economic activity. Since 2003, 230,000 Sudanese refugees have fled to eastern Chad from war-ridden Darfur. With the 172,600 Chadians displaced by the civil war in the east, this has generated increased tensions among the region's communities.
Polygamy is common, with 39% of women living in such unions. This is sanctioned by law, which automatically permits polygamy unless spouses specify that this is unacceptable upon marriage. Although violence against women is prohibited, domestic violence is common. Female genital mutilation is also prohibited, but the practice is widespread and deeply rooted in tradition; 45% of Chadian women undergo the procedure, with the highest rates among Arabs, Hadjarai, and Ouaddaians (90% or more). Lower percentages were reported among the Sara (38%) and the Toubou (2%). Women lack equal opportunities in education and training, making it difficult for them to compete for the relatively few formal-sector jobs. Although property and inheritance laws based on the French code do not discriminate against women, local leaders adjudicate most inheritance cases in favour of men, according to traditional practice.
Largest cities, towns, and municipalities
Ethnic groups
The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.
Chad has more than 200 distinct ethnic groups, which create diverse social structures. The colonial administration and independent governments have attempted to impose a national society, but for most Chadians the local or regional society remains the most important influence outside the immediate family. Nevertheless, Chad's people may be classified according to the geographical region in which they live.
In the south live sedentary people such as the Sara, the nation's main ethnic group, whose essential social unit is the lineage. In the Sahel sedentary peoples live side by side with nomadic ones, such as the Arabs, the country's second major ethnic group. The north is inhabited by nomads, mostly Toubous.
Languages
Chad's official languages are Arabic and French, but over 100 languages are spoken. The Chadic branch of the Afroasiatic language family gets its name from Chad, and is represented by dozens of languages native to the country. Chad is also home to Central Sudanic, Maban, and several Niger-Congo languages.
Due to the important role played by itinerant Arab traders and settled merchants in local communities, Chadian Arabic has become a lingua franca.
Religion
Chad is a religiously diverse country. Various estimates, including from the Pew Research Center in 2010, found that 52–58% of the population was Muslim, while 39–44% were Christian, with 22% being Catholic and a further 17% being Protestant. According to a 2012 Pew Research survey, 48% of Muslim Chadians professed to be Sunni, 21% Shia, 4% Ahmadi and 23% non-denominational Muslim. Islam is expressed in diverse ways; for example, 55% of Muslim Chadians belong to Sufi orders. Its most common expression is the Tijaniyah, an order followed by 35% of Chadian Muslims, which incorporates some local African religious elements. In 2020, the ARDA estimated the vast majority of Muslim Chadians to be Sunni belonging to the Sufi brotherhood Tijaniyah. A small minority of the country's Muslims (5–10%) hold more fundamentalist practices, which, in some cases, may be associated with Saudi-oriented Salafi movements.
Roman Catholics represent the largest Christian denomination in the country. Most Protestants, including the Nigeria-based "Winners' Chapel", are affiliated with various evangelical Christian groups. Members of the Baháʼí and Jehovah's Witnesses religious communities also are present in the country. Both faiths were introduced after independence in 1960 and therefore are considered to be "new" religions in the country.
A small proportion of the population continues to practice indigenous religions. Animism includes a variety of ancestor and place-oriented religions whose expression is highly specific. Christianity arrived in Chad with the French and American missionaries; as with Chadian Islam, it syncretises aspects of pre-Christian religious beliefs.
Muslims are largely concentrated in northern and eastern Chad, and animists and Christians live primarily in southern Chad and Guéra. Many Muslims also reside in southern Chad but the Christian presence in the north is minimal. The constitution provides for a secular state and guarantees religious freedom; different religious communities generally co-exist without problems.
Chad is home to foreign missionaries representing both Christian and Islamic groups. Itinerant Muslim preachers, primarily from Sudan, Saudi Arabia, and Pakistan, also visit. Saudi Arabian funding generally supports social and educational projects and extensive mosque construction.
Education
Educators face considerable challenges due to the nation's dispersed population and a certain degree of reluctance on the part of parents to send their children to school. Although attendance is compulsory, only 68 percent of boys attend primary school, and more than half of the population is illiterate. Higher education is provided at the University of N'Djamena. At 33 percent, Chad has one of the lowest literacy rates of Sub-Saharan Africa.
In 2013, the U.S. Department of Labor's Findings on the Worst Forms of Child Labor in Chad reported that school attendance of children aged 5 to 14 was as low as 39%. This can also be related to the issue of child labor as the report also stated that 53% of children aged 5 to 14 were working, and that 30% of children aged 7 to 14 combined work and school. A more recent DOL report listed cattle herding as a major agricultural activity that employed underage children.
Government and politics
Chad's constitution provides for a strong executive branch headed by a president who dominates the political system. The president has the power to appoint the prime minister and the cabinet, and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's para-statal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly, may declare a state of emergency. The president is directly elected by popular vote for a five-year term; in 2005 constitutional term limits were removed, allowing a president to remain in power beyond the previous two-term limit. Most of Déby's key advisers are members of the Zaghawa ethnic group, although southern and opposition personalities are represented in government.
Chad's legal system is based on French civil law and Chadian customary law where the latter does not interfere with public order or constitutional guarantees of equality. Despite the constitution's guarantee of judicial independence, the president names most key judicial officials. The legal system's highest jurisdictions, the Supreme Court and the Constitutional Council, have become fully operational since 2000. The Supreme Court is made up of a chief justice, named by the president, and 15 councillors, appointed for life by the president and the National Assembly. The Constitutional Court is headed by nine judges elected to nine-year terms. It has the power to review legislation, treaties and international agreements prior to their adoption.
The National Assembly makes legislation. The body consists of 155 members elected for four-year terms who meet three times per year. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions when called by the prime minister. Deputies elect a National Assembly president every two years. The president must sign or reject newly passed laws within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no confidence. However, if the National Assembly rejects the executive branch's programme twice in one year, the president may disband the Assembly and call for new legislative elections. In practice, the president exercises considerable influence over the National Assembly through his party, the Patriotic Salvation Movement (MPS), which holds a large majority.
Until the legalisation of opposition parties in 1992, Déby's MPS was the sole legal party in Chad. Since then, 78 registered political parties have become active. In 2005, opposition parties and human rights organisations supported the boycott of the constitutional referendum that allowed Déby to stand for re-election for a third term amid reports of widespread irregularities in voter registration and government censorship of independent media outlets during the campaign. Correspondents judged the 2006 presidential elections a mere formality, as the opposition deemed the polls a farce and boycotted them.
Chad is listed as a failed state by the Fund for Peace (FFP). Chad had the seventh-highest rank in the Fragile States Index in 2021. Corruption is rife at all levels; Transparency International's Corruption Perceptions Index for 2021 ranked Chad 164th among the 180 countries listed. Critics of former President Déby had accused him of cronyism and tribalism.
In southern Chad, bitter conflicts over land are becoming more and more common. They frequently turn violent. Long-standing community culture is being eroded – and so are the livelihoods of many farmers.
Longtime Chad President Idriss Déby's death on 20 April 2021 resulted in both the nation's National Assembly and government being dissolved and national leadership being replaced with a transitional military council consisting of military officers and led by his son Mahamat Idriss Déby, known as Mahamat Kaka. The constitution is currently suspended, pending replacement with one drafted by a civilian National Transitional Council, yet to be appointed. The military council has stated that elections will be held at the end of an 18-month transitional period.
Internal opposition and foreign relations
Déby faced armed opposition from groups who are deeply divided by leadership clashes but were united in their intention to overthrow him. These forces stormed the capital on 13 April 2006, but were ultimately repelled. Chad's greatest foreign influence is France, which maintains 1,000 soldiers in the country. Déby relied on the French to help repel the rebels, and France gives the Chadian army logistical and intelligence support for fear of a complete collapse of regional stability. Nevertheless, Franco-Chadian relations were soured by the granting of oil drilling rights to the American Exxon company in 1999.
There have been numerous rebel groups in Chad over the last few decades. In 2007, a peace treaty was signed that integrated United Front for Democratic Change soldiers into the Chadian Army. The Movement for Justice and Democracy in Chad also clashed with government forces in 2003 in an attempt to overthrow President Idriss Déby. In addition, there have been various conflicts with Khartoum's Janjaweed rebels in eastern Chad, who killed civilians using helicopter gunships. The Union of Resistance Forces (UFR) is a rebel group that continues to battle the government of Chad; in 2010, the UFR reportedly had a force estimated at 6,000 men and 300 vehicles.
A UAE foreign aid operation was inaugurated in the Chadian city of Amdjarass on 3 August 2023, part of the UAE's continuing efforts to assist the Chadian people and, through its humanitarian institutions, to deliver humanitarian and relief aid to Sudanese refugees in Chad.
Military
The CIA World Factbook estimated the military budget of Chad at 4.2% of GDP as of 2006; given the country's GDP at the time ($7.095 billion), military spending came to roughly $300 million. After the end of the civil war in Chad (2005–2010), this share dropped to 2.0%, as estimated by the World Bank for 2011.
Administrative divisions
Since 2012 Chad has been divided into 23 regions. The subdivision of Chad into regions came about in 2003 as part of the decentralisation process, when the government abolished the previous 14 prefectures. Each region is headed by a presidentially appointed governor. Prefects administer the 61 departments within the regions. The departments are divided into 200 sub-prefectures, which are in turn composed of 446 cantons.
The cantons are scheduled to be replaced by communautés rurales, but the legal and regulatory framework has not yet been completed. The constitution provides for decentralised government to compel local populations to play an active role in their own development. To this end, the constitution declares that each administrative subdivision be governed by elected local assemblies, but no local elections have taken place, and communal elections scheduled for 2005 have been repeatedly postponed.
Economy
The United Nations' Human Development Index ranks Chad as the seventh poorest country in the world, with 80% of the population living below the poverty line. The GDP (purchasing power parity) per capita was estimated as US$1,651 in 2009. Chad is part of the Bank of Central African States, the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA).
Chad's currency is the CFA franc. In the 1960s, the mining industry of Chad produced sodium carbonate, or natron. There have also been reports of gold-bearing quartz in the Biltine Prefecture. However, years of civil war have scared away foreign investors; those who left Chad between 1979 and 1982 have only recently begun to regain confidence in the country's future. In 2000 major direct foreign investment in the oil sector began, boosting the country's economic prospects.
Uneven inclusion in the global political economy as a site for colonial resource extraction (primarily cotton and crude oil), a global economic system that neither promotes nor encourages the development of Chadian industrialization, and the failure to support local agricultural production have meant that the majority of Chadians live in daily uncertainty and hunger. Over 80% of Chad's population relies on subsistence farming and livestock raising for its livelihood. The crops grown and the locations of herds are determined by the local climate. In the southernmost 10% of the territory lies the nation's most fertile cropland, with rich yields of sorghum and millet. In the Sahel only the hardier varieties of millet grow, and with much lower yields than in the south. On the other hand, the Sahel is ideal pastureland for large herds of commercial cattle and for goats, sheep, donkeys and horses. The Sahara's scattered oases support only some dates and legumes. Chad's cities face serious difficulties of municipal infrastructure; only 48% of urban residents have access to potable water and only 2% to basic sanitation.
Before the development of the oil industry, cotton dominated industry and the labour market and accounted for approximately 80% of export earnings. Cotton remains a primary export, although exact figures are not available. Rehabilitation of Cotontchad, a major cotton company weakened by a decline in world cotton prices, has been financed by France, the Netherlands, the European Union, and the International Bank for Reconstruction and Development (IBRD). The parastatal is now expected to be privatised. Other than cotton, cattle and gum arabic are the dominant exports.
According to the United Nations, Chad has been affected by a humanitarian crisis since at least 2001. The country hosts over 280,000 refugees from Sudan's Darfur region, over 55,000 from the Central African Republic, as well as over 170,000 internally displaced persons. In February 2008, in the aftermath of the Battle of N'Djamena, UN Under-Secretary-General for Humanitarian Affairs John Holmes expressed "extreme concern" that the crisis would have a negative effect on the ability of humanitarians to deliver life-saving assistance to half a million beneficiaries, most of whom – according to him – heavily rely on humanitarian aid for their survival. UN spokesperson Maurizio Giuliano stated to The Washington Post: "If we do not manage to provide aid at sufficient levels, the humanitarian crisis might become a humanitarian catastrophe". In addition, organizations such as Save the Children have suspended activities due to killings of aid workers.
Chad has made some progress in reducing poverty: the national poverty rate declined from 55% to 47% between 2003 and 2011. However, the number of poor people rose in absolute terms, from 4.7 million in 2011 to 6.5 million in 2019. As of 2018, 4.2 out of 10 people still lived below the poverty line.
Infrastructure
Transport
Civil war crippled the development of transport infrastructure; in 1987, Chad had only of paved roads. Successive road rehabilitation projects improved the network to by 2004. Nevertheless, the road network is limited; roads are often unusable for several months of the year. With no railways of its own, Chad depends heavily on Cameroon's rail system for the transport of Chadian exports and imports to and from the seaport of Douala.
Chad had an estimated 59 airports, only 9 of which had paved runways. An international airport serves the capital and provides regular nonstop flights to Paris and several African cities.
Energy
Chad's energy sector has had years of mismanagement by the parastatal Chad Water and Electric Society (STEE), which provides power for 15% of the capital's citizens and covers only 1.5% of the national population. Most Chadians burn biomass fuels such as wood and animal manure for power.
ExxonMobil leads a consortium of Chevron and Petronas that has invested $3.7 billion to develop oil reserves estimated at one billion barrels in southern Chad. Oil production began in 2003 with the completion of a pipeline (financed in part by the World Bank) that links the southern oilfields to terminals on the Atlantic coast of Cameroon. As a condition of its assistance, the World Bank insisted that 80% of oil revenues be spent on development projects. In January 2006 the World Bank suspended its loan programme when the Chadian government passed laws reducing this amount. On 14 July 2006, the World Bank and Chad signed a memorandum of understanding under which the Government of Chad commits 70% of its spending to priority poverty reduction programmes.
Telecommunications
The telecommunication system is basic and expensive, with fixed telephone services provided by the state telephone company SotelTchad. In 2000, there were only 14 fixed telephone lines per 10,000 inhabitants in the country, one of the lowest telephone densities in the world.
Gateway Communications, a pan-African wholesale connectivity and telecommunications provider, also has a presence in Chad. In September 2013, Chad's Ministry for Posts and Information & Communication Technologies (PNTIC) announced that the country would be seeking a partner for fiber optic technology.
Chad is ranked last in the World Economic Forum's Network Readiness Index (NRI) – an indicator for determining the development level of a country's information and communication technologies. Chad ranked number 148 out of 148 overall in the 2014 NRI ranking, down from 142 in 2013. In September 2010 the mobile phone penetration rate was estimated at 24.3% over a population estimate of 10.7 million.
Culture
Because of its great variety of peoples and languages, Chad possesses a rich cultural heritage. The Chadian government has actively promoted Chadian culture and national traditions by opening the Chad National Museum and the Chad Cultural Centre. Six national holidays are observed throughout the year, and movable holidays include the Christian holiday of Easter Monday and the Muslim holidays of Eid ul-Fitr, Eid ul-Adha, and Eid Milad Nnabi.
Cuisine
Millet is the staple food of Chadian cuisine. It is used to make balls of paste that are dipped in sauces. In the north this dish is known as alysh; in the south, as biya. Fish is popular; it is generally prepared and sold either as salanga (sun-dried and lightly smoked Alestes and Hydrocynus) or as banda (smoked large fish). Carcaje is a popular sweet red tea extracted from hibiscus leaves. Alcoholic beverages, though absent in the north, are popular in the south, where people drink millet beer, known as billi-billi when brewed from red millet and as coshate when brewed from white millet.
Music
The music of Chad includes a number of instruments such as the kinde, a type of bow harp; the kakaki, a long tin horn; and the hu hu, a stringed instrument that uses calabashes as loudspeakers. Other instruments and their combinations are more linked to specific ethnic groups: the Sara prefer whistles, balafons, harps and kodjo drums; and the Kanembu combine the sounds of drums with those of flute-like instruments.
The music group Chari Jazz formed in 1964 and initiated Chad's modern music scene. Later, more renowned groups such as African Melody and International Challal attempted to mix modernity and tradition. Popular groups such as Tibesti have held fast to their heritage by drawing on sai, a traditional style of music from southern Chad. The people of Chad have customarily disdained modern music. Since 1995, however, greater interest has developed, fostering the distribution of CDs and audio cassettes featuring Chadian artists. Piracy and a lack of legal protections for artists' rights remain obstacles to the further development of the Chadian music industry.
Literature
As in other Sahelian countries, literature in Chad has seen an economic, political and spiritual drought that has affected its best-known writers. Chadian authors have been forced to write from exile or expatriate status and have generated literature dominated by themes of political oppression and historical discourse. Since 1962, 20 Chadian authors have written some 60 works of fiction. Among the most internationally renowned writers are Joseph Brahim Seïd, Baba Moustapha, Antoine Bangui and Koulsy Lamko. In 2003 Chad's sole literary critic, Ahmat Taboye, published his anthology of Chadian literature to further knowledge of Chad's literature internationally and among youth and to make up for Chad's lack of publishing houses and promotional structure.
Media and cinema
Chad's television audience is limited to N'Djamena. The only television station is the state-owned Télé Tchad. Radio has a far greater reach, with 13 private radio stations. Newspapers are limited in quantity and distribution, and circulation figures are small due to transportation costs, low literacy rates, and poverty. While the constitution defends liberty of expression, the government has regularly restricted this right, and at the end of 2006 began to enact a system of prior censorship on the media.
The development of a Chadian film industry, which began with the short films of Edouard Sailly in the 1960s, was hampered by the devastation of civil wars and by the lack of cinemas, of which there is currently only one in the whole country (the Normandie in N'Djamena). The Chadian feature film industry began growing again in the 1990s, with the work of directors Mahamat-Saleh Haroun, Issa Serge Coelo and Abakar Chene Massar. Haroun's film Abouna was critically acclaimed, and his Daratt won the Grand Special Jury Prize at the 63rd Venice International Film Festival. The 2010 feature film A Screaming Man won the Jury Prize at the 2010 Cannes Film Festival, making Haroun the first Chadian director to both enter and win an award in the main Cannes competition. Issa Serge Coelo directed the films Daresalam and DP75: Tartina City.
Sports
Football is Chad's most popular sport. The country's national team is closely followed during international competitions and Chadian footballers have played for French teams. Basketball and freestyle wrestling are widely practiced, the latter in a form in which the wrestlers put on traditional animal hides and cover themselves with dust.
See also
Outline of Chad
Index of Chad-related articles
Notes
References
Citations
Sources
Alphonse, Dokalyo (2003) "Cinéma: un avenir plein d'espoir" , Tchad et Culture 214.
"Background Note: Chad". September 2006. United States Department of State.
Bambé, Naygotimti (April 2007); "", 256.
Botha, D.J.J. (December 1992); "S.H. Frankel: Reminiscences of an Economist", The South African Journal of Economics 60 (4): 246–255.
Boyd-Buggs, Debra & Joyce Hope Scott (1999); Camel Tracks: Critical Perspectives on Sahelian Literatures. Lawrenceville: Africa World Press.
"Chad". Country Reports on Human Rights Practices 2006, 6 March 2007. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State.
"Chad". Country Reports on Human Rights Practices 2004, 28 February 2005. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State.
"Chad". International Religious Freedom Report 2006. 15 September 2006. Bureau of Democracy, Human Rights, and Labor, U.S. Department of State.
"Amnesty International Report 2006 ". Amnesty International Publications.
"Chad" (PDF). African Economic Outlook 2007. OECD. May 2007.
"Chad". The World Factbook. United States Central Intelligence Agency. 15 May 2007.
"Chad" (PDF). Women of the World: Laws and Policies Affecting Their Reproductive Lives – Francophone Africa. Center for Reproductive Rights. 2000
. Freedom of the Press: 2007 Edition. Freedom House, Inc.
"Chad". Human Rights Instruments. United Nations Commission on Human Rights. 12 December 1997.
"Chad". Encyclopædia Britannica. (2000). Chicago: Encyclopædia Britannica, Inc.
"Chad, Lake". Encyclopædia Britannica. (2000).
"Chad – Community Based Integrated Ecosystem Management Project" (PDF). 24 September 2002. World Bank.
(PDF). Cultural Profiles Project. Citizenship and Immigration Canada.
"Chad Urban Development Project" (PDF). 21 October 2004. World Bank.
"Chad: Humanitarian Profile – 2006/2007" (PDF). 8 January 2007. Office for the Coordination of Humanitarian Affairs.
"Chad Livelihood Profiles" (PDF). March 2005. United States Agency for International Development.
"Chad Poverty Assessment: Constraints to Rural Development" (PDF). World Bank. 21 October 1997.
"Chad (2006) ". Country Report: 2006 Edition. Freedom House, Inc.
. Country Analysis Briefs. January 2007. Energy Information Administration.
"Chad leader's victory confirmed", BBC News, 14 May 2006.
"Chad may face genocide, UN warns", BBC News, 16 February 2007.
Chapelle, Jean (1981); . Paris: L'Harmattan.
Chowdhury, Anwarul Karim & Sandagdorj Erdenbileg (2006); . New York: United Nations.
Collelo, Thomas (1990); Chad: A Country Study, 2d ed. Washington: U.S. GPO.
Dadnaji, Dimrangar (1999);
East, Roger & Richard J. Thomas (2003); Profiles of People in Power: The World's Government Leaders. Routledge.
Dinar, Ariel (1995); Restoring and Protecting the World's Lakes and Reservoirs. World Bank Publications.
Gondjé, Laoro (2003); "", 214.
"Chad: the Habré Legacy" . Amnesty International. 16 October 2001.
Lange, Dierk (1988). "The Chad region as a crossroad" (PDF), in UNESCO General History of Africa – Africa from the Seventh to the Eleventh Century, vol. 3: 436–460. University of California Press.
(PDF). . N. 3. September 2004.
Macedo, Stephen (2006); Universal Jurisdiction: National Courts and the Prosecution of Serious Crimes Under International Law. University of Pennsylvania Press.
Malo, Nestor H. (2003); "", 214.
Manley, Andrew; "Chad's vulnerable president", BBC News, 15 March 2006.
"Mirren crowned 'queen' at Venice", BBC News, 9 September 2006.
Ndang, Tabo Symphorien (2005); " " (PDF). 4th PEP Research Network General Meeting. Poverty and Economic Policy.
Pollack, Kenneth M. (2002); Arabs at War: Military Effectiveness, 1948–1991. Lincoln: University of Nebraska Press.
"Rank Order – Area ". The World Factbook. United States Central Intelligence Agency. 10 May 2007.
"Republic of Chad – Public Administration Country Profile " (PDF). United Nations, Department of Economic and Social Affairs. November 2004.
Spera, Vincent (8 February 2004); . United States Department of Commerce.
"Symposium on the evaluation of fishery resources in the development and management of inland fisheries". CIFA Technical Paper No. 2. FAO. 29 November – 1 December 1972.
"". . UNESCO, Education for All.
"" (PDF). International Crisis Group. 1 June 2006.
Wolfe, Adam; , PINR, 6 December 2006.
World Bank (14 July 2006). World Bank, Govt. of Chad Sign Memorandum of Understanding on Poverty Reduction. Press release.
World Population Prospects: The 2006 Revision Population Database. 2006. United Nations Population Division.
"Worst corruption offenders named", BBC News, 18 November 2005.
Young, Neil (August 2002); An interview with Mahamet-Saleh Haroun, writer and director of Abouna ("Our Father").
External links
Chad. The World Factbook. Central Intelligence Agency.
Chad country study from Library of Congress
Chad profile from the BBC News
Key Development Forecasts for Chad from International Futures
Chile

Chile, officially the Republic of Chile, is a country located in western South America. It is the southernmost country in the world and the closest to Antarctica, stretching along a narrow strip of land between the Andes Mountains and the Pacific Ocean. With an area of and a population of 17.5 million as of 2017, Chile shares borders with Peru to the north, Bolivia to the northeast, Argentina to the east, and the Drake Passage to the south. The country also controls several Pacific islands, including Juan Fernández, Isla Salas y Gómez, Desventuradas, and Easter Island, and claims about of Antarctica as the Chilean Antarctic Territory. The capital and largest city of Chile is Santiago, and the national language is Spanish.
Spain conquered and colonized the region in the mid-16th century, replacing Inca rule, but failed to conquer the independent Mapuche people who inhabited what is now south-central Chile. Chile emerged as a relatively stable authoritarian republic in the 1830s after its 1818 declaration of independence from Spain. During the 19th century, Chile experienced significant economic and territorial growth, putting an end to Mapuche resistance in the 1880s and gaining its current northern territory in the War of the Pacific (1879–83) by defeating Peru and Bolivia. In the 20th century, up until the 1970s, Chile underwent a process of democratization and experienced rapid population growth and urbanization, while relying increasingly on exports from copper mining to support its economy. During the 1960s and 1970s, the country was marked by severe left-right political polarization and turmoil, which culminated in the 1973 Chilean coup d'état that overthrew Salvador Allende's democratically elected left-wing government. This was followed by a 16-year right-wing military dictatorship under Augusto Pinochet, which resulted in more than 3,000 deaths or disappearances. The regime ended in 1990, following a referendum in 1988, and was succeeded by a center-left coalition, which ruled until 2010.
Chile has a high-income economy and is one of the most economically and socially stable nations in South America, leading Latin America in competitiveness, per capita income, globalization, peace, and economic freedom. Chile also performs well in the region in terms of sustainability of the state and democratic development, and boasts the second lowest homicide rate in the Americas, following only Canada. Chile is a founding member of the United Nations, the Community of Latin American and Caribbean States (CELAC), and the Pacific Alliance, and joined the OECD in 2010.
Etymology
There are various theories about the origin of the word Chile. According to 17th-century Spanish chronicler Diego de Rosales, the Incas called the valley of the Aconcagua Chili by corruption of the name of a Picunche tribal chief called Tili, who ruled the area at the time of the Incan conquest in the 15th century. Another theory points to the similarity of the valley of the Aconcagua with that of the Casma Valley in Peru, where there was a town and valley named Chili.
Other theories say Chile may derive its name from a Native American word meaning either 'ends of the earth' or 'sea gulls'; from the Mapuche word chilli, which may mean 'where the land ends'; or from the Quechua chiri, 'cold', or tchili, meaning either 'snow' or 'the deepest point of the Earth'. Another origin attributed to chilli is an onomatopoeic Mapuche imitation of the warble of a bird locally known as the trile.
The Spanish conquistadors heard about this name from the Incas, and the few survivors of Diego de Almagro's first Spanish expedition south from Peru in 1535–36 called themselves the "men of Chilli". Ultimately, Almagro is credited with the universalization of the name Chile, after naming the Mapocho valley as such. The older spelling "Chili" was in use in English until the early 20th century before switching to "Chile".
History
Early history
Stone tool evidence indicates humans sporadically frequented the Monte Verde valley area as long as 18,500 years ago. About 10,000 years ago, migrating Indigenous Peoples settled in fertile valleys and coastal areas of what is present-day Chile. Settlement sites from very early human habitation include Monte Verde, Cueva del Milodón and the Pali-Aike Crater's lava tube.
The Incas briefly extended their empire into what is now northern Chile, but the Mapuche (or Araucanians as they were known by the Spaniards) successfully resisted many attempts by the Inca Empire to subjugate them, despite their lack of state organization. They fought against the Sapa Inca Tupac Yupanqui and his army. The result of the bloody three-day confrontation known as the Battle of the Maule was that the Inca conquest of the territories of Chile ended at the Maule river.
Spanish colonization
In 1520, while attempting to circumnavigate the globe, Ferdinand Magellan discovered the southern passage now named after him (the Strait of Magellan) thus becoming the first European to set foot on what is now Chile. The next Europeans to reach Chile were Diego de Almagro and his band of Spanish conquistadors, who came from Peru in 1535 seeking gold. The Spanish encountered various cultures that supported themselves principally through slash-and-burn agriculture and hunting.
The conquest of Chile began in earnest in 1540 and was carried out by Pedro de Valdivia, one of Francisco Pizarro's lieutenants, who founded the city of Santiago on 12 February 1541. Although the Spanish did not find the extensive gold and silver they sought, they recognized the agricultural potential of Chile's central valley, and Chile became part of the Spanish Empire.
Conquest took place gradually, and the Europeans suffered repeated setbacks. A massive Mapuche insurrection that began in 1553 resulted in Valdivia's death and the destruction of many of the colony's principal settlements. Subsequent major insurrections took place in 1598 and in 1655. Each time the Mapuche and other native groups revolted, the southern border of the colony was driven northward. The abolition of slavery by the Spanish crown in 1683 was done in recognition that enslaving the Mapuche intensified resistance rather than cowing them into submission. Despite royal prohibitions, relations remained strained from continual colonialist interference.
Cut off to the north by desert, to the south by the Mapuche, to the east by the Andes Mountains, and to the west by the ocean, Chile became one of the most centralized, homogeneous colonies in Spanish America. Serving as a sort of frontier garrison, the colony found itself with the mission of forestalling encroachment by both the Mapuche and Spain's European enemies, especially the English and the Dutch. Buccaneers and pirates menaced the colony in addition to the Mapuche, as was shown by Sir Francis Drake's 1578 raid on Valparaíso, the colony's principal port. Chile hosted one of the largest standing armies in the Americas, making it one of the most militarized of the Spanish possessions, as well as a drain on the treasury of the Viceroyalty of Peru.
The first general census was conducted by the government of Agustín de Jáuregui between 1777 and 1778; it indicated that the population consisted of 259,646 inhabitants: 73.5% of European descent, 7.9% mestizos, 8.6% indigenous peoples and 9.8% blacks. Francisco Hurtado, Governor of the province of Chiloé, conducted a census in 1784 and found the population consisted of 26,703 inhabitants, 64.4% of whom were whites and 33.5% of whom were natives. The Diocese of Concepción conducted a census in areas south of the Maule river in 1812, but did not include the indigenous population or the inhabitants of the province of Chiloé. The population was estimated at 210,567, 86.1% of whom were Spanish or of European descent, 10% of whom were indigenous and 3.7% of whom were mestizos, blacks and mulattos.
A 2021 study by Baten and Llorca-Jaña shows that regions with a relatively high share of North European migrants developed faster in terms of numeracy, even if the overall number of migrants was small. This effect might be related to externalities: the surrounding population adopted a similar behavior as the small non-European immigrant group, and new schools were created. Ironically, there might have been positive spillover effects from the educational investment made by migrants; at the same time, numeracy might have been reduced by the greater inequality in these regions. However, the positive effects of immigration were apparently stronger.
Independence and nation building
In 1808, Napoleon's enthronement of his brother Joseph as the Spanish King precipitated the drive by the colony for independence from Spain. A national junta in the name of Ferdinand – heir to the deposed king – was formed on 18 September 1810. The Government Junta of Chile proclaimed Chile an autonomous republic within the Spanish monarchy (in memory of this day, Chile celebrates its National Day on 18 September each year).
After these events, a movement for total independence, under the command of José Miguel Carrera (one of the most renowned patriots) and his two brothers Juan José and Luis Carrera, soon gained a wider following. Spanish attempts to re-impose arbitrary rule during what was called the Reconquista led to a prolonged struggle, including infighting from Bernardo O'Higgins, who challenged Carrera's leadership.
Intermittent warfare continued until 1817. With Carrera in prison in Argentina, O'Higgins and anti-Carrera cohort José de San Martín, hero of the Argentine War of Independence, led an army that crossed the Andes into Chile and defeated the royalists. On 12 February 1818, Chile was proclaimed an independent republic. The political revolt brought little social change, however, and 19th-century Chilean society preserved the essence of the stratified colonial social structure, which was greatly influenced by family politics and the Roman Catholic Church. A strong presidency eventually emerged, but wealthy landowners remained powerful. Bernardo O'Higgins once planned to expand Chile by liberating the Philippines from Spain and incorporating the islands. In a letter to the Scottish naval officer Lord Thomas Cochrane dated 12 November 1821, he set out his plan to conquer Guayaquil, the Galapagos Islands, and the Philippines. Preparations were made, but the plan was abandoned when O'Higgins was exiled.
Chile slowly started to expand its influence and to establish its borders. By the Tantauco Treaty, the archipelago of Chiloé was incorporated in 1826. The economy began to boom due to the discovery of silver ore in Chañarcillo and the growing trade of the port of Valparaíso, which led to conflict with Peru over maritime supremacy in the Pacific. At the same time, attempts were made to strengthen sovereignty in southern Chile by intensifying penetration into Araucanía and colonizing Llanquihue with German immigrants in 1848. Through the founding of Fort Bulnes by the schooner Ancud under the command of John Williams Wilson, the Magallanes region joined the country in 1843, while the Antofagasta region, at the time part of Bolivia, began to fill with people.
Toward the end of the 19th century, the government in Santiago consolidated its position in the south by the Occupation of Araucanía. The Boundary Treaty of 1881 between Chile and Argentina confirmed Chilean sovereignty over the Strait of Magellan. As a result of the War of the Pacific with Peru and Bolivia (1879–83), Chile expanded its territory northward by almost one-third, eliminating Bolivia's access to the Pacific, and acquired valuable nitrate deposits, the exploitation of which led to an era of national affluence. By 1870, Chile ranked among the high-income countries in South America.
The 1891 Chilean Civil War brought about a redistribution of power between the President and Congress, and Chile established a parliamentary style democracy. However, the Civil War had also been a contest between those who favored the development of local industries and powerful Chilean banking interests, particularly the House of Edwards which had strong ties to foreign investors. Soon after, the country engaged in a vastly expensive naval arms race with Argentina that nearly led to war.
20th century
The Chilean economy partially degenerated into a system protecting the interests of a ruling oligarchy. By the 1920s, the emerging middle and working classes were powerful enough to elect a reformist president, Arturo Alessandri, whose program was frustrated by a conservative congress. In the 1920s, Marxist groups with strong popular support arose.
A military coup led by General Luis Altamirano in 1924 set off a period of political instability that lasted until 1932. Of the ten governments that held power in that period, the longest lasting was that of General Carlos Ibáñez del Campo, who briefly held power in 1925 and then again between 1927 and 1931 in what was a de facto dictatorship (although not really comparable in harshness or corruption to the type of military dictatorship that has often bedeviled the rest of Latin America).
By relinquishing power to a democratically elected successor, Ibáñez del Campo retained the respect of a large enough segment of the population to remain a viable politician for more than thirty years, in spite of the vague and shifting nature of his ideology. When constitutional rule was restored in 1932, a strong middle-class party, the Radicals, emerged. It became the key force in coalition governments for the next 20 years. During the period of Radical Party dominance (1932–52), the state increased its role in the economy. In 1952, voters returned Ibáñez del Campo to office for another six years. Jorge Alessandri succeeded Ibáñez del Campo in 1958, bringing Chilean conservatism back into power democratically for another term.
The 1964 presidential election of Christian Democrat Eduardo Frei Montalva by an absolute majority initiated a period of major reform. Under the slogan "Revolution in Liberty", the Frei administration embarked on far-reaching social and economic programs, particularly in education, housing, and agrarian reform, including rural unionization of agricultural workers. By 1967, however, Frei encountered increasing opposition from leftists, who charged that his reforms were inadequate, and from conservatives, who found them excessive. At the end of his term, Frei had not fully achieved his party's ambitious goals.
In the 1970 election, Senator Salvador Allende of the Socialist Party of Chile (then part of the "Popular Unity" coalition which included the Communists, Radicals, Social-Democrats, dissident Christian Democrats, the Popular Unitary Action Movement, and the Independent Popular Action) won a plurality of the votes in a three-way contest, followed by candidates Radomiro Tomic for the Christian Democrat Party and Jorge Alessandri for the Conservative Party. Allende was not elected with an absolute majority, receiving just over a third of the votes.
The Chilean Congress conducted a runoff vote between the leading candidates, Allende and former president Jorge Alessandri, and, keeping with tradition, chose Allende by a vote of 153 to 35. Frei refused to form an alliance with Alessandri to oppose Allende, on the grounds that the Christian Democrats were a workers' party and could not make common cause with the right wing.
An economic depression that began in 1972 was exacerbated by capital flight, plummeting private investment, and withdrawal of bank deposits in response to Allende's socialist program. Production fell and unemployment rose. Allende adopted measures including price freezes, wage increases, and tax reforms, to increase consumer spending and redistribute income downward. Joint public-private public works projects helped reduce unemployment. Much of the banking sector was nationalized. Many enterprises within the copper, coal, iron, nitrate, and steel industries were expropriated, nationalized, or subjected to state intervention. Industrial output increased sharply and unemployment fell during the Allende administration's first year.
Allende's program included advancement of workers' interests, replacing the judicial system with "socialist legality", nationalization of banks and forcing others into bankruptcy, and strengthening "popular militias" known as MIR. Started under former President Frei, the Popular Unity platform also called for nationalization of Chile's major copper mines in the form of a constitutional amendment. The measure was passed unanimously by Congress. As a result, the Richard Nixon administration organized and inserted secret operatives in Chile, in order to swiftly destabilize Allende's government. In addition, US financial pressure restricted international economic credit to Chile.
The economic problems were also exacerbated by Allende's public spending which was financed mostly by printing money and poor credit ratings given by commercial banks.
Simultaneously, opposition media, politicians, business guilds and other organizations helped to accelerate a campaign of domestic political and economic destabilization, some of which was backed by the United States. By early 1973, inflation was out of control.
On 26 May 1973, Chile's Supreme Court, which was opposed to Allende's government, unanimously denounced Allende's disruption of the legality of the nation. Although the seizure of power that followed was illegal under the Chilean constitution, the court's stance supported and strengthened Pinochet's imminent takeover.
Pinochet era (1973–1990)
A military coup overthrew Allende on 11 September 1973. As the armed forces bombarded the presidential palace, Allende apparently committed suicide. After the coup, Henry Kissinger told U.S. president Richard Nixon that the United States had "helped" the coup.
A military junta, led by General Augusto Pinochet, took control of the country. The first years of the regime were marked by human rights violations. Chile actively participated in Operation Condor. In October 1973, at least 72 people were murdered by the Caravan of Death. According to the Rettig Report and Valech Commission, at least 2,115 were killed, and at least 27,265 were tortured (including 88 children younger than 12 years old). In 2011, Chile recognized an additional 9,800 victims, bringing the total number of killed, tortured or imprisoned for political reasons to 40,018. At the national stadium, filled with detainees, one of those tortured and killed was internationally known poet-singer Víctor Jara (see "Music and Dance", below).
A new Constitution was approved by a controversial plebiscite on 11 September 1980, and General Pinochet became president of the republic for an eight-year term. After Pinochet obtained rule of the country, several hundred committed Chilean revolutionaries joined the Sandinista army in Nicaragua, guerrilla forces in Argentina or training camps in Cuba, Eastern Europe and Northern Africa.
In the late 1980s, largely as a result of events such as the 1982 economic collapse and mass civil resistance in 1983–88, the government gradually permitted greater freedom of assembly, speech, and association, to include trade union and political activity. The government launched market-oriented reforms with Hernán Büchi as Minister of Finance. Chile moved toward a free market economy that saw an increase in domestic and foreign private investment, although the copper industry and other important mineral resources were not opened to competition. In a plebiscite on 5 October 1988, Pinochet was denied a second eight-year term as president (56% against 44%). Chileans elected a new president and the majority of members of a bicameral congress on 14 December 1989. Christian Democrat Patricio Aylwin, the candidate of a coalition of 17 political parties called the Concertación, received an absolute majority of votes (55%). President Aylwin served from 1990 to 1994, in what was considered a transition period.
21st century
In December 1993, Christian Democrat Eduardo Frei Ruiz-Tagle, the son of previous president Eduardo Frei Montalva, led the Concertación coalition to victory with an absolute majority of votes (58%). Frei Ruiz-Tagle was succeeded in 2000 by Socialist Ricardo Lagos, who won the presidency in an unprecedented runoff election against Joaquín Lavín of the rightist Alliance for Chile. In January 2006, Chileans elected their first female president, Michelle Bachelet Jeria, of the Socialist Party, defeating Sebastián Piñera, of the National Renewal party, extending the Concertación governance for another four years. In January 2010, Chileans elected Sebastián Piñera as the first rightist President in 20 years, defeating former President Eduardo Frei Ruiz-Tagle of the Concertación, for a four-year term succeeding Bachelet. Due to term limits, Sebastián Piñera did not stand for re-election in 2013, and his term expired in March 2014 resulting in Michelle Bachelet returning to office. Sebastián Piñera succeeded Bachelet again in 2018 as the President of Chile after winning the December 2017 presidential election.
On 27 February 2010, Chile was struck by a magnitude 8.8 earthquake, the fifth largest ever recorded at the time. More than 500 people died (most from the ensuing tsunami) and over a million people lost their homes. The earthquake was also followed by multiple aftershocks. Initial damage estimates were in the range of US$15–30 billion, around 10% to 15% of Chile's real gross domestic product.
Chile achieved global recognition for the successful rescue of 33 trapped miners in 2010. On 5 August 2010, the access tunnel collapsed at the San José copper and gold mine in the Atacama Desert near Copiapó in northern Chile, trapping 33 men below ground. A rescue effort organized by the Chilean government located the miners 17 days later. All 33 men were brought to the surface two months later on 13 October 2010 over a period of almost 24 hours, an effort that was carried on live television around the world.
The 2019–20 Chilean protests were a series of country-wide protests in response to a rise in the Santiago Metro's subway fare, the increased cost of living, privatization and inequality prevalent in the country. On 15 November 2019, most of the political parties represented in the National Congress signed an agreement to call a national referendum in April 2020 regarding the creation of a new constitution; the vote was later postponed to October due to the COVID-19 pandemic. On 25 October 2020, Chileans voted 78.28 per cent in favor of a new constitution, while 21.72 per cent rejected the change. Voter turnout was 51 per cent. An election for the members of the Constitutional Convention was held in Chile between 15 and 16 May 2021.
On 19 December 2021, a leftist candidate, the 35-year-old former student protest leader Gabriel Boric, won Chile's presidential election to become the country's youngest ever leader. On 11 March 2022, Boric was sworn in as president to succeed outgoing President Sebastian Pinera. Out of 24 members of Gabriel Boric's female-majority Cabinet, 14 are women.
On 4 September 2022, voters overwhelmingly rejected the new constitution in the constitutional referendum, which had been put forward by the Constitutional Convention. The rejected draft, strongly supported by President Boric, proved too radical and left-leaning for the majority of voters.
Geography
A long and narrow coastal Southern Cone country on the west side of the Andes Mountains, Chile stretches over north to south, but only at its widest point east to west and at its narrowest point east to west, with an average width of . This encompasses a remarkable variety of climates and landscapes. It contains of land area. It is situated within the Pacific Ring of Fire. Excluding its Pacific islands and Antarctic claim, Chile lies between latitudes 17° and 56°S, and longitudes 66° and 75°W.
Chile is among the longest north–south countries in the world. If one considers only mainland territory, Chile is unique within this group in its narrowness from east to west, with the other long north–south countries (including Brazil, Russia, Canada, and the United States, among others) all being wider from east to west by a factor of more than 10. Chile also claims of Antarctica as part of its territory (Chilean Antarctic Territory). However, this latter claim is suspended under the terms of the Antarctic Treaty, of which Chile is a signatory. It is the world's southernmost country that is geographically on the mainland.
Chile controls Easter Island and Sala y Gómez Island, the easternmost islands of Polynesia, which it incorporated into its territory in 1888, and the Juan Fernández Islands, more than from the mainland. Also controlled but only temporarily inhabited (by some local fishermen) are the small islands of San Ambrosio and San Felix. These islands are notable because they extend Chile's claim to territorial waters out from its coast into the Pacific Ocean.
The northern Atacama Desert contains great mineral wealth, primarily copper and nitrates. The relatively small Central Valley, which includes Santiago, dominates the country in terms of population and agricultural resources. This area is also the historical center from which Chile expanded in the late 19th century when it integrated the northern and southern regions. Southern Chile is rich in forests, grazing lands, and features a string of volcanoes and lakes. The southern coast is a labyrinth of fjords, inlets, canals, twisting peninsulas, and islands. The Andes Mountains are located on the eastern border.
Topography
Chile is located along a highly seismic and volcanic zone, part of the Pacific Ring of Fire, due to the subduction of the Nazca and Antarctic plates beneath the South American plate. In the late Paleozoic, 251 million years ago, Chile belonged to the continental block called Gondwana. It was a depression that accumulated marine sediments, which began to rise at the end of the Mesozoic, 66 million years ago, due to the collision between the Nazca and South American plates, resulting in the Andes. The territory would be shaped over millions of years by the folding of the rocks, forming the current relief.
The Chilean relief consists of the central depression, which crosses the country longitudinally, flanked by two mountain ranges that make up about 80% of the territory: the Andes mountains to the east, which form a natural border with Bolivia and Argentina in the region of Atacama, and the Coastal Range to the west, of lesser height than the Andes. Chile's highest peak is the Nevado Ojos del Salado, at 6,891.3 m, which is also the highest volcano in the world. The highest point of the Coastal Range is Vicuña Mackenna, at 3,114 m, located in the Sierra Vicuña Mackenna to the south of Antofagasta. Between the coastal mountains and the Pacific is a series of coastal plains, of variable length, which allow the settlement of coastal towns and big ports. Some plains areas extend into territory east of the Andes, such as the Patagonian steppes and Magellan, or are high plateaus surrounded by high mountain ranges, such as the Altiplano or the Puna de Atacama.
The Far North is the area between the northern boundary of the country and the parallel 26° S, covering the first three regions. It is characterized by the presence of the Atacama Desert, the most arid in the world. The desert is fragmented by streams that originate in the area known as the Pampa del Tamarugal. The Andes, split into two arms, the eastern of which runs through Bolivia, are high in altitude and volcanically active, which has allowed the formation of the Andean altiplano and of salt flats such as the Salar de Atacama, formed by the gradual accumulation of sediments over time.
To the south is the Norte Chico, extending to the Aconcagua River. The Andes begin to decrease in altitude toward the south and to draw closer to the coast, coming within 90 km of it at the latitude of Illapel, the narrowest part of Chilean territory. The two mountain ranges intersect, virtually eliminating the intermediate depression. The existence of rivers flowing through the territory allows the formation of transverse valleys, where agriculture has developed strongly in recent times, while the coastal plains begin to expand.
The Central area is the most populated region of the country. The coastal plains are wide and allow the establishment of cities and ports along the Pacific. The Andes maintain altitudes above 6,000 m but slowly descend toward the south to an average of 4,000 m. The intermediate depression reappears, becoming a fertile valley that allows agricultural development and human settlement thanks to sediment accumulation. To the south, the Cordillera de la Costa reappears in the range of Nahuelbuta, while glacial sediments create a series of lakes in the area of La Frontera.
Patagonia extends southward from the Reloncaví Sound, at the latitude of the 41st parallel south. During the last glaciation, this area was covered by ice that strongly eroded Chilean relief structures. As a result, the intermediate depression sinks into the sea, while the coastal mountains give rise to a series of archipelagos, such as Chiloé and the Chonos, disappearing at the Taitao Peninsula, at the 47th parallel south. The Andes mountain range loses height, and erosion caused by the action of glaciers has carved fjords. East of the Andes, on the continent, or north of it, on the island of Tierra del Fuego, lie relatively flat plains, which cover large areas around the Strait of Magellan. The Andes, as the Cordillera de la Costa does farther north, begins to break up into the ocean, giving rise to a myriad of islands and islets before sinking and reappearing in the Southern Antilles arc and then the Antarctic Peninsula, where it is called the Antartandes, in the Chilean Antarctic Territory, lying between the meridians 53°W and 90°W.
In the middle of the Pacific, the country has sovereignty over several islands of volcanic origin, collectively known as Insular Chile. The archipelago of Juan Fernández and Easter Island are located in the fracture zone between the Nazca plate and the Pacific plate known as the East Pacific Rise.
Climate and hydrography
The diverse climate of Chile ranges from the world's driest desert in the north—the Atacama Desert—through a Mediterranean climate in the center, humid subtropical in Easter Island, to an oceanic climate, including alpine tundra and glaciers in the east and south. According to the Köppen system, Chile within its borders hosts at least ten major climatic subtypes. There are four seasons in most of the country: summer (December to February), autumn (March to May), winter (June to August), and spring (September to November).
Due to the characteristics of the territory, Chile is crossed by numerous rivers, generally short in length and with low flow rates. They commonly extend from the Andes to the Pacific Ocean, flowing from east to west. Because of the Atacama Desert, in the Norte Grande there are only short streams of endorheic character, except for the Loa River, which at 440 km is the longest in the country. In the high valleys, wetland areas give rise to Chungará Lake, located at 4,500 meters above sea level; it and the Lauca River are shared with Bolivia, as is the Lluta River. In the center-north of the country, the number of rivers that form valleys of agricultural importance increases. Noteworthy are the Elqui (75 km long), the Aconcagua (142 km), the Maipo (250 km) and its tributary the Mapocho (110 km), and the Maule (240 km). Their waters mainly flow from Andean snowmelt in the summer and winter rains. The major lakes in this area are the artificial lake Rapel, the Colbun Maule lagoon and the lagoon of La Laja.
Biodiversity
The flora and fauna of Chile are characterized by a high degree of endemism, due to its particular geography. In continental Chile, the Atacama Desert in the north and the Andes mountains to the east are barriers that have led to the isolation of flora and fauna. Add to that the enormous length of Chile (over ) and this results in a wide range of climates and environments that can be divided into three general zones: the desert provinces of the north, central Chile, and the humid regions of the south.
The native flora of Chile consists of relatively fewer species compared to the flora of other South American countries.
The northernmost coastal and central region is largely barren of vegetation, approaching the most absolute desert in the world.
On the slopes of the Andes, in addition to the scattered tola desert brush, grasses are found. The central valley is characterized by several species of cacti, the hardy espinos, the Chilean pine, the southern beeches and the copihue, a red bell-shaped flower that is Chile's national flower.
In southern Chile, south of the Biobío River, heavy precipitation has produced dense forests of laurels, magnolias, and various species of conifers and beeches, which become smaller and more stunted to the south.
The cold temperatures and winds of the extreme south preclude heavy forestation. Grassland is found in Atlantic Chile (in Patagonia). Much of the Chilean flora is distinct from that of neighboring Argentina, indicating that the Andean barrier existed during its formation.
Some of Chile's flora has an Antarctic origin due to land bridges which formed during the Cretaceous ice ages, allowing plants to migrate from Antarctica to South America. Chile had a 2018 Forest Landscape Integrity Index mean score of 7.37/10, ranking it 43rd globally out of 172 countries.
Just over 3,000 species of fungi are recorded in Chile, but this number is far from complete. The true total number of fungal species occurring in Chile is likely to be far higher, given the generally accepted estimate that only about 7 percent of all fungi worldwide have so far been discovered. Although the amount of available information is still very small, a first effort has been made to estimate the number of fungal species endemic to Chile, and 1,995 species have been tentatively identified as possible endemics of the country.
Chile's geographical isolation has restricted the immigration of faunal life so that only a few of the many distinctive South American animals are found. Among the larger mammals are the puma or cougar, the llama-like guanaco and the fox-like chilla. In the forest region, several types of marsupials and a small deer known as the pudu are found.
There are many species of small birds, but most of the larger common Latin American types are absent. Few freshwater fish are native, but North American trout have been successfully introduced into the Andean lakes. Owing to the vicinity of the Humboldt Current, ocean waters abound with fish and other forms of marine life, which in turn support a rich variety of waterfowl, including several penguins. Whales are abundant, and some six species of seals are found in the area.
Government and politics
The current Constitution of Chile was drafted by Jaime Guzmán in 1980 and subsequently approved via a national plebiscite—regarded as "highly irregular" by some observers—in September of that year, under the military dictatorship of Augusto Pinochet. It entered into force in March 1981. After Pinochet's defeat in the 1988 plebiscite, the constitution was amended to ease provisions for future amendments to the Constitution. In September 2005, President Ricardo Lagos signed into law several constitutional amendments passed by Congress. These include eliminating the positions of appointed senators and senators for life, granting the President authority to remove the commanders-in-chief of the armed forces, and reducing the presidential term from six to four years.
Chile's judiciary is independent and includes a court of appeal, a system of military courts, a constitutional tribunal, and the Supreme Court of Chile. In June 2005, Chile completed a nationwide overhaul of its criminal justice system. The reform has replaced inquisitorial proceedings with an adversarial system with greater similarity to that of common law jurisdictions such as the United States.
For parliamentary elections, between 1989 and 2013 the binominal system was used, which promoted the establishment of two majority political blocs -Concertación and Alliance- at the expense of the exclusion of non-majority political groups. The opponents of this system approved in 2015 a moderate proportional electoral system that has been in force since the 2017 parliamentary elections, allowing the entry of new parties and coalitions. The Congress of Chile has a 50-seat Senate and a 155-member Chamber of Deputies. Senators serve for eight years with staggered terms, while deputies are elected every 4 years. The last congressional elections were held on 21 November 2021, concurrently with the presidential election. The Congress is located in the port city of Valparaíso, about west of the capital, Santiago.
The main existing political coalitions in Chile are:
Government:
Apruebo Dignidad (Approve Dignity) is a left-wing coalition that has its origin in the 2021 Chilean Constitutional Convention election. After the success in that election, it held presidential primaries, in which Gabriel Boric (CS, FA) was the winner. It is formed by the coalition Frente Amplio (Broad Front) and the coalition Chile Digno (Worthy Chile) formed by the Communist Party of Chile and other left-wing parties.
Democratic Socialism is a center-left coalition, successor of the Constituent Unity coalition, itself a successor of the Concertación coalition which supported the "NO" option in the 1988 plebiscite and subsequently governed the country from 1990 to 2010. This pact is formed by the Socialist Party, the Party for Democracy, the Radical Party, and the Liberal Party.
Opposition:
Chile Vamos (Let's Go Chile) is a center-right coalition rooted in liberal conservatism, formed by the parties Renovación Nacional (National Renewal), Unión Demócrata Independiente (Independent Democratic Union) and Evópoli. It has its origins in the Alliance coalition, formed by the main parties that supported the "YES" option in the 1988 plebiscite, although it has used different names since then. It was the ruling coalition during the first and second governments of Sebastián Piñera (2010–2014 and 2018–2022).
In the National Congress, Chile Vamos has 52 deputies and 24 senators, while the parliamentary group of Apruebo Dignidad is formed by 37 deputies and 6 senators. Democratic Socialism is the third political force with 30 deputies and 13 senators. The other groups with parliamentary representation are the Republican Party (15 deputies and 1 senator), the Christian Democratic Party (8 deputies and 5 senators), the Party of the People (8 deputies) and the independents outside of a coalition (5 deputies and 1 senator).
Foreign relations
Chile has been actively involved in foreign affairs since the early decades after independence. In 1837, the country aggressively challenged the dominance of Peru's port of Callao over Pacific trade routes, defeating the short-lived Peru–Bolivian Confederation (1836–39) in the War of the Confederation. The war dissolved the confederation and redistributed power in the Pacific. A second international war, the War of the Pacific (1879–83), further increased Chile's regional role, while adding considerably to its territory.
During the 19th century, Chile's commercial ties were primarily with Britain, a nation that had a major influence on the formation of the Chilean navy. The French influenced Chile's legal and educational systems and left a decisive mark on the capital's architecture during the boom years at the turn of the 20th century. German influence came from the Prussian organization and training of the army.
On 26 June 1945, Chile participated as a founding member of the United Nations, being among the 50 countries that signed the United Nations Charter in San Francisco, California. After the military coup of 1973, Chile became politically isolated as a result of widespread human rights abuses.
Since its return to democracy in 1990, Chile has been an active participant in the international political arena. Chile completed a two-year non-permanent position on the UN Security Council in January 2005. Jose Miguel Insulza, a Chilean national, was elected Secretary General of the Organization of American States in May 2005 and was re-elected in 2009. Chile has served on the International Atomic Energy Agency (IAEA) Board of Governors, and its ambassador to the IAEA, Milenko E. Skoknic, chaired the board in 2007–2008. The country is an active member of the UN family of agencies and participates in UN peacekeeping activities. It was re-elected as a member of the UN Human Rights Council in 2011 for a three-year term, and was elected to one of five non-permanent seats on the UN Security Council in 2013. Chile hosted the Defense Ministerial of the Americas in 2002 and the APEC summit and related meetings in 2004. It also hosted the Community of Democracies ministerial in April 2005 and the Ibero-American Summit in November 2007. An associate member of Mercosur and a full member of APEC, Chile has been a major player in international economic issues and hemispheric free trade.
Military
The Armed Forces of Chile are subject to civilian control exercised by the president through the Minister of Defense. The president has the authority to remove the commanders-in-chief of the armed forces.
The commander-in-chief of the Chilean Army is Army General Ricardo Martínez Menanteau. The Chilean Army is 45,000 strong and is organized with an Army headquarters in Santiago, six divisions throughout its territory, an Air Brigade in Rancagua, and a Special Forces Command in Colina. The Chilean Army is one of the most professional and technologically advanced armies in Latin America.
Admiral Julio Leiva Molina directs the Chilean Navy of around 25,000 personnel, including 2,500 Marines. Of the fleet of 29 surface vessels, only eight are operational major combatants (frigates), all based in Valparaíso. The Navy operates its own aircraft for transport and patrol; there are no Navy fighter or bomber aircraft. The Navy also operates four submarines based in Talcahuano.
Air Force General (four-star) Jorge Rojas Ávila heads the 12,500-strong Chilean Air Force. Air assets are distributed among five air brigades headquartered in Iquique, Antofagasta, Santiago, Puerto Montt, and Punta Arenas. The Air Force also operates an airbase on King George Island, Antarctica. The Air Force took delivery of the final two of ten F-16s, all purchased from the U.S., in March 2007 after several decades of U.S. debate and previous refusal to sell. Chile also took delivery in 2007 of a number of reconditioned Block 15 F-16s from the Netherlands, bringing to 18 the total of F-16s purchased from the Dutch.
After the military coup in September 1973, the Chilean national police (Carabineros) were incorporated into the Defense Ministry. With the return of democratic government, the police were placed under the operational control of the Interior Ministry but remained under the nominal control of the Defense Ministry. Gen. Gustavo González Jure is the head of the national police force of 40,964 men and women who are responsible for law enforcement, traffic management, narcotics suppression, border control, and counter-terrorism throughout Chile.
In 2017, Chile signed the UN Treaty on the Prohibition of Nuclear Weapons.
Administrative divisions
Chile was administratively divided into regions in 1978, and in 1979 the regions were subdivided into provinces and the provinces into communes. In total, the country has 16 regions, 56 provinces and 348 communes.
Each region was designated by a name and a Roman numeral assigned from north to south, except for the Santiago Metropolitan Region, which did not have a number. The creation of two new regions in 2007, Arica and Parinacota (XV) and Los Ríos (XIV), and of a third in 2018, Ñuble (XVI), broke the original north-to-south order of the numbering.
National symbols
The national flower is the copihue (Lapageria rosea, Chilean bellflower), which grows in the woods of southern Chile.
The coat of arms depicts the two national animals: the condor (Vultur gryphus, a very large bird that lives in the mountains) and the huemul (Hippocamelus bisulcus, an endangered South Andean deer). It also bears the legend Por la razón o la fuerza (By reason or by force).
The flag of Chile consists of two equal horizontal bands of white (top) and red; a blue square the same height as the white band sits at the hoist-side end of the white band and bears a white five-pointed star in the center, representing a guide to progress and honor. Blue symbolizes the sky, white the snow-covered Andes, and red the blood spilled to achieve independence. The flag of Chile resembles the flag of Texas, although the Chilean flag is 21 years older; like the Texan flag, it is modeled on the flag of the United States.
Economy
The Central Bank of Chile in Santiago serves as the central bank for the country. The Chilean currency is the Chilean peso (CLP). Chile is one of South America's most stable and prosperous nations, leading Latin American nations in human development, competitiveness, globalization, economic freedom, and low perception of corruption. Since July 2013, Chile has been classified by the World Bank as a "high-income economy".
Chile has the highest degree of economic freedom in South America (ranking 7th worldwide), owing to its independent and efficient judicial system and prudent public finance management. In May 2010 Chile became the first South American country to join the OECD. In 2006, Chile became the country with the highest nominal GDP per capita in Latin America. As of 2020, Chile ranks third in Latin America (behind Uruguay and Panama) in nominal GDP per capita.
Copper mining makes up 20% of Chilean GDP and 60% of exports. Escondida is the largest copper mine in the world, producing over 5% of global supplies. Overall, Chile produces a third of the world's copper. Codelco, the state mining firm, competes with private copper mining companies.
Sound economic policies, maintained consistently since the 1980s, have contributed to steady economic growth in Chile and have more than halved poverty rates. Chile began to experience a moderate economic downturn in 1999. The economy remained sluggish until 2003, when it began to show clear signs of recovery, achieving 4.0% GDP growth. The Chilean economy finished 2004 with growth of 6%. Real GDP growth reached 5.7% in 2005 before falling back to 4% in 2006. GDP expanded by 5% in 2007. Faced with the financial crisis of 2007–2008 the government announced an economic stimulus plan to spur employment and growth, and despite the Great Recession, aimed for an expansion of between 2% and 3% of GDP for 2009. Nonetheless, economic analysts disagreed with government estimates and predicted economic growth at a median of 1.5%. Real GDP growth in 2012 was 5.5%. Growth slowed to 4.1% in the first quarter of 2013.
The unemployment rate was 6.4% in April 2013. There are reported labor shortages in agriculture, mining, and construction. The percentage of Chileans with per capita household incomes below the poverty line—defined as twice the cost of satisfying a person's minimal nutritional needs—fell from 45.1% in 1987 to 11.5% in 2009, according to government surveys. Critics in Chile, however, argue that true poverty figures are considerably higher than those officially published. Using the relative yardstick favoured in many European countries, 27% of Chileans would be poor, according to Juan Carlos Feres of the ECLAC.
About 11.1 million people (64% of the population) benefit from government welfare programs via the "Social Protection Card", which covers the population living in poverty and those at risk of falling into poverty. The privatized national pension system (AFP) has encouraged domestic investment and contributed to an estimated total domestic savings rate of approximately 21% of GDP. Under the compulsory private pension system, most formal sector employees pay 10% of their salaries into privately managed funds.
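As a minimal illustration of the mandatory 10% contribution described above, the sketch below computes a worker's monthly AFP deposit. The salary figure is hypothetical, and real-world details such as fund-manager commissions and taxable-income ceilings are deliberately left out.

```python
# Minimal sketch of the compulsory 10% pension contribution described above.
# The salary figure is a hypothetical example; AFP commissions and
# taxable-income ceilings are not modeled here.

CONTRIBUTION_RATE = 0.10  # 10% of gross salary, per the text above

def monthly_pension_deposit(gross_salary_clp: int) -> float:
    """Amount a formal-sector worker deposits into a privately managed fund each month."""
    return gross_salary_clp * CONTRIBUTION_RATE

if __name__ == "__main__":
    # Hypothetical gross monthly salary of CLP 800,000 -> CLP 80,000 deposited.
    print(monthly_pension_deposit(800_000))
```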
Chile has signed free trade agreements (FTAs) with a whole network of countries, including an FTA with the United States that was signed in 2003 and implemented in January 2004. Internal Government of Chile figures show that even when factoring out inflation and the recent high price of copper, bilateral trade between the U.S. and Chile has grown over 60% since then. Chile's total trade with China reached US$8.8 billion in 2006, representing nearly 66% of the value of its trade relationship with Asia. Exports to Asia increased from US$15.2 billion in 2005 to US$19.7 billion in 2006, a 29.9% increase. Year-on-year growth of imports was especially strong from a number of countries: Ecuador (123.9%), Thailand (72.1%), South Korea (52.6%), and China (36.9%).
Chile's approach to foreign direct investment is codified in the country's Foreign Investment Law. Registration is reported to be simple and transparent, and foreign investors are guaranteed access to the official foreign exchange market to repatriate their profits and capital.
The Chilean Government has formed a Council on Innovation and Competition, hoping to bring in additional FDI to new parts of the economy.
Standard & Poor's gives Chile a credit rating of AA-. The Government of Chile continues to pay down its foreign debt, with public debt amounting to only 3.9% of GDP at the end of 2006. The Chilean central government was a net creditor with a net asset position of 7% of GDP at the end of 2012. The current account deficit was 4% in the first quarter of 2013, financed mostly by foreign direct investment. 14% of central government revenue came directly from copper in 2012. Chile was ranked 52nd in the Global Innovation Index in 2023.
Mineral resources
Chile is rich in mineral resources, especially copper and lithium. Because lithium is important for electric-vehicle batteries and for stabilizing electric grids with large shares of intermittent renewables, some analysts think Chile could be strengthened geopolitically. This perspective has also been criticized for underestimating the power of economic incentives to expand production in other parts of the world.
In 2019, the country was the world's largest producer of copper, iodine and rhenium, the second largest producer of lithium and molybdenum, the sixth largest producer of silver, the seventh largest producer of salt, the eighth largest producer of potash, and the thirteenth largest producer of both sulfur and iron ore. The country also has considerable gold production: between 2006 and 2017, annual output ranged from 35.9 tonnes in 2017 to 51.3 tonnes in 2013.
Agriculture
Agriculture in Chile encompasses a wide range of activities owing to the country's particular geography, climate, geology and human factors. Historically, agriculture has been one of the bases of Chile's economy. Today, agriculture and allied sectors such as forestry, logging and fishing account for only 4.9% of GDP and employ 13.6% of the country's labor force. Chile is one of the five largest producers of cherries and blueberries in the world, and one of the ten largest producers of grapes, apples, kiwifruit, peaches, plums and hazelnuts, focusing on exporting high-value fruit. Other major agricultural products include pears, onions, wheat, maize, oats, garlic, asparagus, beans, beef, poultry, wool, fish, timber and hemp. Due to its geographical isolation and strict customs policies, Chile is free from diseases and pests such as mad cow disease, fruit fly and phylloxera. This, its location in the Southern Hemisphere (which has quite different harvesting times from the Northern Hemisphere), and its wide range of agricultural conditions are considered Chile's main comparative advantages. However, Chile's mountainous landscape limits the extent and intensity of agriculture, so that arable land corresponds to only 2.62% of the total territory. Chile currently utilizes 14,015 hectares of agricultural land.
Chile is the world's second largest producer of salmon, after Norway. In 2019, it was responsible for 26% of the global supply. In wine, Chile is usually among the 10 largest producers in the world. In 2018 it was in 6th place.
Tourism
Tourism in Chile has experienced sustained growth over the last few decades. In 2005, tourism grew by 13.6%, generating more than 4.5 billion dollars, of which 1.5 billion was attributed to foreign tourists. According to the National Service of Tourism (Sernatur), 2 million people a year visit the country. Most of these visitors come from other countries in the Americas, mainly Argentina, followed by growing numbers from the United States, Europe and Brazil, and increasing numbers of Asian visitors from South Korea and China.
The main attractions for tourists are places of natural beauty situated in the extreme zones of the country: San Pedro de Atacama, in the north, is very popular with foreign tourists who arrive to admire the Incaic architecture, the altiplano lakes, and the Valley of the Moon. In Putre, also in the north, there is the Chungará Lake, as well as the Parinacota and the Pomerape volcanoes, with altitudes of 6,348 m and 6,282 m, respectively. Throughout the central Andes there are many ski resorts of international repute, including Portillo, Valle Nevado and Termas de Chillán.
The main tourist sites in the south are national parks (the most popular is Conguillío National Park in the Araucanía) and the coastal area around Tirúa and Cañete with the Isla Mocha and the Nahuelbuta National Park, the Chiloé Archipelago and Patagonia, which includes Laguna San Rafael National Park, with its many glaciers, and the Torres del Paine National Park. The central port city of Valparaíso, a World Heritage Site noted for its unique architecture, is also popular. Finally, Easter Island in the Pacific Ocean is one of the main Chilean tourist destinations.
For locals, tourism is concentrated mostly in the summer (December to March), and mainly in the coastal beach towns. Arica, Iquique, Antofagasta, La Serena and Coquimbo are the main summer centers in the north, and Pucón on the shores of Lake Villarrica is the main center in the south. Because of its proximity to Santiago, the coast of the Valparaíso Region, with its many beach resorts, receives the largest number of tourists. Viña del Mar, Valparaíso's more affluent northern neighbor, is popular because of its beaches, casino, and its annual song festival, the most important musical event in Latin America. Pichilemu in the O'Higgins Region is widely known as South America's "best surfing spot" according to Fodor's.
In November 2005 the government launched a campaign under the brand "Chile: All Ways Surprising" intended to promote the country internationally for both business and tourism. Museums in Chile such as the Chilean National Museum of Fine Arts built in 1880, feature works by Chilean artists.
Chile is home to the world-renowned Patagonian Trail, which runs along the border between Argentina and Chile. Chile recently launched a massive scenic route for tourism, the Route of Parks, in hopes of encouraging development based on conservation; it was designed by Tompkins Conservation (founded by Douglas Tompkins and his wife Kristine).
Transport
Due to Chile's topography, a functioning transport network is vital to its economy. In 2020, Chile had an extensive highway network, only part of which was paved, and the second largest network of duplicated (dual-carriageway) highways in South America, after Brazil. Since the mid-1990s, the country's roads have improved significantly through bidding processes that allowed the construction of an efficient road network, with emphasis on the duplication of the Panamerican Highway (Chile Route 5) between Puerto Montt and Caldera (in addition to the planned duplication in the Atacama Desert area), the stretches between Santiago, Valparaíso and the Central Coast, the northern access to Concepción, and the large Santiago urban highways network, opened between 2004 and 2006. Buses are now the main means of long-distance transportation in Chile, following the decline of its railway network. The bus system covers the entire country, from Arica to Santiago (a 30-hour journey) and from Santiago to Punta Arenas (about 40 hours, with a change at Osorno).
Chile has a total of 372 runways (62 paved and 310 unpaved). Important airports in Chile include Chacalluta International Airport (Arica), Diego Aracena International Airport (Iquique), Andrés Sabella Gálvez International Airport (Antofagasta), Carriel Sur International Airport (Concepción), El Tepual International Airport (Puerto Montt), Presidente Carlos Ibáñez del Campo International Airport (Punta Arenas), La Araucanía International Airport (Temuco), Mataveri International Airport (Easter Island), the most remote airport in the world as defined by distance to another airport, and Arturo Merino Benítez International Airport (Santiago), which handled 12,105,524 passengers in 2011. Santiago is the headquarters of Latin America's largest airline holding company and Chilean flag carrier, LATAM Airlines.
Internet and telecommunications
Chile has a telecommunication system that covers much of the country, including Chilean insular territories and Antarctic bases. Privatization of the telephone system began in 1988; Chile has one of the most advanced telecommunications infrastructures in South America, with a modern system based on extensive microwave radio relay facilities and a domestic satellite system with three earth stations. In 2012, there were 3.276 million main lines in use and 24.13 million mobile cellular telephone subscribers.
According to a 2012 database of the International Telecommunication Union (ITU), 61.42% of the Chilean population uses the internet, making Chile the country with the highest internet penetration in South America.
The Chilean internet country code is ".cl". In 2017 the government of Chile launched its first cyber security strategy, which receives technical support from the Organization of American States (OAS) Cyber Security Program of the Inter-American Committee against Terrorism (CICTE).
Energy
Chile's total energy supply (TES) was 23.0 GJ per capita in 2020. Energy in Chile is dominated by fossil fuels, with coal, oil and gas accounting for 73.4% of the total primary energy. Biofuels and waste account for another 20.5% of primary energy supply, with the rest sourced from hydro and other renewables.
Electricity consumption was 68.90 TWh in 2014. The main sources of electricity in Chile are hydroelectricity, gas, oil and coal. Renewable energy in the form of wind and solar power is also coming into use, encouraged by collaboration since 2009 with the United States Department of Energy. The electricity industry is privatized, with ENDESA as the largest company in the field.
In 2021, Chile had, in terms of installed renewable electricity capacity, 6,807 MW in hydropower (28th largest in the world), 3,137 MW in wind power (28th largest in the world), 4,468 MW in solar (22nd largest in the world), and 375 MW in biomass. Because the Atacama Desert has the highest solar irradiation in the world, and because Chile has long had difficulty obtaining oil, gas and coal (it produces very little of them and must import them), renewable energy is seen as the solution to the country's shortcomings in the energy field.
Demographics
Chile's 2017 census reported a population of 17,574,003. Its rate of population growth has been decreasing since 1990, due to a declining birth rate. By 2050 the population is expected to reach approximately 20.2 million people.
Ancestry and ethnicity
Mexican professor Francisco Lizcano, of the National Autonomous University of Mexico, estimated that 52.7% of Chileans were white, 39.3% were mestizo, and 8% were Amerindian.
In 1984, a study called Sociogenetic Reference Framework for Public Health Studies in Chile, from the Revista de Pediatría de Chile, determined an ancestry of 67.9% European and 32.1% Native American. In 1994, a biological study determined that the Chilean composition was 64% European and 35% Amerindian. A more recent study from the Candela Project established that the genetic composition of Chile is 52% European, 44% Native American (Amerindian) and 4% African, making Chile a primarily mestizo country with traces of African descent present in half of the population. Another genetic study, conducted by the University of Brasilia in several South American countries, shows a similar genetic composition for Chile, with a European contribution of 51.6%, an Amerindian contribution of 42.1%, and an African contribution of 6.3%. In 2015 another study established the genetic composition as 57% European, 38% Native American, and 2.5% African.
A public health booklet from the University of Chile states that 64% of the population is of Caucasian origin; "predominantly White" Mestizos are estimated to amount to a total of 35%, while Native Americans (Amerindians) comprise the remaining 5%.
Despite the genetic considerations, many Chileans, if asked, would self-identify as White. The 2011 Latinobarómetro survey asked respondents in Chile what race they considered themselves to belong to. Most answered "White" (59%), while 25% said "Mestizo" and 8% self-classified as "indigenous". A 2002 national poll revealed that a majority of Chileans believed they possessed some (43.4%) or much (8.3%) "indigenous blood", while 40.3% responded that they had none.
Chile is one of 22 countries to have signed and ratified the only binding international law concerning indigenous peoples, the Indigenous and Tribal Peoples Convention, 1989. It was adopted in 1989 as the International Labour Organization (ILO) Convention 169. Chile ratified it in 2008. A Chilean court decision in November 2009, considered to be a landmark ruling on indigenous rights, made use of the convention. The Supreme Court decision on Aymara water rights upheld rulings by both the Pozo Almonte tribunal and the Iquique Court of Appeals and marks the first judicial application of ILO Convention 169 in Chile.
The earliest European immigrants were Spanish colonisers who arrived in the 16th century. The Amerindian population of central Chile was absorbed into the Spanish settler population at the beginning of the colonial period, forming the large mestizo population that exists in Chile today; mestizos form the modern middle and lower classes. In the 18th and 19th centuries, many Basques came to Chile, where they integrated into the existing elites of Castilian origin. Postcolonial Chile was never a particularly attractive destination for migrants, owing to its remoteness and distance from Europe; Europeans preferred to stay in countries closer to their homelands instead of taking the long journey through the Straits of Magellan or crossing the Andes. European migration did not result in a significant change in the ethnic composition of Chile, except in the region of Magellan. Spaniards were the only major European migrant group to Chile, and there was never large-scale immigration such as that to Argentina or Brazil. Between 1851 and 1924, Chile received only 0.5% of European immigration to Latin America, compared to 46% for Argentina, 33% for Brazil, 14% for Cuba, and 4% for Uruguay. Nonetheless, immigrants have played a significant role in Chilean society.
Most of the immigrants to Chile during the 19th and 20th centuries came from France, Great Britain, Germany, and Croatia, among other places. Descendants of different European ethnic groups often intermarried in Chile, and this intermarriage and mixture of cultures and races has helped to shape the present society and culture of the Chilean middle and upper classes. Roughly 500,000 of Chile's population is of full or partial Palestinian origin, and some 800,000 are of Arab descent. Chile is currently home to 1.5 million Latin American immigrants, mainly from Venezuela, Peru, Haiti, Colombia, Bolivia and Argentina, amounting to 8% of the total population in 2019, not counting descendants. According to the 2002 national census, Chile's foreign-born population had increased by 75% since 1992. As of November 2021, the number of people entering Chile from elsewhere in Latin America had grown swiftly over the previous decade, tripling in the last three years to 1.5 million, with arrivals stemming from humanitarian crises in Haiti (c. 180,000) and Venezuela (c. 460,000).
Urbanization
About 85% of the country's population lives in urban areas, with 40% living in Greater Santiago. The largest agglomerations according to the 2002 census are Greater Santiago with 5.6 million people, Greater Concepción with 861,000, and Greater Valparaíso with 824,000.
Religion
In the 2012 census, 66.6% of the Chilean population over 15 years of age claimed to adhere to the Roman Catholic Church, a decrease from the 70% reported in the 2002 census. In the same 2012 census, 17% of Chileans reported adherence to an Evangelical church ("Evangelical" in the census referred to all Christian denominations other than the Roman Catholic and Orthodox (Greek, Persian, Serbian, Ukrainian, and Armenian) churches, the Church of Jesus Christ of Latter-day Saints, Seventh-day Adventists, and Jehovah's Witnesses: essentially, those denominations generally still termed "Protestant" in most English-speaking lands, although Adventism is often considered an Evangelical denomination as well). Approximately 90% of Evangelical Christians are Pentecostal, but Wesleyan, Lutheran, Anglican, Episcopalian, Presbyterian, other Reformed, Baptist, and Methodist churches are also present among Chilean Evangelical churches. Irreligious people, atheists, and agnostics account for around 12% of the population.
By 2015, the major religion in Chile remained Christianity (68%), with an estimated 55% of Chileans belonging to the Roman Catholic Church, 13% to various Evangelical churches, and just 7% adhering to any other religion. Agnostics and atheists were estimated at 25% of the population.
Chile has a Baháʼí religious community, and is home to the Baháʼí mother temple, or continental House of Worship, for Latin America. Completed in 2016, it serves as a space for people of all religions and backgrounds to gather, meditate, reflect, and worship. It is formed from cast glass and translucent marble and has been described as innovative in its architectural style.
The Constitution guarantees the right to freedom of religion, and other laws and policies contribute to generally free religious practice. The law at all levels fully protects this right against abuse by either governmental or private actors. Church and state are officially separate in Chile. A 1999 law on religion prohibits religious discrimination.
However, the Roman Catholic Church, for mostly historical and social reasons, enjoys a privileged status and occasionally receives preferential treatment. Government officials attend Roman Catholic events as well as major Evangelical and Jewish ceremonies.
The Chilean government treats the religious holidays of Christmas, Good Friday, the Feast of the Virgin of Carmen, the Feast of Saints Peter and Paul, the Feast of the Assumption, All Saints' Day, and the Feast of the Immaculate Conception as national holidays. Recently, the government declared 31 October, Reformation Day, to be an additional national holiday, in honor of the Evangelical churches of the country.
The patron saints of Chile are Our Lady of Mount Carmel and Saint James the Greater (Santiago). In 2005, Pope Benedict XVI canonized Alberto Hurtado, who became the country's second native Roman Catholic saint after Teresa de los Andes.
Languages
The Spanish spoken in Chile is distinctively accented and quite unlike that of neighboring South American countries because final syllables are often dropped, and some consonants have a soft pronunciation. Accent varies only very slightly from north to south; more noticeable are the differences in accent based on social class or whether one lives in the city or the country. That the Chilean population was largely formed in a small section at the center of the country and then migrated in modest numbers to the north and south helps explain this relative lack of differentiation, which was maintained by the national reach of radio, and now television, which also helps to diffuse and homogenize colloquial expressions.
There are several indigenous languages spoken in Chile: Mapudungun, Aymara, Rapa Nui, Chilean Sign Language and (barely surviving) Qawasqar and Yaghan, along with non-indigenous German, Italian, English, Greek and Quechua. After the Spanish conquest, Spanish took over as the lingua franca and the indigenous languages have become minority languages, with some now extinct or close to extinction.
German is still spoken to some extent in southern Chile, either in small countryside pockets or as a second language among the communities of larger cities.
Through initiatives such as the English Opens Doors Program, the government made English mandatory for students in fifth grade and above in public schools. Most private schools in Chile start teaching English from kindergarten. Common English words have been absorbed and appropriated into everyday Spanish speech.
Health
The Ministry of Health (Minsal) is the cabinet-level administrative office in charge of planning, directing, coordinating, executing, controlling and informing the public health policies formulated by the President of Chile. The National Health Fund (Fonasa), created in 1979, is the financial entity entrusted with collecting, managing and distributing state funds for health in Chile. It is publicly funded: all employees pay 7% of their monthly income into the fund.
Fonasa is part of the NHSS and has executive power through the Ministry of Health. Its headquarters are in Santiago, and decentralized public service is conducted by various Regional Offices. Fonasa serves more than 12 million beneficiaries; beneficiaries can also opt for more costly private insurance through an Isapre.
Education
In Chile, education begins with preschool until the age of 5. Primary school is provided for children between ages 6 and 13. Students then attend secondary school until graduation at age 17.
Secondary education is divided into two parts. During the first two years, students receive a general education. They then choose a branch: scientific-humanistic education, artistic education, or technical and professional education. Secondary school ends two years later with the award of a certificate (licencia de enseñanza media).
Chilean education is segregated by wealth in a three-tiered system – the quality of the schools reflects socioeconomic backgrounds:
city schools (colegios municipales) that are mostly free and have the worst education results, mostly attended by poor students;
subsidized schools that receive some money from the government which can be supplemented by fees paid by the student's family, which are attended by mid-income students and typically get mid-level results; and
entirely private schools that consistently get the best results. Many private schools charge attendance fees of 0.5 to 1 times the median household income.
Upon successful completion of secondary school, students may continue into higher education. Higher education institutions in Chile include the Chilean Traditional Universities, which are divided into public and private universities. There are medical schools, and both the Universidad de Chile and the Universidad Diego Portales offer law programs in partnership with Yale University.
Culture
From the period between early agricultural settlements and up to the late pre-Columbian period, northern Chile was a region of Andean culture that was influenced by altiplano traditions spreading to the coastal valleys of the north, while southern regions were areas of Mapuche cultural activities. Throughout the colonial period following the conquest, and during the early Republican period, the country's culture was dominated by the Spanish. Other European influences, primarily English, French, and German began in the 19th century and have continued to this day. German migrants influenced the Bavarian style rural architecture and cuisine in the south of Chile in cities such as Valdivia, Frutillar, Puerto Varas, Osorno, Temuco, Puerto Octay, Llanquihue, Faja Maisan, Pitrufquén, Victoria, Pucón and Puerto Montt.
Music and dance
Music in Chile ranges from folkloric and popular to classical. The country's extensive geography gives rise to different musical styles in the north, center and south, including Easter Island and Mapuche music. The national dance is the cueca. Another form of traditional Chilean song, though not a dance, is the tonada. Arising from music imported by the Spanish colonists, it is distinguished from the cueca by an intermediate melodic section and a more prominent melody.
Between the 1950s and the 1970s, native folk music was revitalized by a movement led by composers such as Violeta Parra and Raúl de Ramón, which was also associated with political activists and reformers such as Víctor Jara, Inti-Illimani, and Quilapayún. Many Chilean rock bands, such as Los Jaivas, Los Prisioneros, La Ley, Los Tres and Los Bunkers, have also reached international success, some of them, such as Los Jaivas, incorporating strong folk influences. In February, annual music and comedy festivals are held in Viña del Mar.
Literature
Chile is a country of poets. Gabriela Mistral was the first Latin American to receive a Nobel Prize in Literature (1945). Chile's most famous poet is Pablo Neruda, who received the Nobel Prize for Literature (1971) and is world-renowned for his extensive library of works on romance, nature, and politics. His three highly personalized homes in Isla Negra, Santiago and Valparaíso are popular tourist destinations.
Among the list of other Chilean poets are Carlos Pezoa Véliz, Vicente Huidobro, Gonzalo Rojas, Pablo de Rokha, Nicanor Parra, Ivonne Coñuecar and Raúl Zurita. Isabel Allende is the best-selling Chilean novelist, with 51 million of her novels sold worldwide. Novelist José Donoso's novel The Obscene Bird of Night is considered by critic Harold Bloom to be one of the canonical works of 20th-century Western literature. Another internationally recognized Chilean novelist and poet is Roberto Bolaño whose translations into English have had an excellent reception from the critics.
Cuisine
Chilean cuisine is a reflection of the country's topographical variety, featuring an assortment of seafood, beef, fruits, and vegetables. Traditional recipes include asado, cazuela, empanadas, humitas, pastel de choclo, pastel de papas, curanto, and sopaipillas. Crudos is an example of the mixture of culinary contributions from the various ethnic influences in Chile. The raw minced llama, heavy use of shellfish, and rice bread were taken from native Quechua Andean cuisine, (although beef, brought to Chile by Europeans, is also used in place of the llama meat), lemon and onions were brought by the Spanish colonists, and the use of mayonnaise and yogurt was introduced by German immigrants, as was beer.
Folklore
The folklore of Chile, reflecting the cultural and demographic characteristics of the country, is the result of the mixture of Spanish and Amerindian elements during the colonial period. For cultural and historical reasons, four major folk areas are distinguished in the country: the north, the center, the south and the far south. Most Chilean folk traditions have a festive purpose, but some, such as dances and ceremonies, have religious components.
Chilean mythology is the mythology and beliefs of the Folklore of Chile. This includes Chilote mythology, Rapa Nui mythology and Mapuche mythology.
Sports
Chile's most popular sport is association football. Chile has appeared in nine FIFA World Cups, including the 1962 FIFA World Cup, which it hosted and in which the national football team finished third. Other results achieved by the national football team include two Copa América titles (2015 and 2016), two Copa América runner-up finishes, one silver and two bronze medals at the Pan American Games, a bronze medal at the 2000 Summer Olympics, and two third-place finishes in the FIFA under-17 and under-20 youth tournaments. The top league in the Chilean football league system is the Chilean Primera División, which was named by the IFFHS as the ninth strongest national football league in the world.
The main football clubs are Colo-Colo, Universidad de Chile and Universidad Católica. Colo-Colo is the country's most successful football club, having both the most national and international championships, including the coveted Copa Libertadores South American club tournament. Universidad de Chile was the last international champion (Copa Sudamericana 2011).
Tennis is Chile's most successful sport. Its national team won the World Team Cup clay tournament twice (2003 & 2004), and played the Davis Cup final against Italy in 1976. At the 2004 Summer Olympics the country captured gold and bronze in men's singles and gold in men's doubles (Nicolás Massú obtained two gold medals). Marcelo Ríos became the first Latin American man to reach the number one spot in the ATP singles rankings in 1998. Anita Lizana won the US Open in 1937, becoming the first woman from Latin America to win a Grand Slam tournament. Luis Ayala was twice a runner-up at the French Open and both Ríos and Fernando González reached the Australian Open men's singles finals. González also won a silver medal in singles at the 2008 Summer Olympics in Beijing.
At the Summer Olympic Games Chile boasts a total of two gold medals (tennis), seven silver medals (athletics, equestrian, boxing, shooting and tennis) and four bronze medals (tennis, boxing and football). In 2012, Chile won its first Paralympic Games medal (gold in Athletics).
Rodeo is the country's national sport and is practiced in the more rural areas of the nation. A sport similar to hockey, called chueca, was played by the Mapuche people during the Spanish conquest. Skiing and snowboarding are practiced at ski centers in the central Andes and at southern ski centers near cities such as Osorno, Puerto Varas, Temuco and Punta Arenas. Surfing is popular at some coastal towns. Polo is played professionally in Chile, with the country taking top prize in the 2008 and 2015 World Polo Championship.
Basketball is a popular sport in which Chile earned a bronze medal in the first men's FIBA World Championship held in 1950 and won a second bronze medal when Chile hosted the 1959 FIBA World Championship. Chile hosted the first FIBA World Championship for Women in 1953, finishing the tournament with the silver medal. San Pedro de Atacama hosts the annual "Atacama Crossing", a six-stage footrace that annually attracts about 150 competitors from 35 countries. The Dakar Rally off-road automobile race has been held in both Chile and Argentina since 2009.
Cultural heritage
The cultural heritage of Chile consists, first, of its intangible heritage, composed of various cultural events and activities, such as visual arts, crafts, dances, holidays, cuisine, games, music and traditions. Secondly, its tangible heritage consists of those buildings, objects and sites of archaeological, architectural, traditional, artistic, ethnographic, folkloric, historical, religious or technological significance scattered through Chilean territory. Among them, some are declared World Heritage Sites by UNESCO, in accordance with the provisions of the Convention concerning the Protection of World Cultural and Natural Heritage of 1972, ratified by Chile in 1980. These cultural sites are the Rapa Nui National Park (1995), the Churches of Chiloé (2000), the historical district of the port city of Valparaíso (2003), Humberstone and Santa Laura Saltpeter Works (2005) and the mining city Sewell (2006).
In 1999 Cultural Heritage Day was established as a way to honour and commemorate Chile's cultural heritage. It is an official national event celebrated in May every year.
See also
Index of Chile-related articles
Outline of Chile
COVID-19 pandemic in Chile
References
Notes
Citations
Further reading
Simon Collier and William F. Sater, A History of Chile, 1808–1894, Cambridge University Press, 1996
Paul W. Drake, and others., Chile: A Country Study, Library of Congress, 1994
Luis Galdames, A History of Chile, University of North Carolina Press, 1941
Brian Loveman, Chile: The Legacy of Hispanic Capitalism, 3rd ed., Oxford University Press, 2001
John L. Rector, The History of Chile, Greenwood Press, 2003
External links
Official Chile Government website
ThisIsChile Tourism & Commerce Website
Chile. The World Factbook. Central Intelligence Agency.
Chile from UCB Libraries GovPubs
Chile profile from the BBC News
Road maps of Chile, interactive
World Bank Summary Trade Statistics Chile
Key Development Forecasts for Chile from International Futures
Chile Cultural Society
History of Chile
The territory of Chile has been populated since at least 3000 BC. By the 16th century, Spanish conquistadors began to colonize the region of present-day Chile, and the territory was a colony between 1540 and 1818, when it gained independence from Spain. The country's economic development was successively marked by the export of first agricultural produce, then saltpeter and later copper. The wealth of raw materials led to an economic upturn, but also led to dependency, and even wars with neighboring states. Chile was governed during most of its first 150 years of independence by different forms of restricted government, where the electorate was carefully vetted and controlled by an elite.
Failure to address economic and social inequities and the increasing political awareness of the less-affluent population, as well as indirect intervention and economic funding of the main political groups by the CIA as part of the Cold War, led to political polarization under Socialist President Salvador Allende. This in turn resulted in the 1973 coup d'état and the military dictatorship of General Augusto Pinochet, whose subsequent 17-year regime was responsible for many human rights violations and deep market-oriented economic reforms. In 1990, Chile made a peaceful transition to democracy and initiated a succession of democratic governments.
Early history (pre-1540)
About 10,000 years ago, migrating Native Americans settled in the fertile valleys and coastal areas of what is present-day Chile. Pre-Hispanic Chile was home to over a dozen different Amerindian societies. The currently prevalent theories are that the initial arrival of humans on the continent took place either along the Pacific coast southwards in a rather rapid expansion long preceding the Clovis culture, or even by trans-Pacific migration. These theories are backed by findings at the Monte Verde archaeological site, which predates the Clovis site by thousands of years. Early human settlement sites in Chile include the Cueva del Milodon and the Pali Aike Crater's lava tube.
Despite such diversity, it is possible to classify the indigenous peoples into three major cultural groups: the northern people, who developed rich handicrafts and were influenced by pre-Incan cultures; the Araucanian culture, who inhabited the area between the river Choapa and the island of Chiloé and lived primarily off agriculture; and the Patagonian culture, composed of various nomadic tribes who supported themselves through fishing and hunting (and who, in the Pacific coast migration scenario, would be descended partly from the most ancient settlers).
No elaborate, centralized, sedentary civilization reigned supreme.
The Araucanians, a fragmented society of hunters, gatherers, and farmers, constituted the largest Native American group in Chile. A mobile people who engaged in trade and warfare with other indigenous groups, they lived in scattered family clusters and small villages. Although the Araucanians had no written language, they did share a common tongue. Those in what became central Chile were more settled and more likely to use irrigation, while those in the south combined slash-and-burn agriculture with hunting. Of the three Araucanian groups, the one that mounted the fiercest resistance to the attempts at seizure of their territory was the Mapuche, meaning "people of the land."
The Incas briefly extended their empire into what is now northern Chile, where they collected tribute from small groups of fishermen and oasis farmers but were not able to establish a strong cultural presence. As the Spaniards would after them, the Incas encountered fierce resistance and were unable to exert control in the south. During their attempts at conquest in 1460 and again in 1491, the Incas established forts in the Central Valley of Chile, but they could not colonize the region. The Mapuche fought against the Sapa Inca Tupac Yupanqui (c. 1471–1493) and his army. The result of the bloody three-day confrontation known as the Battle of the Maule was that the Inca conquest of the territories of Chile ended at the Maule River, which subsequently became the boundary between the Inca Empire and the Mapuche lands until the arrival of the Spaniards.
Scholars speculate that the total Araucanian population may have numbered 1.5 million at most when the Spaniards arrived in the 1530s; a century of European conquest and disease reduced that number by at least half. During the conquest, the Araucanians quickly added horses and European weaponry to their arsenal of clubs and bows and arrows. They became adept at raiding Spanish settlements and, albeit in declining numbers, managed to hold off the Spaniards and their descendants until the late 19th century. The Araucanians' valor inspired the Chileans to mythologize them as the nation's first national heroes, a status that did nothing, however, to elevate the wretched living standard of their descendants.
Chilean Patagonia, located south of the Calle-Calle River in Valdivia, was home to many tribes, mainly Tehuelches, who were considered giants by the Spaniards during Magellan's voyage of 1520.
The name Patagonia comes from the word patagón used by Magellan to describe the native people whom his expedition thought to be giants. It is now believed the Patagons were actually Tehuelches with an average height of 1.80 m (~5′11″) compared to the 1.55 m (~5′1″) average for Spaniards of the time.
The Argentine portion of Patagonia includes the provinces of Neuquén, Río Negro, Chubut and Santa Cruz, as well as the eastern portion of Tierra del Fuego archipelago. The Argentine politico-economic Patagonic Region includes the Province of La Pampa.
The Chilean part of Patagonia embraces the southern part of Valdivia, Los Lagos around Lake Llanquihue, Chiloé, Puerto Montt and the archaeological site of Monte Verde, as well as the fjords and islands south to the regions of Aisén and Magallanes, including the western side of Tierra del Fuego and Cape Horn.
European conquest and colonization (1540–1810)
The first European to sight Chilean territory was Ferdinand Magellan, who crossed the Strait of Magellan on November 1, 1520. However, the title of discoverer of Chile is usually assigned to Diego de Almagro, Francisco Pizarro's partner, who received the southern area (Nueva Toledo). He organized an expedition that brought him to central Chile in 1537, but he found little of value to compare with the gold and silver of the Incas in Peru. Left with the impression that the inhabitants of the area were poor, he returned to Peru, where he was later garrotted following his defeat by Hernando Pizarro in a civil war.
After this initial excursion there was little interest from colonial authorities in further exploring modern-day Chile. However, Pedro de Valdivia, captain of the army, realizing the potential for expanding the Spanish empire southward, asked Pizarro's permission to invade and conquer the southern lands. With a couple of hundred men, he subdued the local inhabitants and founded the city of Santiago de Nueva Extremadura, now Santiago de Chile, on February 12, 1541.
Although Valdivia found little gold in Chile he could see the agricultural richness of the land. He continued his explorations of the region west of the Andes and founded over a dozen towns and established the first encomiendas. The greatest resistance to Spanish rule came from the Mapuche people, who opposed European conquest and colonization until the 1880s; this resistance is known as the Arauco War. Valdivia died at the Battle of Tucapel, defeated by Lautaro, a young Mapuche toqui (war chief), but the European conquest was well underway.
The Spaniards never subjugated the Mapuche territories; various attempts at conquest, both by military and peaceful means, failed. The Great Uprising of 1598 swept all Spanish presence south of the Bío-Bío River except Chiloé (and Valdivia which was decades later reestablished as a fort), and the great river became the frontier line between Mapuche lands and the Spanish realm.
North of that line cities grew up slowly, and Chilean lands eventually became an important source of food for the Viceroyalty of Peru.
Valdivia became the first governor of the Captaincy General of Chile. In that post, he obeyed the viceroy of Peru and, through him, the King of Spain and his bureaucracy. Responsible to the governor, town councils known as Cabildo administered local municipalities, the most important of which was Santiago, which was the seat of a Royal Appeals Court (Real Audiencia) from 1609 until the end of colonial rule.
Chile was the least wealthy realm of the Spanish Crown for most of its colonial history. Only in the 18th century did a steady economic and demographic growth begin, an effect of the reforms by Spain's Bourbon dynasty and a more stable situation along the frontier.
Independence (1810–1818)
The drive for independence from Spain was precipitated by the usurpation of the Spanish throne by Napoleon's brother Joseph Bonaparte. The Chilean War of Independence was part of the larger Spanish American independence movement, and it was far from having unanimous support among Chileans, who became divided between independentists and royalists. What started as an elitist political movement against their colonial master, finally ended as a full-fledged civil war between pro-Independence Criollos who sought political and economic independence from Spain and royalist Criollos, who supported the continued allegiance to and permanence within the Spanish Empire of the Captaincy General of Chile. The struggle for independence was a war within the upper class, although the majority of troops on both sides consisted of conscripted mestizos and Native Americans.
The beginning of the Independence movement is traditionally dated as of September 18, 1810, when a national junta was established to govern Chile in the name of the deposed king Ferdinand VII. Depending on what terms one uses to define the end, the movement extended until 1821 (when the Spanish were expelled from mainland Chile) or 1826 (when the last Spanish troops surrendered and Chiloé was incorporated into the Chilean republic). The independence process is normally divided into three stages: Patria Vieja, Reconquista, and Patria Nueva.
Chile's first experiment with self-government, the "Patria Vieja" (old fatherland, 1810–1814), was led by José Miguel Carrera, an aristocrat then in his mid-twenties. The military-educated Carrera was a heavy-handed ruler who aroused widespread opposition. Another of the earliest advocates of full independence, Bernardo O'Higgins, captained a rival faction that plunged the Criollos into civil war. For him and certain other members of the Chilean elite, the initiative for temporary self-rule quickly escalated into a campaign for permanent independence, although other Criollos remained loyal to Spain.
Among those favouring independence, conservatives fought with liberals over the degree to which French revolutionary ideas would be incorporated into the movement. After several efforts, Spanish troops from Peru took advantage of the internecine strife to reconquer Chile in 1814, when they reasserted control by the Battle of Rancagua on October 12. O'Higgins, Carrera and many of the Chilean rebels escaped to Argentina.
The second period was characterized by the Spanish attempts to reimpose arbitrary rule during the period known as the Reconquista of 1814–1817 ("Reconquest": the term echoes the Reconquista in which the Christian kingdoms retook Iberia from the Muslims). During this period, the harsh rule of the Spanish loyalists, who punished suspected rebels, drove more and more Chileans into the insurrectionary camp. More members of the Chilean elite were becoming convinced of the necessity of full independence, regardless of who sat on the throne of Spain. As the leader of guerrilla raids against the Spaniards, Manuel Rodríguez became a national symbol of resistance.
In exile in Argentina, O'Higgins joined forces with José de San Martín. Their combined army freed Chile with a daring assault over the Andes in 1817, defeating the Spaniards at the Battle of Chacabuco on February 12 and marking the beginning of the Patria Nueva. San Martín considered the liberation of Chile a strategic stepping-stone to the emancipation of Peru, which he saw as the key to hemispheric victory over the Spanish.
Chile won its formal independence when San Martín defeated the last large Spanish force on Chilean soil at the Battle of Maipú on April 5, 1818. San Martín then led his Argentine and Chilean followers north to liberate Peru; and fighting continued in Chile's southern provinces, the bastion of the royalists, until 1826.
A declaration of independence was officially issued by Chile on February 12, 1818, and formally recognized by Spain in 1840, when full diplomatic relations were established.
Republican era (1818–1891)
Constitutional organization (1818–1833)
From 1817 to 1823, Bernardo O'Higgins ruled Chile as supreme director. He won plaudits for defeating royalists and founding schools, but civil strife continued. O'Higgins alienated liberals and provincials with his authoritarianism, conservatives and the church with his anticlericalism, and landowners with his proposed reforms of the land tenure system. His attempt to devise a constitution in 1818 that would legitimize his government failed, as did his effort to generate stable funding for the new administration. O'Higgins's dictatorial behavior aroused resistance in the provinces. This growing discontent was reflected in the continuing opposition of partisans of Carrera, who was executed by the Argentine regime in Mendoza in 1821, as his two brothers had been three years earlier.
Although himself opposed by many liberals, O'Higgins angered the Roman Catholic Church with his own liberal beliefs. He maintained Catholicism's status as the official state religion but tried to curb the church's political powers and to encourage religious tolerance as a means of attracting Protestant immigrants and traders. Like the church, the landed aristocracy felt threatened by O'Higgins, resenting his attempts to eliminate noble titles and, more importantly, entailed estates.
O'Higgins's opponents also disapproved of his diversion of Chilean resources to aid San Martín's liberation of Peru. O'Higgins insisted on supporting that campaign because he realized that Chilean independence would not be secure until the Spaniards were routed from the Andean core of the empire. However, amid mounting discontent, troops from the northern and southern provinces forced O'Higgins to resign. Embittered, O'Higgins departed for Peru, where he died in 1842.
After O'Higgins went into exile in 1823, civil conflict continued, focusing mainly on the issues of anticlericalism and regionalism. Presidents and constitutions rose and fell quickly in the 1820s. The civil struggle's harmful effects on the economy, and particularly on exports, prompted conservatives to seize national control in 1830.
In the minds of most members of the Chilean elite, the bloodshed and chaos of the late 1820s were attributable to the shortcomings of liberalism and federalism, which had been dominant over conservatism for most of the period. The political camp became divided between supporters of O'Higgins, supporters of Carrera, liberal Pipiolos and conservative Pelucones; the last two were the main movements that prevailed and absorbed the rest. The abolition of slavery in 1823—long before most other countries in the Americas—was considered one of the Pipiolos' few lasting achievements. One Pipiolo leader from the south, Ramón Freire, rode in and out of the presidency several times (1823–1827, 1828, 1829, 1830) but could not sustain his authority. From May 1827 to September 1831, with the exception of brief interventions by Freire, the presidency was occupied by Francisco Antonio Pinto, Freire's former vice president.
In August 1828, Pinto's first year in office, Chile abandoned its short-lived federalist system for a unitary form of government, with separate legislative, executive, and judicial branches. By adopting a moderately liberal constitution in 1828, Pinto alienated both the federalists and the liberal factions. He also angered the old aristocracy by abolishing estates inherited by primogeniture (mayorazgo) and caused a public uproar with his anticlericalism. After the defeat of his liberal army at the Battle of Lircay on April 17, 1830, Freire, like O'Higgins, went into exile in Peru.
Conservative Era (1830–1861)
Although never president, Diego Portales dominated Chilean politics from the cabinet and behind the scenes from 1830 to 1837. He installed the "autocratic republic", which centralized authority in the national government. His political program enjoyed support from merchants, large landowners, foreign capitalists, the church, and the military. Political and economic stability reinforced each other, as Portales encouraged economic growth through free trade and put government finances in order. Portales was an agnostic who said that he believed in the clergy but not in God. He realized the importance of the Roman Catholic Church as a bastion of loyalty, legitimacy, social control and stability, as had been the case in the colonial period. He repealed Liberal reforms that had threatened church privileges and properties.
The "Portalian State" was institutionalized by the Chilean Constitution of 1833. One of the most durable charters ever devised in Latin America, the Portalian constitution lasted until 1925. The constitution concentrated authority in the national government, more precisely, in the hands of the president, who was elected by a tiny minority. The chief executive could serve two consecutive five-year terms and then pick a successor. Although the Congress had significant budgetary powers, it was overshadowed by the president, who appointed provincial officials. The constitution also created an independent judiciary, guaranteed inheritance of estates by primogeniture, and installed Catholicism as the state religion. In short, it established an autocratic system under a republican veneer.
Portales also achieved his objectives by wielding dictatorial powers, censoring the press, and manipulating elections. For the next forty years, Chile's armed forces would be distracted from meddling in politics by skirmishes and defensive operations on the southern frontier, although some units got embroiled in domestic conflicts in 1851 and 1859.
The Portalian president was General Joaquín Prieto, who served two terms (1831–1836, 1836–1841). President Prieto had four main accomplishments: implementation of the 1833 constitution, stabilization of government finances, defeat of provincial challenges to central authority, and victory over the Peru-Bolivia Confederation. During the presidencies of Prieto and his two successors, Chile modernized through the construction of ports, railroads, and telegraph lines, some built by United States entrepreneur William Wheelwright. These innovations facilitated the export-import trade as well as domestic commerce.
Prieto and his adviser, Portales, feared the efforts of Bolivian general Andrés de Santa Cruz to unite with Peru against Chile. These qualms exacerbated animosities toward Peru dating from the colonial period, now intensified by disputes over customs duties and loans. Chile also wanted to become the dominant South American military and commercial power along the Pacific. Santa Cruz united Peru and Bolivia in the Peru–Bolivian Confederation in 1836 with a desire to expand control over Argentina and Chile. Portales got Congress to declare war on the Confederation. Portales was assassinated by mutinous soldiers in 1837. General Manuel Bulnes defeated the Confederation at the Battle of Yungay in 1839.
After this success, Bulnes was elected president in 1841. He served two terms (1841–1846, 1846–1851). His administration concentrated on the occupation of the national territory, especially the Strait of Magellan and the Araucanía. The Venezuelan Andrés Bello made important intellectual advances in this period, most notably the founding of the University of Chile. But political tensions, including a liberal rebellion, led to the Chilean Civil War of 1851. In the end the conservatives defeated the liberals.
The last conservative president was Manuel Montt, who also served two terms (1851–1856, 1856–1861), but his poor administration led to the liberal rebellion of 1859. The liberals triumphed in 1861 with the election of José Joaquín Pérez as president.
Liberal era (1861–1891)
The political revolt brought little social change, however, and 19th century Chilean society preserved the essence of the stratified colonial social structure, which was greatly influenced by family politics and the Roman Catholic Church. A strong presidency eventually emerged, but wealthy landowners remained powerful.
Toward the end of the 19th century, the government in Santiago consolidated its position in the south by persistently suppressing the Mapuche during the Occupation of the Araucanía. In 1881, it signed the Boundary Treaty of 1881 between Chile and Argentina confirming Chilean sovereignty over the Strait of Magellan, but conceding all of eastern Patagonia, a considerable fraction of the territory it had claimed during colonial times. As a result of the War of the Pacific with Peru and Bolivia (1879–1883), Chile expanded its territory northward by almost one-third and acquired valuable nitrate deposits, the exploitation of which led to an era of national affluence.
In the 1870s, the church's influence began to diminish slightly with the passage of several laws that transferred some of its traditional roles, such as the registry of births and marriages, into the state's hands.
In 1886, José Manuel Balmaceda was elected president. His economic policies visibly departed from the existing liberal ones. He began to violate the constitution and slowly to establish a dictatorship. Congress decided to depose Balmaceda, who refused to step down. Jorge Montt, among others, led an armed conflict against Balmaceda, which soon escalated into the 1891 Chilean Civil War. Defeated, Balmaceda fled to the Argentine embassy, where he committed suicide. Jorge Montt became the new president.
Parliamentary era (1891–1925)
The so-called Parliamentary Republic was not a true parliamentary system, in which the chief executive is elected by the legislature. It was, however, an unusual regime in presidentialist Latin America, for Congress really did overshadow the rather ceremonial office of the president and exerted authority over the chief executive's cabinet appointees. In turn, Congress was dominated by the landed elites. This was the heyday of classic political and economic liberalism.
For many decades thereafter, historians derided the Parliamentary Republic as a quarrel-prone system that merely distributed spoils and clung to its laissez-faire policy while national problems mounted. The characterization is epitomized by an observation made by President Ramón Barros Luco (1910–1915), reputedly made in reference to labor unrest: "There are only two kinds of problems: those that solve themselves and those that can't be solved."
At the mercy of Congress, cabinets came and went frequently, although there was more stability and continuity in public administration than some historians have suggested. Chile also temporarily resolved its border disputes with Argentina through the Boundary Treaty of 1881, the Puna de Atacama Lawsuit of 1899 and the 1902 General Treaty of Arbitration, though not without engaging in an expensive naval arms race beforehand.
Political authority ran from local electoral bosses in the provinces through the congressional and executive branches, which reciprocated with payoffs from taxes on nitrate sales. Congressmen often won election by bribing voters in this clientelistic and corrupt system. Many politicians relied on intimidated or loyal peasant voters in the countryside, even though the population was becoming increasingly urban. The lackluster presidents and ineffectual administrations of the period did little to respond to the country's dependence on volatile nitrate exports, spiraling inflation, and massive urbanization.
However, particularly when the authoritarian regime of Augusto Pinochet is taken into consideration, some scholars have in recent years reevaluated the Parliamentary Republic of 1891–1925. Without denying its shortcomings, they have lauded its democratic stability. They have also hailed its control of the armed forces, its respect for civil liberties, its expansion of suffrage and participation, and its gradual admission of new contenders, especially reformers, to the political arena. In particular, two young parties grew in importance – the Democrat Party, with roots among artisans and urban workers, and the Radical Party, representing urban middle sectors and provincial elites.
By the early 20th century, both parties were winning increasing numbers of seats in Congress. The more leftist members of the Democrat Party became involved in the leadership of labor unions and broke off to launch the Socialist Workers' Party (Partido Obrero Socialista – POS) in 1912. The founder of the POS and its best-known leader, Luis Emilio Recabarren, also founded the Communist Party of Chile (Partido Comunista de Chile – PCCh) in 1922.
Presidential era (1925–1973)
By the 1920s, the emerging middle and working classes were powerful enough to elect a reformist president, Arturo Alessandri Palma. Alessandri appealed to those who believed the social question should be addressed, to those worried by the decline in nitrate exports during World War I, and to those weary of presidents dominated by Congress. Promising "evolution to avoid revolution", he pioneered a new campaign style of appealing directly to the masses with florid oratory and charisma. After winning a seat in the Senate representing the mining north in 1915, he earned the sobriquet "Lion of Tarapacá."
As a dissident Liberal running for the presidency, Alessandri attracted support from the more reformist Radicals and Democrats and formed the so-called Liberal Alliance. He received strong backing from the middle and working classes as well as from the provincial elites. Students and intellectuals also rallied to his banner. At the same time, he reassured the landowners that social reforms would be limited to the cities.
Alessandri soon discovered that his efforts to lead would be blocked by the conservative Congress. Like Balmaceda, he infuriated the legislators by going over their heads to appeal to the voters in the congressional elections of 1924. His reform legislation was finally rammed through Congress under pressure from younger military officers, who were sick of the neglect of the armed forces, political infighting, social unrest, and galloping inflation.
A double military coup set off a period of great political instability that lasted until 1932. First, military right-wingers opposing Alessandri seized power in September 1924, and then reformers in favor of the ousted president took charge in January 1925. The Saber noise (ruido de sables) incident of September 1924, provoked by the discontent of young officers, mostly lieutenants from the middle and working classes, led to the establishment of the September Junta led by General Luis Altamirano and the exile of Alessandri.
However, fears of a conservative restoration in progressive sectors of the army led to another coup in January 1925, which ended with the establishment of the January Junta as an interim government awaiting Alessandri's return. This group was led by two colonels, Carlos Ibáñez del Campo and Marmaduke Grove. They returned Alessandri to the presidency that March and enacted his promised reforms by decree; a new constitution encapsulating these reforms was ratified in a plebiscite in September 1925.
The new constitution gave increased powers to the presidency. Alessandri broke with the laissez-faire policies of classical liberalism by creating a central bank and imposing an income tax. However, social unrest was also violently repressed, leading to the Marusia massacre in March 1925, followed by the La Coruña massacre.
The longest lasting of the ten governments between 1924 and 1932 was that of General Carlos Ibáñez, who briefly held power in 1925 and then again between 1927 and 1931 in what was a de facto dictatorship. When constitutional rule was restored in 1932, a strong middle-class party, the Radicals, emerged. It became the key force in coalition governments for the next 20 years.
The Seguro Obrero Massacre took place on September 5, 1938, in the midst of a heated three-way election campaign between the ultraconservative Gustavo Ross Santa María, the radical Popular Front's Pedro Aguirre Cerda, and the newly formed Popular Alliance candidate, Carlos Ibáñez del Campo. The National Socialist Movement of Chile supported Ibáñez's candidacy, which had been announced on September 4. In order to preempt Ross's victory, the National Socialists mounted a coup d'état that was intended to take down the rightwing government of Arturo Alessandri Palma and place Ibáñez in power.
During the period of Radical Party dominance (1932–1952), the state increased its role in the economy. In 1952, voters returned Ibáñez to office for another 6 years. Jorge Alessandri succeeded Ibáñez in 1958.
The 1964 presidential election of Christian Democrat Eduardo Frei Montalva by an absolute majority initiated a period of major reform. Under the slogan "Revolution in Liberty", the Frei administration embarked on far-reaching social and economic programs, particularly in education, housing, and agrarian reform, including rural unionization of agricultural workers. By 1967, however, Frei encountered increasing opposition from leftists, who charged that his reforms were inadequate, and from conservatives, who found them excessive. At the end of his term, Frei had accomplished many noteworthy objectives, but he had not fully achieved his party's ambitious goals.
Popular Unity years
In the 1970 presidential election, Senator Salvador Allende Gossens won a plurality of votes in a three-way contest. He was a Marxist physician and member of Chile's Socialist Party, who headed the "Popular Unity" (UP or "Unidad Popular") coalition of the Socialist, Communist, Radical, and Social-Democratic Parties, along with dissident Christian Democrats, the Popular Unitary Action Movement (MAPU), and the Independent Popular Action.
Allende had two main competitors in the election — Radomiro Tomic, representing the incumbent Christian Democratic party, who ran a left-wing campaign with much the same theme as Allende's, and the right-wing former president Jorge Alessandri. In the end, Allende received a plurality of the votes cast, getting 36% of the vote against Alessandri's 35% and Tomic's 28%.
Despite pressure from the government of the United States, the Chilean Congress, keeping with tradition, conducted a runoff vote between the leading candidates, Allende and former president Jorge Alessandri. This procedure had previously been a near-formality, yet became quite fraught in 1970. After assurances of legality on Allende's part, the murder of the army commander-in-chief, General René Schneider, and Frei's refusal to form an alliance with Alessandri to oppose Allende – on the grounds that the Christian Democrats were a workers' party and could not make common cause with the oligarchs – Allende was chosen by a vote of 153 to 35.
The Popular Unity platform included the nationalization of U.S. interests in Chile's major copper mines, the advancement of workers' rights, deepening of the Chilean land reform, reorganization of the national economy into socialized, mixed, and private sectors, a foreign policy of "international solidarity" and national independence and a new institutional order (the "people's state" or "poder popular"), including the institution of a unicameral congress. Immediately after the election, the United States expressed its disapproval and raised a number of economic sanctions against Chile.
In addition, the CIA's website reports that the agency aided three different Chilean opposition groups during that time period and "sought to instigate a coup to prevent Allende from taking office". The action plans to prevent Allende from coming to power were known as Track I and Track II.
In the first year of Allende's term, the short-term economic results of Economics Minister Pedro Vuskovic's expansive monetary policy were unambiguously favorable: 12% industrial growth and an 8.6% increase in GDP, accompanied by major declines in inflation (down from 34.9% to 22.1%) and unemployment (down to 3.8%). Allende adopted measures including price freezes, wage increases, and tax reforms, which had the effect of increasing consumer spending and redistributing income downward. Joint public-private public works projects helped reduce unemployment. Much of the banking sector was nationalized. Many enterprises within the copper, coal, iron, nitrate, and steel industries were expropriated, nationalized, or subjected to state intervention. Industrial output increased sharply and unemployment fell during the administration's first year. However, these results were not sustainable and in 1972 the Chilean escudo had runaway inflation of 140%. An economic depression that had begun in 1967 peaked in 1972, exacerbated by capital flight, plummeting private investment, and withdrawal of bank deposits in response to Allende's socialist program. Production fell and unemployment rose. The combination of inflation and government-mandated price-fixing led to the rise of black markets in rice, beans, sugar, and flour, and a "disappearance" of such basic commodities from supermarket shelves.
Recognizing that U.S. intelligence forces were trying to destabilize his presidency through a variety of methods, the KGB offered financial assistance to the first democratically elected Marxist president. However, the reason behind the U.S. covert actions against Allende concerned not the spread of Marxism but fear over losing control of its investments. "By 1968, 20 percent of total U.S. foreign investment was tied up in Latin America...Mining companies had invested $1 billion over the previous fifty years in Chile's copper mining industry – the largest in the world – but they had sent $7.2 billion home." Part of the CIA's program involved a propaganda campaign that portrayed Allende as a would-be Soviet dictator. In fact, however, "the U.S.'s own intelligence reports showed that Allende posed no threat to democracy." Nevertheless, the Richard Nixon administration organized and inserted secret operatives in Chile, in order to quickly destabilize Allende's government.
In addition, Nixon gave instructions to make the Chilean economy "scream", and international financial pressure restricted economic credit to Chile. Simultaneously, the CIA funded opposition media, politicians, and organizations, helping to accelerate a campaign of domestic destabilization. By 1972, the economic progress of Allende's first year had been reversed, and the economy was in crisis. Political polarization increased, and large mobilizations of both pro- and anti-government groups became frequent, often leading to clashes.
By 1973, Chilean society had grown highly polarized, between strong opponents and equally strong supporters of Salvador Allende and his government. Military actions and movements, separate from the civilian authority, began to manifest in the countryside. The Tanquetazo was a failed military coup d'état attempted against Allende in June 1973.
In its "Agreement", on August 22, 1973, the Chamber of Deputies of Chile asserted that Chilean democracy had broken down and called for "redirecting government activity", to restore constitutional rule. Less than a month later, on September 11, 1973, the Chilean military deposed Allende, who shot himself in the head to avoid capture as the Presidential Palace was surrounded and bombed. Subsequently, rather than restore governmental authority to the civilian legislature, Augusto Pinochet exploited his role as Commander of the Army to seize total power and to establish himself at the head of a junta.
CIA involvement in the coup is documented. As early as the Church Committee Report (1975), publicly available documents have indicated that the CIA attempted to prevent Allende from taking office after he was elected in 1970; the CIA itself released documents in 2000 acknowledging this and that Pinochet was one of their favored alternatives to take power.
According to Vasili Mitrokhin and Christopher Andrew, the KGB and the Cuban Intelligence Directorate launched a campaign known as Operation TOUCAN. For instance, in 1976, the New York Times published 66 articles on human rights abuses in Chile and only 4 on Cambodia, where the communist Khmer Rouge killed some 1.5 million of the country's 7.5 million people.
Military dictatorship (1973–1990)
By early 1973, inflation had risen 600% under Allende's presidency. The crippled economy was further battered by prolonged and sometimes simultaneous strikes by physicians, teachers, students, truck owners, copper workers, and the small business class. A military coup overthrew Allende on September 11, 1973. As the armed forces bombarded the presidential palace (Palacio de La Moneda), Allende committed suicide. A military government, led by General Augusto Pinochet Ugarte, took over control of the country.
The first years of the regime were marked by human rights violations. The junta jailed, tortured, and executed thousands of Chileans. In October 1973, at least 72 people were murdered by the Caravan of Death. At least a thousand people were executed during Pinochet's first six months in office, and at least two thousand more were killed during the next sixteen years, as reported by the Rettig Report. At least 29,000 were imprisoned and tortured. According to the Latin American Institute on Mental Health and Human Rights (ILAS), "situations of extreme trauma" affected about 200,000 persons; this figure includes individuals killed, tortured or exiled, and their immediate families. About 30,000 left the country.
The four-man junta headed by General Augusto Pinochet abolished civil liberties, dissolved the national congress, banned union activities, prohibited strikes and collective bargaining, and erased the Allende administration's agrarian and economic reforms.
The junta embarked on a radical program of liberalization, deregulation and privatization, slashing tariffs as well as government welfare programs and deficits. Economic reforms were drafted by a group of technocrats who became known as the Chicago Boys because many of them had been trained or influenced by University of Chicago professors. Under these new policies, the rate of inflation dropped.
A new constitution was approved on September 11, 1980, in a plebiscite characterized by the absence of registration lists, and General Pinochet became president of the republic for an 8-year term.
In 1982–1983 Chile witnessed a severe economic crisis with a surge in unemployment and a meltdown of the financial sector. Sixteen out of 50 financial institutions faced bankruptcy. In 1982 the two biggest banks were nationalized to prevent an even worse credit crunch. In 1983 another five banks were nationalized and two banks had to be put under government supervision. The central bank took over foreign debts. Critics ridiculed the economic policy of the Chicago Boys as the "Chicago way to socialism".
After the economic crisis, Hernán Büchi became Minister of Finance from 1985 to 1989, introducing a more pragmatic economic policy. He allowed the peso to float and reinstated restrictions on the movement of capital in and out of the country. He introduced banking regulations and simplified and reduced the corporate tax. Chile went ahead with privatizations, including public utilities, plus the re-privatization of companies that had reverted to government control during the 1982–1983 crisis. From 1984 to 1990, Chile's gross domestic product grew by an annual average of 5.9%, the fastest on the continent. Chile developed a strong export economy, including the export of fruits and vegetables to the Northern Hemisphere when they were out of season, which commanded high prices.
The composition of the military junta began to change during the late 1970s. Due to problems with Pinochet, air force General Gustavo Leigh was expelled from the junta in 1978 and replaced by General Fernando Matthei. In the late 1980s, the government gradually permitted greater freedom of assembly, speech, and association, including trade union and political activity. Due to the Caso Degollados ("slit throats case"), in which three Communist Party members were assassinated, César Mendoza, a member of the junta since 1973 and representative of the Carabineros, resigned in 1985 and was replaced by Rodolfo Stange. The next year, Carmen Gloria Quintana was set on fire and severely burned in what became known as the Caso Quemado ("Burnt Alive case").
Chile's constitution established that in 1988 there would be another plebiscite in which the voters would accept or reject a single candidate proposed by the Military Junta. Pinochet was, as expected, the candidate proposed, but was denied a second 8-year term by 54.5% of the vote.
Transition to democracy (1990–)
Aylwin, Frei, and Lagos
Chileans elected a new president and the majority of members of a two-chamber congress on December 14, 1989. Christian Democrat Patricio Aylwin, the candidate of a coalition of 17 political parties called the Concertación, received an absolute majority of votes (55%). President Aylwin served from 1990 to 1994, in what was considered a transition period. In 1990 Aylwin created the National Commission for Truth and Reconciliation, which released the Rettig Report on human rights violations committed during the military rule in February 1991.
This report counted 2,279 cases of "disappearances" that could be proved and registered, although the very nature of "disappearances" made such investigations very difficult. The same problem arose several years later with the Valech Report, released in 2004, which counted almost 30,000 victims of torture on the basis of testimonies from 35,000 persons.
In December 1993, Christian Democrat Eduardo Frei Ruiz-Tagle, the son of previous president Eduardo Frei Montalva, led the Concertación coalition to victory with an absolute majority of votes (58%). Frei Ruiz-Tagle was succeeded in 2000 by Socialist Ricardo Lagos, who won the presidency in an unprecedented runoff election against Joaquín Lavín of the rightist Alliance for Chile by a very narrow margin of fewer than 200,000 votes (51.32%).
In 1998, Pinochet travelled to London for back surgery. But under orders of Spanish judge Baltasar Garzón, he was arrested there, attracting worldwide attention, not only because of the history of Chile and South America, but also because this was one of the first arrests of a former president based on the universal jurisdiction principle. Pinochet tried to defend himself by referring to the State Immunity Act of 1978, an argument rejected by the British courts. However, UK Home Secretary Jack Straw took responsibility for releasing him on medical grounds, and refused to extradite him to Spain. Pinochet returned to Chile in March 2000. Upon descending from the plane in his wheelchair, he stood up and saluted the cheering crowd of supporters, including an army band playing his favorite military march tunes, that was awaiting him at the airport in Santiago. President Ricardo Lagos later commented that the retired general's televised arrival had damaged the image of Chile, while thousands demonstrated against him.
Bachelet and Piñera
The Concertación coalition continued to dominate Chilean politics for the next two decades. In January 2006 Chileans elected their first female president, Michelle Bachelet, of the Socialist Party. She was sworn in on March 11, 2006, extending the Concertación coalition's governance for another four years.
In 2002 Chile signed an association agreement with the European Union (comprising a free trade agreement and political and cultural agreements); in 2003, an extensive free trade agreement with the United States; and in 2004, one with South Korea, expecting a boom in imports and exports of local produce and aiming to become a regional trade hub. Continuing the coalition's free trade strategy, in August 2006 President Bachelet promulgated a free trade agreement with China (signed under the previous administration of Ricardo Lagos), the first Chinese free trade agreement with a Latin American nation; similar deals with Japan and India were promulgated in August 2007. In October 2006, Bachelet promulgated a multilateral trade deal with New Zealand, Singapore and Brunei, the Trans-Pacific Strategic Economic Partnership (P4), also signed under Lagos' presidency. Regionally, she signed bilateral free trade agreements with Panama, Peru and Colombia.
After 20 years, Chile went in a new direction with the victory of the center-right Sebastián Piñera in the Chilean presidential election of 2009–2010, defeating former President Eduardo Frei in the runoff.
On 27 February 2010, Chile was struck by a magnitude 8.8 Mw earthquake, the fifth largest ever recorded at the time. More than 500 people died (most from the ensuing tsunami) and over a million people lost their homes. The earthquake was also followed by multiple aftershocks. Initial damage estimates were in the range of US$15–30 billion, around 10 to 15 percent of Chile's real gross domestic product.
Chile achieved global recognition for the successful rescue of 33 trapped miners in 2010. On 5 August 2010, the access tunnel collapsed at the San José copper and gold mine in the Atacama Desert near Copiapó in northern Chile, trapping 33 men below ground. A rescue effort organized by the Chilean government located the miners 17 days later. All 33 men were brought to the surface two months later on 13 October 2010 over a period of almost 24 hours, an effort that was carried on live television around the world.
Despite good macroeconomic indicators, there was increased social dissatisfaction, focused on demands for better and fairer education, culminating in massive protests demanding more democratic and equitable institutions. Approval of Piñera's administration fell sharply and did not recover.
In 2013, Bachelet, a Social Democrat, was elected president again, seeking to make the structural changes that society had demanded in recent years, including education reform, tax reform, same-sex civil unions and a definitive end to the binomial electoral system, with the aim of furthering equality and ending what remained of the dictatorship. In 2015 a series of corruption scandals (most notably the Penta case and the Caval case) became public, threatening the credibility of the political and business class.
In December 2017, Sebastián Piñera was elected president of Chile for a second term. In the first round he received 36% of the vote, the highest share among all eight candidates. In the runoff on 17 December, Piñera faced Alejandro Guillier, a television news anchor who represented Bachelet's New Majority (Nueva Mayoría) coalition, and won with 54% of the vote.
Estallido Social and Constitutional Referendum
In October 2019 there were violent protests over the cost of living and inequality, resulting in Piñera declaring a state of emergency. On 15 November, most of the political parties represented in the National Congress signed an agreement to call a national referendum in April 2020 on the creation of a new constitution, but the COVID-19 pandemic postponed the vote, while Chile was one of the hardest-hit nations in the Americas as of May 2020. On October 25, 2020, Chileans voted 78.28 percent in favor of a new constitution, while 21.72 percent rejected the change; voter turnout was 51 percent. A second vote, postponed from April to 15–16 May 2021, elected the 155 Chileans who would form the convention charged with drafting the new constitution.
On 19 December 2021, the leftist candidate Gabriel Boric, a 35-year-old former student protest leader, won Chile's presidential election to become the country's youngest ever leader, defeating the right-wing Pinochetist and leader of the Chilean Republican Party José Antonio Kast in the most polarizing election since democracy was restored. The center-left and center-right coalitions that had alternated in power over the previous 32 years (the ex-Concertación and Chile Vamos) finished fourth and fifth in the presidential election.
Gabriel Boric presidency (2022- )
On 11 March 2022, Gabriel Boric was sworn in as president, succeeding outgoing President Sebastián Piñera. Of the 24 members of Boric's female-majority cabinet, 14 are women.
On 4 September 2022, voters overwhelmingly rejected the proposed new constitution in a referendum; the draft had been put forward by the constitutional convention and strongly backed by President Boric. Prior to the rejection, polls had identified the proposal's constitutional plurinationalism as a particularly divisive issue in Chile.
See also
Arauco War
Chincha Islands War
COVID-19 pandemic in Chile
Economic history of Chile
List of presidents of Chile
Miracle of Chile
Occupation of the Araucanía
Politics of Chile
Timeline of Chilean history
U.S. intervention in Chile
War of the Confederation
War of the Pacific
General:
History of the Americas
History of Latin America
History of South America
Spanish colonization of the Americas
References
Further reading
In English
Antezana-Pernet, Corinne. "Peace in the World and Democracy at Home: The Chilean Women's Movement in the 1940s" in Latin America in the 1940s, David Rock, ed. Berkeley and Los Angeles: University of California Press 1994, pp. 166–186.
Bergquist, Charles W. Labor in Latin America: Comparative Essays on Chile, Argentina, Venezuela, and Colombia. Stanford: Stanford University Press 1986.
Burr, Robert N. By Reason or Force: Chile and the Balancing Power of South America 1830–1905. Berkeley and Los Angeles: University of California Press 1965.
Collier, Simon. Ideas and Politics of Chilean Independence, 1808–1833. New York: Cambridge University Press 1967.
Drake, Paul. Socialism and Populism in Chile, 1932–1952. Urbana: University of Illinois Press 1978.
Drake, Paul. "International Crises and Popular Movements in Latin America: Chile and Peru from the Great Depression to the Cold War," in Latin America in the 1940s, David Rock, ed. Berkeley and Los Angeles: University of California Press 1994, 109–140.
Harvey, Robert. Liberators: Latin America's Struggle for Independence, 1810–1830. London: John Murray 2000.
Klubock, Thomas. La Frontera: Forests and Ecological Conflict in Chile's Frontier Territory. Durham: Duke University Press 2014.
Mallon, Florencia. Courage Tastes of Blood: The Mapuche Community of Nicolás Ailío and the Chilean State, 1906–2001. Durham: Duke University Press 2005.
Pike, Frederick B. Chile and the United States, 1880–1962: The Emergence of Chile's Social Crisis and challenge to United States Diplomacy. University of Notre Dame Press 1963.
Stern, Steve J. Battling for Hearts and Minds: Memory Struggles in Pinochet's Chile, 1973–1988. Durham: Duke University Press 2006.
In Spanish
Cronología de Chile in the Spanish-language Wikipedia.
Díaz, J.; Lüders, R. and Wagner, G. (2016). Chile 1810–2010. La República en Cifras. Historical Statistics. (Santiago: Ediciones Universidad Católica de Chile); a compendium of indicators, from macroeconomic aggregates to demographic trends and social policies, focused on economic and social history.
External links
History of Chile (book by Chilean historian Luis Galdames)
Economy of Chile
The economy of Chile is a market economy and a high-income economy as ranked by the World Bank. The country is considered one of South America's most prosperous nations, leading the region in competitiveness, income per capita, globalization, economic freedom, and low perception of corruption. Although Chile has high economic inequality, as measured by the Gini index, it is close to the regional mean.
In 2006, Chile became the country with the highest nominal GDP per capita in Latin America. In May 2010 Chile became the first South American country to join the OECD. Tax revenues, altogether 20.2% of GDP in 2013, were the second lowest among the 34 OECD countries, and the lowest in 2010. Chile has an inequality-adjusted human development index of 0.722, compared with 0.720, 0.710 and 0.576 for Argentina, Uruguay and Brazil, respectively. In 2017, only 0.7% of the population lived on less than US$1.90 a day.
The Global Competitiveness Report for 2009–2010 ranked Chile as the 30th most competitive country in the world and the first in Latin America, well above Brazil (56th), Mexico (60th), and Argentina (85th); it has since fallen out of the top 30. The ease of doing business index, created by the World Bank, listed Chile as 34th in the world as of 2014, 41st for 2015, and 48th as of 2016. The privatized national pension system (AFP) has encouraged domestic investment and contributed to an estimated total domestic savings rate of approximately 21% of GDP.
History
After the Spanish arrival in the 16th century, the Chilean economy came to revolve around autarkic estates called fundos and around the army engaged in the Arauco War. During early colonial times there were gold exports to Peru from placer deposits, which were soon depleted. Trade restrictions and monopolies established by the Spanish crown are credited with having held back economic development for much of the colonial period. As an effect of these restrictions, the country incorporated very few new crops and animal breeds after the initial conquest. Other sectors held back by the restrictions were the wine and mining industries. The Bourbon reforms in the 18th century eased many monopolies and trade restrictions.
In the 1830s Chile consolidated under the ideas of Diego Portales as a stable state open to foreign trade. Foreign investment in Chile grew over the 19th century. After the War of the Pacific the Chilean treasury grew by 900%. The League of Nations labeled Chile the country hardest hit by the Great Depression because 80% of government revenue came from exports of copper and nitrates, which were in low demand. After the Great Depression Chilean economic policies changed toward import substitution industrialization and the Production Development Corporation was established.
Under the influence of the Chicago Boys, the Pinochet regime made Chile a leading country in establishing neoliberal policies. These policies allowed large corporations to consolidate their power over the Chilean economy, leading to long-term economic growth.
The crisis of 1982 caused the appointment of Hernán Büchi as minister of finance and a sharp revision of economic policy. Despite a general selling of state property, and contrary to neoliberal prescriptions, the regime retained the lucrative state-owned mining company Codelco, which accounts for about 30% of government income.
According to the CIA World Factbook, during the early 1990s, Chile's reputation as a role model for economic reform was strengthened when the democratic government of Patricio Aylwin, who took over from the military in 1990, deepened the economic reform initiated by the military government. The Aylwin government departed significantly from the neoliberal doctrine of the Chicago boys, as evidenced by higher government expenditure on social programs to tackle poverty and poor quality housing. Growth in real GDP averaged 8% from 1991 to 1997, but fell to half that level in 1998 because of tight monetary policies (implemented to keep the current account deficit in check) and lower exports due to the Asian financial crisis. Chile's economy has since recovered and has seen growth rates of 5–7% over the past several years.
After a decade of impressive growth rates, Chile began to experience a moderate economic downturn in 1999, brought on by unfavorable global economic conditions related to the Asian financial crisis, which began in 1997. The economy remained sluggish until 2003, when it began to show clear signs of recovery, achieving 4.0% real GDP growth. The Chilean economy finished 2004 with growth of 6.0%. Real GDP growth reached 5.7% in 2005 before falling back to 4.0% in 2006. GDP expanded by 5.1% in 2007.
Sectors
During 2012, the largest sectors by GDP were mining (mainly copper), business services, personal services, manufacturing and wholesale and retail trade. Mining also represented 59.5% of exports in the period, while the manufacturing sector accounted for 34% of exports, concentrated mainly in food products, chemicals and pulp, paper and others.
Agriculture
Chile is one of the 5 largest world producers of cherry and cranberry, and one of the 10 largest world producers of grape, apple, kiwi, peach, plum and hazelnut, focusing on exporting high-value fruits.
In 2018, Chile was the 9th largest producer of grape in the world, with 2 million tons produced; the 10th largest producer of apple in the world, with 1.7 million tons produced; and the 6th largest producer of kiwi in the world, with 230 thousand tons produced, in addition to producing 1.4 million tons of wheat, 1.1 million tons of maize, 1.1 million tons of potato, 951 thousand tons of tomato, 571 thousand tons of oats, 368 thousand tons of onion, 319 thousand tons of peach, 280 thousand tons of pear, 192 thousand tons of rice, 170 thousand tons of barley, 155 thousand tons of cherry, 151 thousand tons of lemon, 118 thousand tons of tangerine, 113 thousand tons of orange, 110 thousand tons of olives, 106 thousand tons of cranberry, in addition to smaller productions of other agricultural products.
Agriculture and allied sectors like forestry, logging and fishing accounted for only 4.9% of GDP as of 2007 and employed 13.6% of the country's labor force. Major agricultural products of Chile include grapes, apples, pears, onions, wheat, corn, oats, peaches, garlic, asparagus, beans, beef, poultry, wool, fish and timber.
Chile's position in the Southern Hemisphere leads to an agricultural season cycle opposite to those of the principal consumer markets, primarily located in the Northern Hemisphere. Chile's extreme north–south orientation produces seven different macro-regions distinguished by climate and geographical features, which allows the country to stagger harvests and results in extended harvesting seasons. However, the mountainous landscape of Chile limits the extent and intensity of agriculture, so that arable land corresponds to only 2.62% of the total territory. Through Chile's trade agreements, its agricultural products have gained access to markets controlling 77% of the world's GDP, and by approximately 2012 around 74% of Chilean agribusiness exports were expected to be duty-free.
Chile's principal growing region and agricultural heartland is the Central Valley, delimited by the Chilean Coast Range to the west, the Andes to the east, the Aconcagua River to the north and the Bío-Bío River to the south. In the northern half of Chile cultivation is highly dependent on irrigation. South of the Central Valley, cultivation is gradually replaced by aquaculture, silviculture, and sheep and cattle farming.
Salmon
Chile is the second largest producer of salmon in the world. As of August 2007, Chile's share of worldwide salmon industry sales was 38.2%, rising from just 10% in 1990. The average growth rate of the industry for the 20 years between 1984 and 2004 was 42% per year. The presence of large foreign firms in the salmon industry has brought what has probably contributed most to Chile's burgeoning salmon production: technology. Technology transfer has allowed Chile to build its global competitiveness and innovation and has led to the expansion of production as well as to an increase in average firm size in the industry. In November 2018, the Chinese company Joyvio Group (Legend Holdings) bought the Chilean salmon producer Australis Seafoods for $880 million, thus gaining control over 30% of all Chilean salmon exports.
Forestry
The Chilean forestry industry grew to comprise 13% of the country's total exports in 2005, making it one of the largest export sectors for Chile. Radiata Pine and Eucalyptus comprise the vast majority of Chile's forestry exports. Within the forestry sector, the largest contributor to total production is pulp, followed by wood-based panels and lumber. Due to popular and increasing demands for Chile's forestry products, the government is currently focusing on increasing the already vast acreage of Chile's Pine and Eucalyptus plantations as well as opening new industrial plants.
Wine
Chile's unique geography and climate make it ideal for winegrowing and the country has made the top ten list of wine producers many times in the last few decades.
The popularity of Chilean wine has been attributed not just to the quantity produced but also to increasing levels of quality. The combination of quantity and quality allows Chile to export excellent wines at reasonable prices to the international market.
Mining
The mining sector in Chile is one of the pillars of the Chilean economy. The Chilean government strongly supports foreign investment in the sector and has modified its mining industry laws and regulations to create a favorable investing environment for foreigners. Thanks to a large amount of copper resources, accommodating legislation and an unregulated investment environment, Chile has become one of the world's main copper producers, with almost 30% of global annual copper output.
In addition to copper, Chile was, in 2019, the world's largest producer of iodine and rhenium, the second largest producer of lithium and molybdenum, the sixth largest producer of silver, the seventh largest producer of salt, the eighth largest producer of potash, and the thirteenth largest producer of both sulfur and iron ore. The country also has considerable gold production: between 2006 and 2017, annual output ranged from 35.9 tonnes in 2017 to 51.3 tonnes in 2013.
Services
The service sector in Chile has grown fast and consistently in recent decades, reinforced by the rapid development of communication and information technology, access to education and an increase in specialist skills and knowledge among the workforce.
Chilean foreign policy has recognized the importance of the tertiary sector or service sector to the economy, boosting its international liberalization and leading to the signing of several free trade area agreements.
Chilean service exportation consists mainly of maritime and aeronautical services, tourism, retail (department stores, supermarkets, and shopping centers), engineering and construction services, informatics, health and education.
Chile ranked first among Latin American countries (and No. 32 worldwide) in Adecco's 2019 Global Talent Competitiveness Index (GTCI).
Finance
Chile's financial sector has grown quickly in recent years, with a banking reform law approved in 1997 that broadened the scope of permissible foreign activity for Chilean banks. The Chilean government implemented a further liberalization of capital markets in 2001, and additional legislation proposing further liberalization is pending. Over the last ten years, people who live in Chile have enjoyed the introduction of new financial tools such as home equity loans, currency futures and options, factoring, leasing, and debit cards. The introduction of these new products has also been accompanied by increased use of traditional instruments such as loans and credit cards. Chile's private pension system, with assets worth roughly $70 billion at the end of 2006, has been an important source of investment capital for the capital market. However, by 2009 it was reported that $21 billion had been lost from the pension system to the global financial crisis.
Tourism
Tourism in Chile has experienced sustained growth over the last few decades. Chile received about 2.25 million foreign visitors in 2006, rising to 2.50 million in 2007.
The percentages of foreign tourist arrivals by land, air and sea were, respectively, 55.3%, 40.5% and 4.2% for that year. The two main gateways for international tourists visiting Chile are Arturo Merino Benítez International Airport and Paso Los Libertadores.
Chile has a great diversity of natural landscapes, from the Mars-like landscapes of the hyperarid Atacama Desert to the glacier-fed fjords of the Chilean Patagonia, passing by the winelands backdropped by the Andes of the Central Valley and the old-growth forests of the Lakes District. Easter Island and Juan Fernández Archipelago, including Robinson Crusoe Island, are also major attractions.
Many of the most visited attractions in Chile are protected areas. The extensive Chilean protected areas system includes 32 protected parks, 48 natural reserves and 15 natural monuments.
Economic policies
According to the CIA World Factbook, Chile's "sound economic policies", maintained consistently since the 1980s, "have contributed to steady economic growth in Chile and have more than halved poverty rates." The 1973–90 military government sold many state-owned companies, and the three democratic governments since 1990 have implemented export promotion policies and continued privatization, though at a slower pace. The government's role in the economy is mostly limited to regulation, although the state continues to operate copper giant CODELCO and a few other enterprises (there is one state-run bank).
Under the compulsory private pension system, most formal sector employees pay 10% of their salaries into privately managed funds.
As of 2006, Chile invested 0.6% of its annual GDP in research and development (R&D). Even then, two-thirds of that was government spending. Beyond its general economic and political stability, the government has also encouraged the use of Chile as an "investment platform" for multinational corporations planning to operate in the region. Chile's approach to foreign direct investment is codified in the country's Foreign Investment Law, which gives foreign investors the same treatment as Chileans. Registration is reported to be simple and transparent, and foreign investors are guaranteed access to the official foreign exchange market to repatriate their profits and capital.
Faced with the financial crisis of 2007–2008, the government announced a $4 billion economic stimulus plan to spur employment and growth, and despite the global financial crisis, aimed for GDP growth of between 2 and 3 percent in 2009. Nonetheless, economic analysts disagreed with government estimates and predicted economic growth at a median of 1.5 percent. According to the CIA World FactBook, GDP contracted by an estimated 1.7% in 2009.
The Chilean Government has formed a Council on Innovation and Competition, which is tasked with identifying new sectors and industries to promote. It is hoped that this, combined with some tax reforms to encourage domestic and foreign investment in research and development, will bring in additional FDI to new parts of the economy.
According to The Heritage Foundation's Index of Economic Freedom in 2012, Chile has the strongest private property rights in Latin America, scoring 90 on a scale of 100.
Chile's AA- S&P credit rating is the highest in Latin America, while Fitch Ratings places the country one step below, in A+.
There are three main ways for Chilean firms to raise funds abroad: bank loans, issuance of bonds, and the selling of stocks on U.S. markets through American Depository Receipts (ADRs). Nearly all of the funds raised through these means go to finance domestic Chilean investment. In 2006, the Government of Chile ran a surplus of $11.3 billion, equal to almost 8% of GDP. The Government of Chile continues to pay down its foreign debt, with public debt only 3.9% of GDP at the end of 2006.
Fiscal policy
One of the central features of Chile's fiscal policy has been its counter-cyclical nature. This has been facilitated by the voluntary application since 2001 of a structural balance policy based on the commitment to an announced goal of a medium-term structural balance as a percentage of GDP. The structural balance nets out the effect of the economic cycle (including copper price volatility) on fiscal revenues and constrains expenditures to a correspondingly consistent level. In practice, this means that expenditures rise when activity is low and decrease in booms. The target was 1% of GDP between 2001 and 2007; it was reduced to 0.5% in 2008 and then to 0% in 2009 in the wake of the global financial crisis. In 2005, key elements of this voluntary policy were incorporated into legislation through the Fiscal Responsibility Law (Law 20,128).
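In stylized form, the rule can be summarized in a single equation. The expression below is only a simplified sketch, not the official methodology of Chile's budget authorities (which involves additional adjustments and estimated elasticities); the symbols are illustrative.

$$
B^{s}_{t} \;=\; \underbrace{T_{t}\left(\frac{Y^{*}_{t}}{Y_{t}}\right)^{\varepsilon}}_{\text{cyclically adjusted tax revenue}} \;+\; \underbrace{C_{t}\,\frac{p^{*}_{t}}{p_{t}}}_{\text{copper revenue at the reference price}} \;-\; G_{t} \;=\; \bar{b}\,Y_{t}
$$

Here $T_t$ is non-copper tax revenue, $Y^{*}_{t}/Y_{t}$ the ratio of trend to actual output, $\varepsilon$ the elasticity of revenue with respect to the cycle, $C_t$ copper-related revenue, $p^{*}_{t}/p_{t}$ the ratio of the long-run reference copper price to the actual price, $G_t$ expenditure, and $\bar{b}$ the announced target (1% of GDP in 2001–2007, 0.5% in 2008, 0% in 2009). Because spending is set against cyclically adjusted rather than actual revenues, it can rise relative to actual collections when activity or copper prices are low and must fall back in booms.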
The Fiscal Responsibility Law also allowed for the creation of two sovereign wealth funds: the Pension Reserve Fund (PRF), to face increased expected old-age benefits liabilities, and the Economic and Social Stabilization Fund (ESSF), to stabilize fiscal spending by providing funds to finance fiscal deficits and debt amortization. By the end of 2012, they had respective market values of US$5.883 billion and US$14.998 billion.
The main taxes in Chile in terms of revenue collection are the value added tax (45.8% of total revenues in 2012) and the income tax (41.8% of total revenues in 2012). The value added tax is levied on sales of goods and services (including imports) at a rate of 19%, with a few exemptions. The income tax revenue comprises different taxes. While there is a corporate income tax of 20% on company profits (the First Category Tax), the system is ultimately designed to tax individuals. Therefore, corporate income taxes paid constitute a credit towards two personal income taxes: the Global Complementary Tax (in the case of residents) or the Additional Tax (in the case of non-residents). The Global Complementary Tax is payable by those that have different sources of income, while those receiving income solely from dependent work are subject to the Second Category Tax. Both taxes are equally progressive in statutory terms, with a top marginal rate of 40%. Income arising from corporate activity under the Global Complementary Tax only becomes payable when effectively distributed to the individual. There are also special sales taxes on alcohol and luxury goods, as well as specific taxes on tobacco and fuel. Other taxes include the inheritance tax and customs duties.
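As a rough illustration of how the First Category Tax operates as a credit against personal tax on distributed profits, here is a minimal sketch using hypothetical amounts; the flat 35% personal rate is an assumed bracket, not the actual progressive Global Complementary Tax schedule.

```python
# Hypothetical illustration of Chile's integrated income tax design: corporate tax
# paid (First Category Tax, 20%) is credited against the shareholder's personal tax
# on distributed profits. The 35% flat personal rate is an assumption; the real
# Global Complementary Tax is progressive, with a top marginal rate of 40%.

corporate_profit = 1_000.0
first_category_rate = 0.20
personal_marginal_rate = 0.35            # assumed bracket of the resident shareholder

first_category_tax = corporate_profit * first_category_rate     # 200.0, paid by the firm
gross_personal_tax = corporate_profit * personal_marginal_rate  # 350.0, on the distributed profit
personal_tax_due = gross_personal_tax - first_category_tax      # 150.0, after the credit

total_tax = first_category_tax + personal_tax_due
print(personal_tax_due, total_tax)  # 150.0 350.0 -> the overall burden equals the personal rate
```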
In 2012, general government expenditure reached 21.5% of GDP, while revenues were equivalent to 22% of GDP. Gross financial debt amounted to 12.2% of GDP, while in net terms it was −6.9% of GDP, both well below OECD averages.
Monetary policy
Chile's monetary authority is the Central Bank of Chile (CBoC). The CBoC pursues an inflation target of 3%, with a tolerance range of 1% (below or above). Inflation has followed a relatively stable trajectory since the year 2000, remaining under 10%, despite the temporary surge of some inflationary pressures in the year 2008. The Chilean peso's rapid appreciation against the U.S. dollar in recent years has helped dampen inflation. Most wage settlements and loans are indexed, reducing inflation's volatility.
The CBoC is granted autonomous status by Chile's National Constitution, providing credibility and stability beyond the political cycle. According to the Basic Constitutional Act of the Central Bank of Chile (Law 18,840), its main objectives are to safeguard "the stability of the currency and the normal functioning of internal and external payments". To meet these objectives, the CBoC is enabled to use monetary and foreign exchange policy instruments, along with some discretion on financial regulation. In practice, the CBoC monetary policy is guided by an inflation targeting regime, while the foreign exchange policy is led by a floating exchange rate and, although unusual, the bank reserves the right to intervene in the foreign exchange markets.
Trade policy
Chile is strongly committed to free trade and has welcomed large amounts of foreign investment. Chile has signed free trade agreements (FTAs) with a network of countries, including an FTA with the United States that was signed in 2003 and implemented in January 2004.
Chile unilaterally lowered its across-the-board import tariff for all countries with which it does not have a trade agreement to 6% in 2003. Higher effective tariffs are charged only on imports of wheat, wheat flour, and sugar as a result of a system of import price bands. The price bands were ruled inconsistent with Chile's World Trade Organization (WTO) obligations in 2002, and the government has introduced legislation to modify them. Under the terms of the U.S.–Chile FTA, the price bands will be completely phased out for U.S. imports of wheat, wheat flour, and sugar within 12 years.
Chile is a strong proponent of pressing ahead on negotiations for a Free Trade Area of the Americas (FTAA) and is active in the WTO's Doha round of negotiations, principally through its membership in the G-20 and Cairns Group.
Most imports are not subject to the full statutory tariff, due to the extensive preferences negotiated outside the multilateral system through Regional Trade Agreements (RTAs). According to the most recent WTO Trade Policy Review of Chile (October 2009), Chile had signed 21 RTAs with 57 countries, and the number has continued to rise in recent years.
More recently, Chile has also been an active participant in deeper plurilateral trade negotiations. Notably, Chile is currently in talks with eleven other economies in the Trans-Pacific Partnership (TPP), a proposed agreement that would stem from the existing P-4 Agreement between Brunei, Chile, New Zealand and Singapore. Chile has signed some form of bilateral or plurilateral agreement with each of the parties to the TPP, although with differing degrees of integration.
Chile is also a party in conversations to establish the Pacific Alliance along with Peru, Mexico and Colombia.
Foreign trade
2006 was a record year for Chilean trade. Total trade registered a 31% increase over 2005. During 2006, exports of goods and services totaled US$58 billion, an increase of 41%. This figure was somewhat distorted by the skyrocketing price of copper. In 2006, copper exports reached a historical high of US$33.3 billion. Imports totaled US$35 billion, an increase of 17% compared to the previous year. Chile thus recorded a positive trade balance of US$23 billion in 2006.
The main destinations for Chilean exports were the Americas (US$39 billion), Asia (US$27.8 billion) and Europe (US$22.2 billion). Seen as shares of Chile's export markets, 42% of exports went to the Americas, 30% to Asia and 24% to Europe. Within Chile's diversified network of trade relationships, its most important partner remained the United States. Total trade with the U.S. was US$14.8 billion in 2006. Since the U.S.–Chile Free Trade Agreement went into effect on 1 January 2004, U.S.–Chilean trade has increased by 154%. Internal Government of Chile figures show that even when factoring out inflation and the recent high price of copper, bilateral trade between the U.S. and Chile has grown over 60% since then.
Total trade with Europe also grew in 2006, expanding by 42%. The Netherlands and Italy were Chile's main European trading partners. Total trade with Asia also grew significantly, at nearly 31%. Trade with Korea and Japan grew significantly, but China remained Chile's most important trading partner in Asia. Chile's total trade with China reached US$8.8 billion in 2006, representing nearly 66% of the value of its trade relationship with Asia.
The growth of exports in 2006 was mainly caused by a strong increase in sales to the United States, the Netherlands, and Japan. These three markets alone accounted for an additional US$5.5 billion worth of Chilean exports. Chilean exports to the United States totaled US$9.3 billion, representing a 37.7% increase compared to 2005 (US$6.7 billion). Exports to the European Union were US$15.4 billion, a 63.7% increase compared to 2005 (US$9.4 billion). Exports to Asia increased from US$15.2 billion in 2005 to US$19.7 billion in 2006, a 29.9% increase.
During 2006, Chile imported US$26 billion from the Americas, representing 54% of total imports, followed by Asia at 22%, and Europe at 16%. Mercosur members were the main suppliers of imports to Chile at US$9.1 billion, followed by the United States with US$5.5 billion and the European Union with US$5.2 billion. From Asia, China was the most important exporter to Chile, with goods valued at US$3.6 billion. Year-on-year growth in imports was especially strong from a number of countries – Ecuador (123.9%), Thailand (72.1%), Korea (52.6%), and China (36.9%).
Chile's overall trade profile has traditionally been dependent upon copper exports. The state-owned firm CODELCO is the world's largest copper-producing company, with recorded copper reserves sufficient for 200 years of production. Chile has made an effort to expand nontraditional exports. The most important non-mineral exports are forestry and wood products, fresh fruit and processed food, fishmeal and seafood, and wine.
Trade agreements
Over the last several years, Chile has signed FTAs with the European Union, South Korea, New Zealand, Singapore, Brunei, China, and Japan. It reached a partial trade agreement with India in 2005 and began negotiations for a full-fledged FTA with India in 2006. Chile conducted trade negotiations in 2007 with Australia, Malaysia, and Thailand, as well as with China to expand an existing agreement beyond just trade in goods. Chile concluded FTA negotiations with Australia and an expanded agreement with China in 2008. The members of the P4 (Chile, Singapore, New Zealand, and Brunei) also plan to conclude a chapter on finance and investment in 2008.
Successive Chilean governments have actively pursued trade-liberalizing agreements. During the 1990s, Chile signed free trade agreements (FTA) with Canada, Mexico, and Central America. Chile also concluded preferential trade agreements with Venezuela, Colombia, and Ecuador. An association agreement with Mercosur (Argentina, Brazil, Paraguay, and Uruguay) went into effect in October 1996. Continuing its export-oriented development strategy, Chile completed landmark free trade agreements in 2002 with the European Union and South Korea. Chile, as a member of the Asia-Pacific Economic Cooperation (APEC) organization, is seeking to boost commercial ties to Asian markets. To that end, it has signed trade agreements in recent years with New Zealand, Singapore, Brunei, India, China, and most recently Japan. In 2007, Chile held trade negotiations with Australia, Thailand, Malaysia, and China. In 2008, Chile hopes to conclude an FTA with Australia, and finalize an expanded agreement (covering trade in services and investment) with China. The P4 (Chile, Singapore, New Zealand, and Brunei) also plan to expand ties through adding a finance and investment chapter to the existing P4 agreement. Chile's trade talks with Malaysia and Thailand are also scheduled to continue in 2008.
After two years of negotiations, the United States and Chile signed an agreement in June 2003 that will lead to completely duty-free bilateral trade within 12 years. The U.S.-Chile FTA entered into force on 1 January 2004, following approval by the U.S. and Chilean congresses. The FTA has greatly expanded U.S.-Chilean trade ties, with total bilateral trade jumping by 154% during the FTA's first three years. On 1 January 2014, the Chile-Vietnam Free Trade Agreement officially took effect.
Issues
Unemployment hovered at 8–10% after the start of the economic slowdown in 1999, above the 7% average for the 1990s. Unemployment finally dipped to 7.8% in 2006, and continued to fall in 2007, averaging 6.8% monthly (up to August). Wages have risen faster than inflation as a result of higher productivity, boosting national living standards. The percentage of Chileans with household incomes below the poverty line – defined as twice the cost of satisfying a person's minimal nutritional needs – fell from 45.1% in 1987 to 11.7% in 2015, according to government polls. Critics in Chile, however, argue that poverty figures are considerably higher than those officially published; until 2016, the government defined the poverty line based on an outdated 1987 household consumption poll, instead of more recent polls from 1997 or 2007. According to critics who use data from the 1997 poll, the poverty rate goes up to 29%; a study published in 2017 claims that it reaches 26%. Using the relative yardstick favoured in many European countries, 27% of Chileans would be poor, according to Juan Carlos Feres of the ECLAC. Starting in 2016, a new Multidimensional Poverty Index is also used, which reached 20.9% using 2015 data.
In 2000, the richest 20% of the Chilean population earned 61.0% of total income, while the poorest 20% earned 3.3%. Chile's Gini coefficient in 2003 (53.8) had changed only slightly from its 1995 value (56.4). In 2005, the poorest 10% of Chileans received 1.2% of GNP (compared with 1.4% in 2000), while the richest 10% received 47% (46% in 2000).
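For readers unfamiliar with the statistic, the sketch below shows roughly how a Gini coefficient can be approximated from grouped income shares; the quintile shares used are hypothetical and only loosely echo the figures above, and official estimates are computed from full household survey microdata rather than five aggregated points.

```python
# Rough illustration of the Gini coefficient, approximated from income shares of
# equal-sized population groups. The quintile shares below are hypothetical and
# only loosely echo the distribution described in the text; grouped data slightly
# understate the Gini computed from full survey microdata.

def gini_from_shares(shares):
    """Approximate Gini from the income shares of equal-sized groups (poorest first)."""
    n = len(shares)
    cumulative = 0.0
    lorenz_area = 0.0
    for share in shares:
        # trapezoidal area under the Lorenz curve over this group's width (1/n)
        lorenz_area += (cumulative + (cumulative + share)) / 2 / n
        cumulative += share
    return 1 - 2 * lorenz_area

quintile_shares = [0.033, 0.07, 0.11, 0.177, 0.61]  # poorest 20% ... richest 20%
print(round(gini_from_shares(quintile_shares), 3))  # 0.504, i.e. roughly 50 on the 0-100 scale
```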
Regarding the 2012 census, assessments have shown mixed results. An initial evaluation by a panel of independent domestic experts, released in August 2013, put the omission rate at 9.3%, three times as high as in other recent censuses in the region, and recommended annulling the census and holding a new one in 2015. The government sought an assessment by international experts before making a final decision. The team, which included three experts representing the World Bank and the E.U. Statistics Commission, found "no basis for doubting the usability of the census data for most, if perhaps not all, of the customary uses" and recommended its release, subject to the elimination of the imputation of housing units not observed on the ground during the enumeration and the concurrent publication of a methodological and administrative report.
Social inequalities
By 2021, the combined wealth of Chile's billionaires represented 16.1% of the country's gross domestic product (GDP).
Historians generally explain the origin of the social gap by tracing it back to colonial times, when most land was divided between Spaniards and their descendants. This gave rise to the hacienda, in which society was divided between owners, employees, tenants and workers. Since this agrarian inequality, the concentration of wealth has spread to other economic sectors that exploit natural resources, such as mining. In more recent history, social inequality deepened in the 1970s and 1980s under Augusto Pinochet's regime, with the privatization of public enterprises in favor of large family fortunes, the repression of trade unions and the rejection of the welfare state. As social mobility is very low in Chile, social status is often passed down from generation to generation.
Statistics
Main economic indicators
The following table shows the main economic indicators in 1980–2021 (with IMF staff estimates in 2022–2027). Inflation below 5% is in green.
GDP composition
Main macroeconomic aggregates of GDP.
Note: Data are preliminary. Source: Cuentas Nacionales de Chile – Evolución de la actividad económica en el año 2015 (p. 29), Central Bank of Chile, accessed on 23 March 2016.
GDP by sector
Gross domestic product by sector of the economy.
Note: 2011 data are preliminary. Source: Cuentas Nacionales – Evolución de la actividad económica en el año 2011 (p. 34). Central Bank of Chile. accessed on 22 March 2012.
Top exports
Chile's top exports in 2013.
Source: Central Bank of Chile's statistics database.
See also
List of Latin American and Caribbean countries by GDP growth
List of Latin American and Caribbean countries by GDP (nominal)
List of Latin American and Caribbean countries by GDP (PPP)
Bibliography
COLLIER, Simon and Sater, William F. A History of Chile, 1808–2002, New York and London, Cambridge University Press, 2004.
CONSTABLE, Pamela and Valenzuela, Arturo. A Nation of Enemies: Chile Under Pinochet. New York, W. W. Norton & Company, 1993.
PALEY, Julia. Marketing Democracy: Power and Social Movements in Post-Dictatorship Chile. University of California Press, 2001.
WINN, Peter (editor). Victims of the Chilean Miracle: Workers and Neoliberalism in the Pinochet Era, 1973–2002. Durham, NC: Duke University Press, 2004.
References
External links
Chile; A Top Stock market Performer
The Economic Transformation of Chile: A Model of Progress – HACER
Invest in Chile
World Reviews on Chile – this is Chile
Chile Export, Import, Trade Balance
Chile Trade
Tariffs applied by Chile as provided by ITC's ITC Market Access Map, an online database of customs tariffs and market requirements
Chile
Chile
5500 | https://en.wikipedia.org/wiki/Christmas%20Island | Christmas Island | The Territory of Christmas Island is an Australian external territory comprising the island of the same name. It is located in the Indian Ocean around south of Java and Sumatra and about northwest of the closest point on the Australian mainland. It has an area of .
Christmas Island had a population of 1,692 residents, the majority living in settlements on the northern edge of the island. The main settlement is Flying Fish Cove. Historically, Asian Australians of Chinese, Malay, and Indian descent formed the majority of the population. Today, around two-thirds of the island's population is estimated to have Straits Chinese origin (though just 22.2% of the population declared a Chinese ancestry in 2021), with significant numbers of Malays and European Australians and smaller numbers of Straits Indians and Eurasians. Several languages are in use, including English, Malay, and various Chinese dialects. Islam and Buddhism are major religions on the island. The religion question in the Australian census is optional, and 28% of the population do not declare their religious belief.
The first European to sight Christmas Island was Richard Rowe of the Thomas in 1615. Captain William Mynors named it on Christmas Day, 25 December 1643. It was first settled in the late 19th century. Christmas Island's geographic isolation and history of minimal human disturbance have led to a high level of endemism among its flora and fauna, which is of interest to scientists and naturalists. The majority (63%) of the island is included in the Christmas Island National Park, which features several areas of primary monsoonal forest. Phosphate, deposited originally as guano, has been mined on the island since 1899.
History
First visits by Europeans, 1643
The first European to sight the island was Richard Rowe of the Thomas in 1615. Captain William Mynors of the East India Company vessel Royal Mary named the island when he sailed past it on Christmas Day in 1643. The island was included on English and Dutch navigation charts early in the 17th century, but it was not until 1666 that a map published by Dutch cartographer Pieter Goos included the island. Goos labelled the island "Mony" or "Moni", the meaning of which is unclear.
English navigator William Dampier, aboard the privateer Charles Swan's ship Cygnet, made the earliest recorded visit to the sea around the island in March 1688; in his account of the voyage, he reported the island to be uninhabited. Dampier was trying to reach Cocos from New Holland. His ship was blown off course in an easterly direction, arriving at Christmas Island 28 days later. Dampier landed on the west coast, at "the Dales". Two of his crewmen became the first Europeans to set foot on Christmas Island.
Captain Daniel Beeckman of the Eagle passed the island on 5 April 1714, chronicled in his 1718 book, A Voyage to and from the Island of Borneo, in the East-Indies.
Exploration and annexation
The first attempt at exploring the island was in 1857 by the crew of the Amethyst. They tried to reach the summit of the island but found the cliffs impassable. During the 1872–1876 Challenger expedition to Indonesia, naturalist John Murray carried out extensive surveys.
In 1886, Captain John Maclear of the Royal Navy, having discovered an anchorage in a bay that he named "Flying Fish Cove", landed a party and made a small collection of the flora and fauna. In the next year, Pelham Aldrich visited the island for 10 days, accompanied by J. J. Lister, who gathered a larger biological and mineralogical collection. Among the rocks then obtained and submitted to Murray for examination were many of nearly pure phosphate of lime. This discovery led to annexation of the island by the British Crown on 6 June 1888.
Settlement and exploitation
Soon afterwards, a small settlement was established in Flying Fish Cove by G. Clunies Ross, the owner of the Cocos (Keeling) Islands to the southwest, to collect timber and supplies for the growing industry on Cocos. In 1897 the island was visited by Charles W. Andrews, who did extensive research on the natural history of the island on behalf of the British Museum.
Phosphate mining began in 1899 using indentured workers from Singapore, British Malaya, and China. John Davis Murray, a mechanical engineer and recent graduate of Purdue University, was sent to supervise the operation on behalf of the Phosphate Mining and Shipping Company. Murray was known as the "King of Christmas Island" until 1910, when he married and settled in London.
The island was administered jointly by the British Phosphate Commissioners and district officers from the United Kingdom Colonial Office through the Straits Settlements, and later the Crown Colony of Singapore. Hunt (2011) provides a detailed history of Chinese indentured labour on the island during those years. In September 1922, scientists unsuccessfully attempted to observe a solar eclipse from the island in order to test Albert Einstein's theory of relativity.
Japanese invasion
From the outbreak of the South-East Asian theatre of World War II in December 1941, Christmas Island was a target for Japanese occupation because of its rich phosphate deposits. A naval gun was installed, manned by a British officer, four non-commissioned officers (NCOs) and 27 Indian soldiers. The first attack was carried out on 20 January 1942 by a Japanese submarine, which torpedoed the Norwegian freighter Eidsvold. The vessel drifted and eventually sank off West White Beach. Most of the European and Asian staff and their families were evacuated to Perth.
In late February and early March 1942, there were two aerial bombing raids. Shelling from a Japanese naval group on 7 March led the district officer to hoist the white flag. But after the Japanese naval group sailed away, the British officer raised the Union Flag once more. During the night of 10–11 March, mutinous Indian troops, abetted by Sikh policemen, killed an officer and the four British NCOs in their quarters as they were sleeping. "Afterwards all Europeans on the island, including the district officer, who governed it, were lined up by the Indians and told they were going to be shot. But after a long discussion between the district officer and the leaders of the mutineers the executions were postponed and the Europeans were confined under armed guard in the district officer's house".
At dawn on 31 March 1942, a dozen Japanese bomber aircraft launched an attack, destroying the radio station. The same day, a Japanese fleet of nine vessels arrived, and the island was surrounded. About 850 men of the Japanese 21st and 24th Special Base Forces and 102nd Construction Unit came ashore at Flying Fish Cove and occupied the island. They rounded up the workforce, most of whom had fled to the jungle. Sabotaged equipment was repaired, and preparations were made to resume the mining and export of phosphate. Only 20 men from the 21st Special Base Force were left as a garrison.
Isolated acts of sabotage and the torpedoing of the cargo ship at the wharf on 17 November 1942 meant that only small amounts of phosphate were exported to Japan during the occupation. In November 1943, over 60% of the island's population were evacuated to Surabaya prison camps, leaving a population of just under 500 Chinese and Malays and 15 Japanese to survive as best they could. In October 1945, the island was re-occupied by British forces.
After the war, seven mutineers were traced and prosecuted by the Military Court in Singapore. In 1947, five of them were sentenced to death. However, following representations made by the newly independent government of India, their sentences were reduced to penal servitude for life.
Transfer to Australia
At Australia's request, the United Kingdom transferred sovereignty to Australia, with a $20 million payment from the Australian government to Singapore as compensation for the loss of earnings from the phosphate revenue. The United Kingdom's Christmas Island Act was given royal assent on 14 May 1958, enabling Britain to transfer authority over Christmas Island from Singapore to Australia by an order-in-council. Australia's Christmas Island Act was passed in September 1958, and the island was officially placed under the authority of the Commonwealth of Australia on 1 October 1958.
Under Commonwealth Cabinet Decision 1573 of 9 September 1958, D. E. Nickels was appointed the first official representative of the new territory. In a media statement on 5 August 1960, the minister for territories, Paul Hasluck, said, among other things, that, "His extensive knowledge of the Malay language and the customs of the Asian people ... has proved invaluable in the inauguration of Australian administration ... During his two years on the island he had faced unavoidable difficulties ... and constantly sought to advance the island's interests."
John William Stokes succeeded Nickels and served from 1 October 1960 to 12 June 1966. On his departure, he was lauded by all sectors of the island community. In 1968, the official secretary was retitled an administrator and, since 1997, Christmas Island and the Cocos (Keeling) Islands together have been called the Australian Indian Ocean Territories and share a single administrator resident on Christmas Island.
The settlement of Silver City was built in the 1970s, with aluminium-clad houses that were supposed to be cyclone-proof. The 2004 Indian Ocean earthquake and tsunami, centred off the western shore of Sumatra in Indonesia, resulted in no reported casualties, but some swimmers were swept out to sea for a time before being carried back in.
Refugee and immigration detention
From the late 1980s and early 1990s, boats carrying asylum seekers, mainly departing from Indonesia, began landing on the island. In 2001, Christmas Island was the site of the Tampa controversy, in which the Australian government stopped a Norwegian ship, MV Tampa, from disembarking 438 rescued asylum-seekers. The ensuing standoff and the associated political reactions in Australia were a major issue in the 2001 Australian federal election.
The Howard government operated the "Pacific Solution" from 2001 to 2007, excising Christmas Island from Australia's migration zone so that asylum seekers on the island could not apply for refugee status. Asylum seekers were relocated from Christmas Island to Manus Island and Nauru. In 2006, an immigration detention centre, containing approximately 800 beds, was constructed on the island for the Department of Immigration and Multicultural Affairs. The final cost, at over $400 million, was well above the original estimate. In 2007, the Rudd government decommissioned Manus Regional Processing Centre and Nauru detention centre; processing would then occur on Christmas Island itself.
In December 2010, 48 asylum-seekers died just off the coast of the island in what became known as the Christmas Island boat disaster when their boat hit the rocks near Flying Fish Cove, and then smashed against nearby cliffs. In the case Plaintiff M61/2010E v Commonwealth of Australia, the High Court of Australia ruled, in a 7–0 joint judgment, that asylum seekers detained on Christmas Island were entitled to the protections of the Migration Act. Accordingly, the Commonwealth was obliged to afford asylum seekers a minimum of procedural fairness when assessing their claims. After the interception of four boats in six days, carrying 350 people, the Immigration Department stated that there were 2,960 "irregular maritime arrivals" being held in the island's five detention facilities, which exceeded not only the "regular operating capacity" of 1,094 people, but also the "contingency capacity" of 2,724.
The Christmas Island Immigration Reception and Processing Centre closed in September 2018. The Morrison government announced it would re-open the centre in February the following year, after Australia's parliament passed legislation giving sick asylum seekers easier access to mainland hospitals. In the early days of the COVID-19 pandemic, the government opened parts of the Immigration Reception and Processing Centre to be used as a quarantine facility to accommodate Australian citizens who had been in Wuhan, the point of origin of the pandemic. The evacuees arrived on 3 February. They left 14 days later to their homes on the mainland.
Geography
The island is about in greatest length and in breadth. The total land area is , with of coastline. Steep cliffs along much of the coast rise abruptly to a central plateau. Elevation ranges from sea level to at Murray Hill. The island is mainly tropical rainforest, 63% of which is national parkland. The narrow fringing reef surrounding the island poses a maritime hazard.
Christmas Island lies northwest of Perth, Western Australia, south of Indonesia, east-northeast of the Cocos (Keeling) Islands, and west of Darwin, Northern Territory. Its closest point to the Australian mainland is from the town of Exmouth, Western Australia.
Only small parts of the shoreline are easily accessible. The island's perimeter is dominated by sharp cliff faces, making many of the island's beaches difficult to get to. Some of the easily accessible beaches include Flying Fish Cove (main beach), Lily Beach, Ethel Beach, and Isabel Beach, while the more difficult beaches to access include Greta Beach, Dolly Beach, Winifred Beach, Merrial Beach, and West White Beach, which all require a vehicle with four wheel drive and a difficult walk through dense rainforest.
Geology
The volcanic island is the flat summit of an underwater mountain more than high, which rises from about below the sea and only about above it. The mountain was originally a volcano, and some basalt is exposed in places such as The Dales and Dolly Beach, but most of the surface rock is limestone accumulated from coral growth. The karst terrain supports numerous anchialine caves. The summit of this mountain peak is formed by a succession of Tertiary limestones ranging in age from the Eocene or Oligocene up to recent reef deposits, with intercalations of volcanic rock in the older beds.
Marine Park
Reefs near the islands have healthy coral and are home to several rare species of marine life. The region, along with the Cocos (Keeling) Islands reefs, have been described as "Australia's Galapagos Islands".
In the 2021 budget the Australian Government committed $A39.1M to create two new marine parks off Christmas Island and the Cocos (Keeling) Islands. The parks will cover up to of Australian waters. After months of consultation with local people, both parks were approved in March 2022, with a total coverage of . The park will help to protect spawning of bluefin tuna from illegal international fishers, but local people will be allowed to practise fishing sustainably inshore in order to source food.
Climate
Christmas Island lies near the southern edge of the equatorial region. It has a tropical monsoon climate (Köppen Am) and temperatures vary little throughout the year. The highest temperature is usually around in March and April, while the lowest temperature is and occurs in August. There is a dry season from July to October with only occasional showers. The wet season is between November and June and includes monsoons, with downpours of rain at random times of the day. Tropical cyclones also occur in the wet season, bringing very strong winds, heavy rain, wave action, and storm surge.
Demographics
Pie chart: ancestry on Christmas Island (2021 census): Chinese 22.2%, Australian 17%, Malay 16.1%, English 12.5%, other 43%.
At the 2021 Australian census, the population of Christmas Island was 1,843. 22.2% of the population had Chinese ancestry (up from 18.3% in 2001), 17.0% had generic Australian ancestry (11.7% in 2001), 16.1% had Malay ancestry (9.3% in 2001), 12.5% had English ancestry (8.9% in 2001), and 3.8% of the population was of Indonesian origin. Most residents were born on Christmas Island, and many are of Chinese and Malay origin. 40.8% of people were born in Australia. The next most common country of birth was Malaysia, at 18.6%. 29.3% of the population spoke English as their family language, while 18.4% spoke Malay, 13.9% spoke Mandarin Chinese, 3.7% Cantonese and 2.1% Southern Min (Minnan). Additionally, there are small local populations of Malaysian Indians and Eurasians.
The 2016 Australian census recorded that the population of Christmas Island was 40.5% female and 59.5% male, while in 2011 the figures had been 29.3% female and 70.7% male. In contrast, the 2021 figures for the whole of Australia were 50.7% female, 49.3% male. Since 1998 there has been no provision for childbirth on the island; expectant mothers travel to mainland Australia approximately one month before their expected due date to give birth.
Ethnicity
Historically, the majority of Christmas Islanders were of Chinese, Malay and Indian origin, the initial permanent settlers. Today, the majority of residents are Chinese, with significant numbers of European Australians and Malays as well as smaller Indian and Eurasian communities. Since the turn of the 21st century and up to the present, Europeans have mainly confined themselves to the Settlement, where there is a small supermarket and several restaurants; the Malays live in Flying Fish Cove, also known as Kampong; and the Chinese reside in Poon Saan (Cantonese for "in the middle of the hill").
Language
The main languages spoken at home on Christmas Island, according to respondents, are English (28%), Mandarin (17%), Malay (17%), with smaller numbers of speakers of Cantonese (4%) and Hokkien (2%). 27% did not specify a language. If the survey results are representative, then approximately 38% speak English, 24% Mandarin, 23% Malay, and 5% Cantonese.
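The rescaled figures in the previous sentence follow from simple arithmetic: each reported share is divided by the 73% of respondents who specified a language. A minimal sketch of that calculation (the percentages are the census figures quoted above; the variable names are illustrative):

```python
# Rescale the reported home-language shares over the 73% of respondents who
# specified a language, reproducing the approximate figures given in the text.

reported = {"English": 28.0, "Mandarin": 17.0, "Malay": 17.0,
            "Cantonese": 4.0, "Hokkien": 2.0}   # % of all respondents
unspecified = 27.0
specified_total = 100.0 - unspecified           # 73% named a language

rescaled = {lang: round(share / specified_total * 100, 1) for lang, share in reported.items()}
print(rescaled)
# {'English': 38.4, 'Mandarin': 23.3, 'Malay': 23.3, 'Cantonese': 5.5, 'Hokkien': 2.7}
```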
Religion
In 2016, the religious affiliation of the population was estimated as: unspecified 27.7%, Muslim 19.4%, Buddhist 18.3%, no religion 15.3%, Roman Catholic 8.8%, Anglican 3.6%, Uniting Church 1.2%, other Protestant 1.7%, other Christian 3.3%, and other religions 0.6%.
Religious beliefs are diverse and include Buddhism, Taoism, Christianity, Islam and Confucianism. There is a mosque, a Christian church, a Baháʼí centre and around twenty Chinese temples and shrines, which include seven Buddhist temples (like Guan Yin Monastery (观音寺) at Gaze Road), ten Taoist temples (like Soon Tian Kong (顺天宫) in South Point and Grants Well Guan Di Temple) and shrines dedicated to Na Tuk Kong or Datuk Keramat on the island. There are many religious festivals, such as Spring Festival, Chap goh meh, Qingming Festival, Zhong Yuan Festival, Hari Raya, Christmas and Easter.
Government
Christmas Island is a non-self-governing external territory of Australia, part of the Australian Indian Ocean Territories administered by the Department of Infrastructure, Transport, Regional Development and Communications (from 29 November 2007 until 14 September 2010, administration was carried out by the Attorney-General's Department, and prior to this by the Department of Transport and Regional Services).
The legal system is under the authority of the Governor-General of Australia and Australian law. An administrator appointed by the governor-general represents the monarch and Australia and lives on the island. The territory falls under no formal state jurisdiction, but the Western Australian government provides many services as established by the Christmas Island Act.
The Australian government provides services through the Christmas Island Administration and the Department of Infrastructure and Regional Development. Under the federal government's Christmas Island Act 1958, Western Australian laws are applied to Christmas Island; non-application or partial application of such laws is at the discretion of the federal government. The act also gives Western Australian courts judicial power over Christmas Island. Christmas Island remains constitutionally distinct from Western Australia, however; the power of the state to legislate for the territory is delegated by the federal government. The kind of services typically provided by a state government elsewhere in Australia are provided by departments of the Western Australian government, and by contractors, with the costs met by the federal government. A unicameral Shire of Christmas Island with nine seats provides local government services and is elected by popular vote to serve four-year terms. Elections are held every two years, with four or five of the members standing for election. Women held two of the nine seats in the Christmas Island Shire Council. Its second president was Lillian Oh, from 1993 to 1995.
The next local election is scheduled for 21 October 2023 alongside elections in the Cocos (Keeling) Islands. Christmas Island residents who are Australian citizens vote in Australian federal elections. Christmas Island residents are represented in the House of Representatives by the Division of Lingiari in the Northern Territory and in the Senate by Northern Territory senators. At the 2019 federal election, the Labor Party received majorities from Christmas Island electors in both the House of Representatives and the Senate.
Defence and police
While there is no permanent Australian military presence on Christmas Island, the Royal Australian Navy and Australian Border Force deploy patrol boats to conduct surveillance and counter-migrant-smuggling patrols in adjacent waters. As of 2023, the Navy's Armidale-class patrol boats are in the process of being replaced by larger vessels.
The airfield on Christmas Island has a 2,100 m long runway; there is also an airfield on Cocos (West Island, to the west). Both airfields have scheduled jet services; however, the airfield on Cocos is being upgraded by the Australian Defence Force for the purpose of acting as a forward operating base for Australian surveillance and electronic warfare aircraft in the region.
The Australian Federal Police provides community policing services to Christmas Island and also carries out duties related to immigration enforcement, the processing of visiting aircraft and ships, and in coordinating emergency operations.
Residents' views
Residents find the system of administration frustrating, with the island run by bureaucrats in the federal government, but subject to the laws of Western Australia and enforced by federal police. There is a feeling of resignation that any progress on local issues is hampered by the confusing governance system. A number of islanders support self-governance, including shire President Gordon Thompson, who also believes that a lack of news media to cover local affairs had contributed to political apathy among residents.
Flag
In early 1986, the Christmas Island Assembly held a design competition for an island flag; the winning design was adopted as the informal flag of the territory for over a decade, and in 2002 it was made the official flag of Christmas Island.
Economy
Phosphate mining had been the only significant economic activity, but in December 1987 the Australian government closed the mine. In 1991, the mine was reopened by Phosphate Resources Limited, a consortium that included many of the former mine workers as shareholders and is the largest contributor to the Christmas Island economy.
With the support of the government, the $34 million Christmas Island Casino and Resort opened in 1993 but was closed in 1998. The resort has since re-opened without the casino.
In 2001, the Australian government agreed to support the creation of a commercial spaceport on the island; however, this has not been constructed and it appears that the project will not proceed. The Howard government built a temporary immigration detention centre on the island in 2001 and planned to replace it with a larger, modern facility at North West Point, until Howard's defeat in the 2007 elections.
Culture
Christmas Island cuisine can best be described as an eclectic combination of traditional Australian cuisine and Asian cuisine.
The main local organisation that promotes and supports the status and interests of female Christmas Islanders is the Christmas Island Women's Association which was established in 1989 and is a member organisation of the Associated Country Women of the World.
Christmas Island is well known for its biological diversity. There are many rare species of animals and plants on the island, making nature-walking a popular activity. Along with the diversity of species, many different types of caves exist, such as plateau caves, coastal caves, raised coastal caves and alcoves, sea caves, fissure caves, collapse caves, and basalt caves; most of these are near the sea and have been formed by the action of water. Altogether, there are approximately 30 caves on the island, with Lost Lake Cave, Daniel Roux Cave, and Full Frontal Cave being the most well-known. The many freshwater springs include Hosnies Spring Ramsar, which also has a mangrove stand.
The Dales is a rainforest in the western part of the island and consists of seven deep valleys, all of which were formed by spring streams. Hugh's Dale waterfall is part of this area and is a popular attraction. The annual breeding migration of the Christmas Island red crabs is a popular event.
Fishing is another common activity. There are many distinct species of fish in the oceans surrounding Christmas Island. Snorkelling and swimming in the ocean are two other activities that are extremely popular. Walking trails are also very popular, for there are many beautiful trails surrounded by extravagant flora and fauna. 63% of the island is covered by the Christmas Island National Park.
Sport
Cricket and rugby league are the two main organised sports on the island.
The Christmas Island Cricket Club was founded in 1959 and is now known as the Christmas Island Cricket and Sporting Club. Australian rules football was popular from 1995 to 2014, with games played between visiting Australian Navy crews and the locals. One international game, representing Australia, was played in Jakarta, Indonesia, in 2006 against the Jakarta Bintangs. Auskick was also run for children, who twice provided half-time entertainment at AFL games between 2006 and 2010. In 2019 the club celebrated its 60-year anniversary. The club entered its first representative team into the WACA Country Week in 2020, where they were runners-up in the F-division.
Rugby league is growing in the island: the first game was played in 2016, and a local committee, with the support of NRL Western Australia, is willing to organise matches with nearby Cocos Islands and to create a rugby league competition in the Indian Ocean region.
Flora and fauna
Christmas Island was uninhabited until the late 19th century, allowing many species to evolve without human interference. Two-thirds of the island has been declared a National Park, which is managed by the Australian Department of Environment and Heritage through Parks Australia. Christmas Island contains unique species, both of flora and fauna, some of which are threatened or have become extinct.
Flora
The dense rainforest has grown in the deep soils of the plateau and on the terraces. The forests are dominated by 25 tree species. Ferns, orchids and vines grow on the branches in the humid atmosphere beneath the canopy. The 135 plant species include at least 18 endemic species. The rainforest is in great condition despite the mining activities over the last 100 years. Areas that have been damaged by mining are now a part of an ongoing rehabilitation project.
Christmas Island's endemic plants include the trees Arenga listeri, Pandanus elatus and Dendrocnide peltata var. murrayana; the shrubs Abutilon listeri, Colubrina pedunculata, Grewia insularis and Pandanus christmatensis; the vines Hoya aldrichii and Zehneria alba; the herbs Asystasia alba, Dicliptera maclearii and Peperomia rossii; the grass Ischaemum nativitatis; the fern Asplenium listeri; and the orchids Brachypeza archytas, Flickingeria nativitatis, Phreatia listeri and Zeuxine exilis.
Fauna
Two species of native rats, the Maclear's and bulldog rats, have become extinct since the island was settled, while the Javan rusa deer has been introduced. The endemic Christmas Island shrew has not been seen since the mid-1980s and may be extinct, while the Christmas Island pipistrelle (a small bat) is presumed to be extinct.
The fruit bat (flying fox) species Pteropus natalis is found only on Christmas Island; its epithet natalis is a reference to that name. The species is probably the island's last remaining native mammal, and an important pollinator and rainforest seed-disperser; the population is also in decline and under increasing pressure from land clearing and introduced pest species. The flying fox's low rate of reproduction (one pup each year) and high infant mortality rate make it especially vulnerable, and its conservation status is critically endangered. Flying foxes are an 'umbrella' species, helping forests regenerate and other species survive in stressed environments.
The land crabs and seabirds are the most noticeable fauna on the island. Christmas Island has been identified by BirdLife International as both an Endemic Bird Area and an Important Bird Area because it supports five endemic species and five subspecies as well as over 1% of the world populations of five other seabirds.
Twenty terrestrial and intertidal species of crab have been described here, of which thirteen are regarded as true land crabs, being dependent on the ocean only for larval development. Robber crabs, known elsewhere as coconut crabs, also exist in large numbers on the island. The annual red crab mass migration to the sea to spawn has been called one of the wonders of the natural world. This takes place each year around November – after the start of the wet season and in synchronisation with the cycle of the moon. Once at the ocean, the mothers release the embryos where they can survive and grow until they are able to live on land.
The island is a focal point for seabirds of various species. Eight species or subspecies of seabirds nest on it. The most numerous is the red-footed booby, which nests in colonies, using trees on many parts of the shore terrace. The widespread brown booby nests on the ground near the edge of the seacliff and inland cliffs. Abbott's booby (listed as endangered) nests on tall emergent trees of the western, northern and southern plateau rainforest, the only remaining nesting habitat for this bird in the world.
Of the ten native land birds and shorebirds, seven are endemic species or subspecies. This includes the Christmas thrush and the Christmas imperial pigeon. Some 86 migrant bird species have been recorded as visitors to the island. The Christmas frigatebird has nesting areas on the northeastern shore terraces. The more widespread great frigatebirds nest in semi-deciduous trees on the shore terrace, with the greatest concentrations being in the North West and South Point areas. The common noddy and two species of bosun or tropicbirds also nest on the island, including the golden bosun (P. l. fulvus), a subspecies of the white-tailed tropicbird that is endemic to the island.
Six species of butterfly are known to occur on Christmas Island. These are the Christmas swallowtail (Papilio memnon), striped albatross (Appias olferna), Christmas emperor (Polyura andrewsi), king cerulean (Jamides bochus), lesser grass-blue (Zizina otis), and Papuan grass-yellow (Eurema blanda).
Insect species include the yellow crazy ant (Anoplolepis gracilipes), which was introduced to the island and has since been subjected to control efforts, including aerial spraying of the insecticide Fipronil, aimed at destroying the supercolonies that have emerged.
Media
Radio broadcasts to Christmas Island from Australia include ABC Radio National, ABC Kimberley, Triple J and Hit WA (formerly Red FM). All services are provided by satellite links from the mainland. Broadband internet became available to subscribers in urban areas in mid-2005 through the local internet service provider, CIIA (formerly dotCX). Because of its proximity to South East Asia, Christmas Island falls within many of the satellite footprints throughout the region. This results in ideal conditions for receiving various Asian broadcasts, which locals sometimes prefer to those emanating from Western Australia. Additionally, ionospheric conditions are conducive to terrestrial radio transmissions, from HF through VHF and sometimes into UHF. The island is home to a small array of radio equipment spanning much of the usable spectrum. A variety of government-owned and operated antenna systems are employed on the island to take advantage of this.
Television
Free-to-air digital television stations from Australia are broadcast in the same time zone as Perth and are broadcast from three separate locations:
Cable television from Australia, Malaysia, Singapore and the United States commenced in January 2013.
Telecommunications
Telephone services are provided by Telstra and are a part of the Australian network with the same prefix as Western Australia, South Australia and the Northern Territory (08). A GSM mobile telephone system on the 900 MHz band replaced the old analogue network in February 2005.
Newspapers
The Shire of Christmas Island publishes a fortnightly newsletter, The Islander. There are no independent newspapers.
Postage stamps
A postal agency was opened on the island in 1901 and sold stamps of the Straits Settlements. After the Japanese occupation (1942–1945), postage stamps of the British Military Administration in Malaya were in use, then stamps of Singapore. In 1958, the island received its own postage stamps after being placed under Australian custody. It had a large measure of philatelic and postal independence, managed first by the Phosphate Commission (1958–1969) and then by the island's administration (1969–1993). This ended on 2 March 1993 when Australia Post became the island's postal operator; Christmas Island stamps may be used in Australia and Australian stamps may be used on the island.
Transport
A container port exists at Flying Fish Cove, with an alternative container-unloading point to the east of the island at Norris Point, intended for use during the December-to-March "swell season" of rough seas. The standard gauge Christmas Island Phosphate Co.'s Railway from Flying Fish Cove to the phosphate mine was constructed in 1914. It was closed in December 1987, when the Australian government closed the mine, and has since been recovered as scrap, leaving only earthworks in places.
Virgin Australia provides two weekly flights to Christmas Island Airport from Perth, Western Australia. A fortnightly freight flight provides fresh supplies to the island. Rental cars are available from the airport; however, no franchised companies are represented. CI Taxi Service also operates most days. Due to the lack of 3G or 4G coverage, the island's sole taxi operator could not meet a requirement issued by the WA Department of Transport to install electronic meters, and was forced to close at the end of June 2019. The road network covers most of the island and is of generally good quality, although four-wheel-drive vehicles are needed to reach some of the more distant parts of the rainforest or the more isolated beaches on the rough dirt roads.
Education
The island-operated crèche is in the Recreation Centre. Christmas Island District High School, catering to students in grades P-12, is run by the Western Australian Education Department. There are no universities on Christmas Island. The island has one public library.
See also
Outline of Christmas Island
Index of Christmas Island–related articles
.cx
Notes
References
Further reading
Island countries of the Indian Ocean
Islands of Australia
Islands of Southeast Asia
Important Bird Areas of Australian External Territories
British rule in Singapore
English-speaking countries and territories
Countries and territories where Malay is an official language
States and territories of Australia
States and territories established in 1957
1957 establishments in Australia
Important Bird Areas of Indian Ocean islands
Endemic Bird Areas
5510 | https://en.wikipedia.org/wiki/Clipperton%20Island | Clipperton Island | Clipperton Island, also known as Clipperton Atoll and previously referred to as Clipperton's Rock, is an uninhabited French coral atoll in the eastern Pacific Ocean. The only French territory in the North Pacific, Clipperton is from Paris, France; from Papeete, Tahiti; and from Acapulco, Mexico.
Clipperton was documented by French merchant-explorers in 1711 and formally claimed as part of the French protectorate of Tahiti in 1858. Despite this, American guano miners began working the island in the early 1890s. As interest in the island grew, Mexico asserted a claim to the island based upon Spanish records from the 1520s that may have identified the island. Mexico established a small military colony on the island in 1905, but during the Mexican Revolution contact with the mainland became infrequent, most of the colonists died, and lighthouse keeper Victoriano Álvarez instituted a short, brutal reign as "king" of the island. Eleven survivors were rescued in 1917 and Clipperton was abandoned.
The dispute between Mexico and France over Clipperton was taken to binding international arbitration in 1909 with the Italian king, Victor Emmanuel III, deciding in 1931 that the island was French territory. Despite the ruling, Clipperton remained largely uninhabited until 1944 when the U.S. Navy established a weather station on the island to support its war efforts in the Pacific. France protested and as concerns about Japanese activity in the eastern Pacific waned the U.S. abandoned the site in late 1945.
Since the end of World War II, Clipperton has primarily been the site for scientific expeditions to study the island's wildlife and marine life, including its significant masked and brown booby colonies. It has also hosted climate scientists and amateur radio DX-peditions. Plans to develop the island for trade and tourism have been considered, but none have been enacted and the island remains mostly uninhabited with periodic visits from the French navy.
Geography
The coral island is located at in the East Pacific, southwest of Mexico, west of Nicaragua, west of Costa Rica and northwest of the Galápagos Islands in Ecuador. The nearest land is Socorro Island, about to the south-east in the Revillagigedo Archipelago. The nearest French-owned island is Hiva Oa in the Marquesas Islands of French Polynesia.
Despite its proximity to North America, Clipperton is often considered one of the eastern-most points of Oceania, due to being part of the French Indo-Pacific and to commonalities between its marine fauna and the marine fauna of Hawaii and Kiribati's Line Islands, with the island sitting along the migration path for animals in the Eastern Tropical Pacific region. The island is the only emerged part of the East Pacific Rise, as well as the only feature in the Clipperton Fracture Zone that breaks the ocean's surface, and it is one of the few islands in the Pacific that lacks an underwater archipelagic apron.
The atoll is low-lying and largely barren, with some scattered grasses and a few clumps of coconut palms (Cocos nucifera). The land ring surrounding the lagoon has a low average elevation, although a small volcanic outcropping, Clipperton Rock, rises above it on the southeast side. The surrounding reef hosts an abundance of corals and is partly exposed at low tide.
Clipperton Rock is the remains of the rim of the island's now-extinct volcano; because it includes this rocky outcropping, Clipperton is not a true atoll and is sometimes referred to as a 'near-atoll'. The surrounding reef, in combination with the weather, makes landing on the island difficult and anchoring offshore hazardous for larger ships; in the 1940s, American ships reported difficulties doing so.
Environment
The environment of Clipperton Island has been studied extensively, with the first observations and sample collections made in the 1800s. Modern research on Clipperton focuses primarily on climate science and migratory wildlife.
The SURPACLIP oceanographic expedition, a joint undertaking by the National Autonomous University of Mexico and the University of New Caledonia Nouméa, made extensive studies of the island in 1997. In 2001, French National Centre for Scientific Research geographer Christian Jost extended the 1997 studies through the French Passion 2001 expedition, which focused on the evolution of Clipperton's ecosystem. In 2003, cinematographer Lance Milbrand stayed on the island for 41 days, recording the adventure for National Geographic Explorer and plotting a GPS map of Clipperton for the National Geographic Society.
In 2005, a four-month scientific mission organised by Jean-Louis Étienne made a complete inventory of Clipperton's mineral, plant, and animal species; studied algae as deep as below sea level; and examined the effects of pollution. A 2008 expedition from the University of Washington's School of Oceanography collected sediment cores from the lagoon to study climate change over the past millennium.
Lagoon
Clipperton is a ring-shaped atoll that completely encloses a stagnant freshwater lagoon and measures in circumference and in area. The island is the only coral island in the eastern Pacific. The lagoon is devoid of fish and is shallow over parts of the eroded coral heads, but it contains some deep basins with depths of , including a spot known as 'the bottomless hole' with acidic water at its base. The water is described as being almost fresh at the surface and highly eutrophic. Seaweed beds cover approximately 45 per cent of the lagoon's surface. The rim averages in width, reaching in the west, and narrowing to in the north-east, where sea waves occasionally spill over into the lagoon. Ten islets are present in the lagoon, six of which are covered with vegetation, including the Egg Islands.
The closure of the lagoon approximately 170 years ago, which cut it off from seawater, formed a meromictic lake. The surface of the lagoon has a high concentration of phytoplankton that varies slightly with the seasons. As a result, the water column is stratified and does not mix, leaving the lagoon with an oxic, brackish upper layer and a deep, anoxic, sulfidic saline layer. At a certain depth the water shifts, with salinity rising and both pH and oxygen quickly decreasing. The deepest levels of the lagoon contain waters enriched with hydrogen sulfide, which prevents the growth of coral. Before the lagoon was closed off from seawater, coral and clams were able to survive in the area, as evidenced by fossilized specimens.
Studies of the water have found that microbial communities at the lagoon's surface are similar to those in surface waters elsewhere in the world, while deeper samples show a great diversity of both bacteria and archaea. In 2005, a group of French scientists discovered three dinoflagellate microalgae species in the lagoon: Peridiniopsis cristata, which was abundant; Durinskia baltica, which was known previously from other locations but was new to Clipperton; and Peridiniopsis cristata var. tubulifera, which is unique to the island. The lagoon also harbours millions of isopods, which are reported to deliver a painful sting.
While some sources have rated the lagoon water as non-potable, testimony from the crew of the tuna clipper M/V Monarch, stranded for 23 days in 1962 after their boat sank, indicates otherwise. Their report reveals that the lagoon water, while "muddy and dirty", was drinkable, despite not tasting very good. Several of the castaways drank it, with no apparent ill effects. Survivors of a Mexican military colony in 1917 (see below) indicated that they were dependent upon rain for their water supply, catching it in old boats. American servicemen on the island during World War II had to use evaporators to purify the lagoon's water. Aside from the lagoon and water caught from rain, no freshwater sources are known to exist.
Climate
The island has a tropical oceanic climate, with average temperatures of and highs up to . Annual rainfall is , and the humidity level is generally between 85 and 95 per cent, with December to March being the drier months. The prevailing winds are the southeast trade winds. The rainy season occurs from May to October, and the region is subject to tropical cyclones from April to September, though such storms often pass to the northeast of Clipperton. Clipperton lay in the path of Hurricane Felicia as it formed in 1997 and of Hurricane Sandra in 2015, and it has also been subjected to multiple tropical storms and depressions, including Tropical Storm Andres in 2003. The surrounding ocean waters are warm, pushed by equatorial and counter-equatorial currents, and have seen temperature increases due to global warming.
Flora and fauna
When Snodgrass and Heller visited in 1898, they reported that "no land plant is native to the island". Historical accounts from 1711, 1825, and 1839 show a low grassy or suffrutescent (partially woody) flora. During Marie-Hélène Sachet's visit in 1958, the vegetation was found to consist of a sparse cover of spiny grass and low thickets, a creeping plant (Ipomoea spp.), and stands of coconut palm. This low-lying herbaceous flora appears to be pioneer vegetation, and most of it is believed to be composed of recently introduced species. Sachet suspected that Heliotropium curassavicum, and possibly Portulaca oleracea, were native. Coconut palms and pigs introduced in the 1890s by guano miners were still present in the 1940s. The largest coconut grove is Bougainville Wood on the southwestern end of the island. On the northwest side of the atoll, the most abundant plant species are Cenchrus echinatus, Sida rhombifolia, and Corchorus aestuans. These plants form a shrub cover up to in height and are intermixed with Eclipta, Phyllanthus, and Solanum, as well as the taller Brassica juncea. The islets in the lagoon are primarily vegetated with Cyperaceae, Scrophulariaceae, and Ipomoea pes-caprae. A unique feature of Clipperton is that the vegetation is arranged in parallel rows of species, with dense rows of taller species alternating with lower, more open vegetation; this is assumed to be a result of the trench-digging method of phosphate mining used by guano hunters.
The only land animals known to exist are two species of reptiles (the Pacific stump-toed gecko and the copper-tailed skink), bright-orange land crabs known as Clipperton crabs (Johngarthia oceanica, classified prior to 2019 as Johngarthia planata), birds, and ship rats. The rats probably arrived when large fishing boats wrecked on the island in 1999 and 2000.
The pigs introduced in the 1890s reduced the crab population, which in turn allowed grassland to gradually cover about 80 per cent of the land surface. The elimination of these pigs in 1958, the result of a personal project by Kenneth E. Stager, caused most of this vegetation to disappear as the population of land crabs recovered. As a result, Clipperton is virtually a sandy desert with only 674 palms counted by Christian Jost during the Passion 2001 French mission and five islets in the lagoon with grass that the terrestrial crabs cannot reach. A 2005 report by the National Oceanic and Atmospheric Administration Southwest Fisheries Science Center indicated that the increased rat presence had led to a decline in both crab and bird populations, causing a corresponding increase in both vegetation and coconut palms. This report urgently recommended eradication of rats, so that vegetation might be reduced, and the island might return to its 'pre-human' state.
In 1825, Benjamin Morrell reported finding green sea turtles nesting on Clipperton, but later expeditions have not found nesting turtles there, possibly due to disruption from guano extraction, as well as the introduction of pigs and rats. Sea turtles found on the island appear to have been injured due to fishing practices. Morrell also reported fur and elephant seals on the island in 1825, but they too have not been recorded by later expeditions.
Birds are common on the island; Morrell noted in 1825: "The whole island is literally covered with sea-birds, such as gulls, whale-birds, gannets, and the booby". Thirteen species of birds are known to breed on the island and 26 others have been observed as visitors. The island has been identified as an Important Bird Area by BirdLife International because of the large breeding colony of masked boobies, with 110,000 individual birds recorded. Observed bird species include white terns, masked boobies, sooty terns, brown boobies, brown noddies, black noddies, great frigatebirds, coots, martins (swallows), cuckoos, and yellow warblers. Ducks and moorhens have been reported in the lagoon.
The coral reef on the north side of the island includes colonies more than high. The 2018 Tara Pacific expedition located five colonies of Millepora platyphylla at depths of , the first of this fire coral species known in the region. Among the Porites spp. stony corals, some bleaching was observed, along with other indications of disease or stress, including parasitic worms and microalgae.
The reefs that surround Clipperton have some of the highest concentrations of endemic species found anywhere, with more than 115 species identified. Many species are recorded in the area, including five or six endemics, such as the Clipperton angelfish (Holacanthus limbaughi), Clipperton grouper (Epinephelus clippertonensis), Clipperton damselfish (Stegastes baldwini) and Robertson's wrasse (Thalassoma robertsoni). Widespread species around the reefs include Pacific creolefish, blue-and-gold snapper, and various species of goatfish. In the water column, trevallies are predominant, including black jacks, bigeye trevally, and bluefin trevally. Also common around Clipperton are black triggerfish; several species of groupers, including leather bass and starry groupers; Mexican hogfish; whitecheek, convict, and striped-fin surgeonfish; yellow longnose and blacknosed butterflyfish; coral hawkfish; golden pufferfish; Moorish idols; parrotfish; and moray eels, especially speckled moray eels. A 2019 expedition noted that sharks in the waters around the island had increased in both density and individual size, particularly whitetip sharks. Galapagos sharks, reef sharks and hammerhead sharks are also present around Clipperton.
Three expeditions to Clipperton have collected sponge specimens, including U.S. President Franklin Roosevelt's visit in 1938. Of the 190 specimens collected, 20 species were noted, including nine found only at Clipperton. One of the endemic sponges, collected during the 1938 visit, was named Callyspongia roosevelti in honor of Roosevelt.
In April 2009, Steven Robinson, a tropical fish dealer from Hayward, California, traveled to Clipperton to collect Clipperton angelfish. Upon his return to the United States, he described the 52 illegally collected fish to federal wildlife authorities as king angelfish, not the rarer Clipperton angelfish, which he intended to sell for $10,000. On 15 December 2011, Robinson was sentenced to 45 days of incarceration, one year of probation, and a $2,000 fine.
Environmental threats
During the night of 10 February 2010, the Sichem Osprey, a Maltese chemical tanker, ran aground en route from the Panama Canal to South Korea. The ship contained of xylene, of soybean oil, and of tallow. All 19 crew members were reported safe, and the vessel reported no leaks. The vessel was re-floated on 6 March and returned to service.
In mid-March 2012, the crew from the Clipperton Project noted the widespread presence of refuse, particularly on the northeast shore and around Clipperton Rock. Debris, including plastic bottles and containers, creates a potentially harmful environment for the island's flora and fauna. This trash is concentrated on only two beaches (northeast and southwest), and the rest of the island is fairly clean. Other refuse was left behind by the American occupation of 1944–1945, the French missions of 1966–1969, and the 2008 scientific expedition. During a 2015 scientific and amateur radio expedition to Clipperton, the operating team discovered a package that contained of cocaine, suspected to have washed up after being discarded at sea. In April 2023, France's Passion 23 mission, supported by the surveillance frigate Germinal, collected more than of plastic waste from the island's beaches, along with a bale of cocaine.
The Sea Around Us Project estimates the Clipperton EEZ produces a harvest of of fish per year; however, because French naval patrols in the area are infrequent, this includes a significant amount of illegal fishing, along with lobster harvesting and shark finning, resulting in estimated losses for France of €0.42 per kilogram of fish caught.
As deep-sea mining of polymetallic nodules increases in the adjacent Clarion–Clipperton Zone, similar mining activity within France's exclusive economic zone surrounding the atoll may have an impact on marine life around Clipperton. Polymetallic nodules were discovered in the Clipperton EEZ during the Passion 2015 expedition.
Politics and government
The island is an overseas state private property of France, under the direct authority of the Minister of Overseas France. Although the island is French territory, it has no status within the European Union. Ownership of Clipperton was disputed between France and Mexico in the 19th and early 20th centuries before being settled through arbitration in 1931; the 'Clipperton Island Case' remains widely studied in international law textbooks.
In the late 1930s, as flying boats opened the Pacific to air travel, Clipperton Island was noted as a possible waypoint for a trans-Pacific route from the Americas to Asia via the Marquesas Islands in French Polynesia, bypassing Hawaii. However, France indicated no interest in developing commercial air traffic in the corridor.
After France ratified the United Nations Convention on the Law of the Sea (UNCLOS) in 1996, it reaffirmed the exclusive economic zone off Clipperton Island that had been established in 1976. Following changes to the areas that nations were allowed to claim under the third UNCLOS convention, France expanded the outer limits of both the territorial sea and the exclusive economic zone off Clipperton Island in 2018.
On 21 February 2007, administration of Clipperton was transferred from the High Commissioner of the Republic in French Polynesia to the Minister of Overseas France.
In 2015, French MP Philippe Folliot set foot on Clipperton, becoming the first elected official from France to do so. Folliot noted that visiting Clipperton was something he had wanted to do since he was nine years old. Following the visit, Folliot reported to the National Assembly on the pressing need to reaffirm French sovereignty over the atoll and its surrounding maritime claims. He also proposed establishing an international scientific research station on Clipperton and reforming the administrative oversight of the atoll.
In 2022, France passed legislation officially referring to the island as "La Passion–Clipperton".
History
Discovery and early claims
There are several claims to the first discovery of the island. The earliest recorded possible sighting is 24 January 1521 when Portuguese-born Spanish explorer Ferdinand Magellan discovered an island he named San Pablo after turning westward away from the American mainland during his circumnavigation of the globe. On 15 November 1528, Spaniard Álvaro de Saavedra Cerón discovered an island he called Isla Médanos in the region while on an expedition commissioned by his cousin, the Spanish conquistador Hernán Cortés, to find a route to the Philippines.
Although both San Pablo and Isla Médanos are considered possible sightings of Clipperton, the island was first charted by French merchant Michel Dubocage, commanding La Découverte, who arrived at the island on Good Friday, 3 April 1711; he was joined the following day by the captain of La Princesse. The island was given the name Île de la Passion ('Passion Island') as the date of rediscovery fell within Passiontide. They drew up the first map of the island and claimed it for France.
In August 1825, American sea captain Benjamin Morrell made the first recorded landing on Clipperton, exploring the island and making a detailed report of its vegetation.
The common name for the island comes from John Clipperton, an English pirate and privateer who fought the Spanish during the early 18th century, and who is said to have passed by the island. Some sources claim that he used it as a base for his raids on shipping.
19th century
Mexican claim 1821–1858
After its declaration of independence in 1821, Mexico took possession of the lands that had once belonged to Spain. As Spanish records noted the existence of the island as early as 1528, the territory was incorporated into Mexico. The Mexican constitution of 1917 explicitly included the island, under its Spanish name, as Mexican territory. The constitution was amended on 18 January 1934, after the sovereignty dispute over the island was settled in favor of France.
French claim (1858)
On 17 November 1858, Emperor Napoleon III annexed Clipperton as part of the French protectorate of Tahiti. Ship-of-the-line Lieutenant Victor Le Coat de Kervéguen published a notice of this annexation in Hawaiian newspapers to further cement France's claim to the island.
Guano mining claims (1892–1905)
In 1892, a claim on the island was filed with the U.S. State Department under the U.S. Guano Islands Act by Frederick W. Permien of San Francisco on behalf of the Stonington Phosphate Company. In 1893, Permien transferred those rights to a new company, the Oceanic Phosphate Company. The State Department rejected the application, noting France's prior claim to the island and that the claim had not been bonded as required by law. Additionally, during this time there were concerns in Mexico that the British or Americans would lay claim to the island.
Despite the lack of U.S. approval of its claim, the Oceanic Phosphate Company began mining guano on the island in 1895. Although the company had plans for as many as 200 workers on the island, at its peak only 25 men were stationed there. The company shipped its guano to Honolulu and San Francisco where it sold for between US$10 and US$20 per ton. In 1897, the Oceanic Phosphate Company began negotiations with the British Pacific Islands Company to transfer its interest in Clipperton; this drew the attention of both French and Mexican officials.
On 24 November 1897, French naval authorities arrived on the Duguay Trouin and found three Americans working on the island. The French ordered the American flag to be lowered. At that time, U.S. authorities assured the French that they did not intend to assert American sovereignty over the island. A few weeks later, on 13 December 1897, Mexico sent the gunboat La Demócrata and a group of marines to assert its claim on the island, evicting the Americans, raising the Mexican flag, and drawing a protest from France. From 1898 to 1905, the Pacific Islands Company worked the Clipperton guano deposits under a concession agreement with Mexico. In 1898, Mexico made a US$1.5 million claim against the Oceanic Phosphate Company for the guano shipped from the island from 1895 to 1897.
20th century
Mexican colonization (1905–1917)
In 1905, the Mexican government renegotiated its agreement with the British Pacific Islands Company, establishing a military garrison on the island a year later and erecting a lighthouse under the orders of Mexican President Porfirio Díaz. Captain Ramón Arnaud was appointed governor of Clipperton. At first he was reluctant to accept the post, believing it amounted to exile from Mexico, but he relented after being told that Díaz had personally chosen him to protect Mexico's interests in the international conflict with France. It was also noted that because Arnaud spoke English, French, and Spanish, he would be well equipped to help protect Mexico's sovereignty over the territory. He arrived on Clipperton as governor later that year.
By 1914 around 100 men, women, and children lived on the island, resupplied every two months by a ship from Acapulco. With the escalation of fighting in the Mexican Revolution, regular resupply visits ceased, and the inhabitants were left to their own devices. On 28 February 1914, the schooner Nokomis wrecked on Clipperton; using a still-seaworthy lifeboat, four members of the crew volunteered to row to Acapulco for help. A rescue ship arrived months later to pick up the schooner's crew. While there, its captain offered to transport the survivors of the colony back to Acapulco; Arnaud refused, as he believed a supply ship would soon arrive.
By 1917, all but one of the male inhabitants had died. Many had perished from scurvy, while others, including Arnaud, died during an attempt to sail after a passing ship to fetch help. Lighthouse keeper Victoriano Álvarez was the last man on the island, together with 15 women and children. Álvarez proclaimed himself 'king' and began a campaign of rape and murder before being killed by Tirza Rendón, who was his favourite victim. Almost immediately after Álvarez's death, the last survivors, four women and seven children, were picked up by a U.S. Navy gunboat on 18 July 1917.
Final arbitration of ownership (1931)
Throughout Mexico's occupation of Clipperton, France insisted on its ownership of the island, and lengthy diplomatic correspondence between the two countries led to a treaty on 2 March 1909, agreeing to seek binding international arbitration by Victor Emmanuel III of Italy, with each nation promising to abide by his determination. In 1931, Victor Emmanuel III issued his arbitral decision in the 'Clipperton Island Case', declaring Clipperton a French possession. Mexican President Pascual Ortiz Rubio, in response to public opinion that considered the Italian king biased towards France, consulted international experts on the validity of the decision, but ultimately Mexico accepted Victor Emmanuel's findings. France formally took possession of Clipperton on 26 January 1935.
U.S. presidential visit
President Franklin D. Roosevelt made a stopover at Clipperton in July 1938 as part of a fishing expedition to the Galápagos Islands and other points along the Central and South American coasts. At the island, Roosevelt and his party spent time fishing for sharks, after which Dr. Waldo L. Schmitt of the Smithsonian Institution went ashore with some crew to gather scientific samples and make observations of the island.
Roosevelt had previously tried to visit Clipperton in July 1934 after transiting the Panama Canal en route to Hawaii on the Houston; he had heard the area was good for fishing, but heavy seas prevented the party from lowering a boat when they reached the island. On 19 July 1934, soon after the stop at Clipperton, the rigid airship USS Macon rendezvoused with the Houston, and one of the Macon's Curtiss F9C biplanes delivered mail to the president.
American occupation (1944–1945)
In April 1944, a U.S. Navy ship took observations of Clipperton while en route to Hawaii. After an overflight of the island confirmed that Clipperton was uninhabited, a vessel carrying aerological specialists and personnel departed San Francisco on 4 December 1944, followed several days later by another carrying provisions, heavy equipment, and materials for construction of a U.S. Navy weather station on the island. The sailors at the weather station were armed in case of a possible Japanese attack in the region. Landing on the island proved challenging: LST-563 grounded on the reef, and the salvage ship Seize, brought in to help refloat her, also grounded. Finally, in January 1945, other vessels were able to free the Seize and to offload equipment from LST-563 before it was abandoned.
Once the weather station was completed and sailors garrisoned on the island, the U.S. government informed the British, French, and Mexican governments of the station and its purpose. Every day at 9 a.m., the 24 sailors stationed at the Clipperton weather station sent up weather balloons to gather information. Later, Clipperton was considered for an airfield to shift traffic between North America and Australia away from the front lines of the Pacific Theater.
In April 1943, during a meeting between presidents Roosevelt of the U.S. and Avila Camacho of Mexico, the topic of Mexican ownership of Clipperton was raised. The American government seemed interested in Clipperton being handed over to Mexico due to the importance the island might play in both commercial and military air travel, as well as its proximity to the Panama Canal.
Although these talks were informal, the U.S. backed away from supporting any Mexican claim on Clipperton, as Mexico had previously accepted the 1931 arbitration decision. The U.S. government also felt it would be easier to obtain a military base on the island from France. However, after the French government was notified about the weather station, relations on this matter deteriorated rapidly, with the French government sending a formal note of protest in defense of French sovereignty. In response, the U.S. extended an offer for the French military to operate the station, or for the Americans to leave the weather station under the same framework previously agreed to for other weather stations in France and North Africa. There was additional concern within the newly formed Provisional Government of the French Republic that notification of the installation had been made to military rather than civilian leadership.
French Foreign Minister Georges Bidault said of the incident: "This is very humiliating to us we are anxious to cooperate with you, but sometimes you do not make it easy". French Vice Admiral Raymond Fenard requested during a meeting with U.S. Admiral Lyal A. Davidson that civilians be given access to Clipperton and the surrounding waters, but the U.S. Navy denied the request because there was an active military installation on the island. Instead, Davidson offered to transport a French officer to the installation and reassured the French government that the United States did not wish to claim sovereignty over the island. During these discussions between the admirals, French diplomats in Mexico attempted to hire the Mexican vessel Pez de Plata out of Acapulco to bring a military attaché to Clipperton under a cover story that they were going on a shark fishing trip. At the request of the Americans, the Mexican government refused to allow the Pez de Plata to leave port. French officials then attempted to leave in another, smaller vessel, filing a false destination with the local port authorities, but were also stopped by Mexican officials.
During this period, French officials in Mexico leaked information about their concerns, as well as about the arrival of seaplanes at Clipperton, to The New York Times and Newsweek; both stories were refused publishing clearance on national security grounds. In February 1945, the U.S. Navy transported French officer Lieutenant Louis Jampierre from San Diego to Clipperton, where he visited the installation and returned to the United States that afternoon. As the war in the Pacific progressed, concerns about Japanese incursions into the Eastern Pacific diminished, and in September 1945 the U.S. Navy began withdrawing from Clipperton. During the evacuation, munitions were destroyed, but significant matériel was left on the island. By 21 October 1945, the last U.S. Navy staff at the weather station had left Clipperton.
Post-World War II developments
Since its abandonment by American forces at the end of World War II, the island has been visited by sports fishermen, French naval patrols, and Mexican tuna and shark fishermen. There have been infrequent scientific and amateur radio expeditions and, in 1978, Jacques-Yves Cousteau visited with a team of divers and a survivor of the 1917 evacuation to film a television special called Clipperton: The Island that Time Forgot.
The island was visited by ornithologist Ken Stager of the Los Angeles County Museum in 1958. Appalled at the depredations visited by feral pigs upon the island's brown booby and masked booby colonies (reduced to 500 and 150 birds, respectively), Stager procured a shotgun and killed all 58 pigs. By 2003, the booby colonies had grown to 25,000 brown boobies and 112,000 masked boobies, making Clipperton home to the world's second-largest brown booby colony and its largest masked booby colony. In 1994, Stager's story inspired Bernie Tershy and Don Croll, both professors at the University of California, Santa Cruz Long Marine Lab, to found the non-profit Island Conservation, which works to prevent extinctions through the removal of invasive species from islands.
When the independence of Algeria in 1962 threatened French nuclear testing sites in North Africa, the French Ministry of Defence considered Clipperton as a possible replacement site. This was eventually ruled out due to the island's hostile climate and remote location, but the island was used to house a small scientific mission to collect data on nuclear fallout from other nuclear tests. From 1966 to 1969, the French military sent a series of missions, called "Bougainville," to the island. The Bougainville missions unloaded some 25 tons of equipment, including sanitary facilities, traditional Polynesian dwellings, drinking water treatment tanks, and generators. The missions sought to surveil the island and its surrounding waters, observe weather conditions, and evaluate potential rehabilitation of the World War II era airstrip. By 1978, the structures built during the Bougainville missions had become quite derelict. The French explored reopening the lagoon and developing a harbour for trade and tourism during the 1970s, but this too was abandoned. An automatic weather installation was completed on 7 April 1980, with data collected by the station transmitted via satellite to Brittany.
In 1981, the Académie des sciences d'outre-mer recommended the island have its own economic infrastructure, with an airstrip and a fishing port in the lagoon. This would mean opening the lagoon to the ocean by creating a passage in the atoll rim. To oversee this, the French government reassigned Clipperton from the High Commissioner for French Polynesia to the direct authority of the French government, classifying the island as an overseas state private property administered by France's Overseas Minister. In 1986, the Company for the Study, Development and Exploitation of Clipperton Island (French acronym, SEDEIC) and French officials began outlining a plan for the development of Clipperton as a fishing port, but due to economic constraints, the distance from markets, and the small size of the atoll, nothing beyond preliminary studies was undertaken and plans for the development were abandoned. In the mid-1980s, the French government began efforts to enlist citizens of French Polynesia to settle on Clipperton; these plans were ultimately abandoned as well.
In November 1994, the French Space Agency requested the help of NASA to track the first stage breakup of the newly designed Ariane 5 rocket. After spending a month on Clipperton setting up and calibrating radar equipment to monitor Ariane flight V88, the mission ended in disappointment when the rocket disintegrated 37 seconds after launch due to a software bug.
Despite Mexico's acceptance of the 1931 arbitration decision that Clipperton was French territory, the right of Mexican fishing vessels to work Clipperton's territorial waters has remained a point of contention. A 2007 treaty, reaffirmed in 2017, grants Mexican vessels access to Clipperton's fisheries so long as authorization is sought from the French government, conservation measures are followed, and catches are reported; however, the lack of regular monitoring of the fisheries by France makes verifying compliance difficult.
Castaways
In May 1893, Charles Jensen and "Brick" Thurman of the Oceanic Phosphate Company were left on the island by the company's ship Compeer with 90 days' worth of supplies in order to prevent other attempts to claim the island and its guano. Before sailing for Clipperton, Jensen wrote a letter to the Secretary of the Coast Seamen's Union, Andrew Furuseth, instructing him to make it known that the two had been stranded there if the Oceanic Phosphate Company had not sent a vessel to Clipperton within six weeks of the Compeer's return. The Oceanic Phosphate Company denied it had left the men without adequate supplies and contracted the schooner Viking to retrieve them in late August. The Viking rescued the men, who had used seabirds' eggs to supplement their supplies, and returned them to San Francisco on 31 October.
In May 1897, the British cargo vessel Kinkora wrecked on Clipperton; the crew was able to salvage food and water from the ship, allowing them to survive on the island in relative comfort. During the crew's time on the island, a passing vessel offered to take the men to the mainland for $1,500, which they refused. Instead, eight of the men loaded a lifeboat and rowed to Acapulco for help. After the first mate of the Kinkora, Mr. McMarty, arrived in Acapulco, HMS Comus set sail from British Columbia to rescue the sailors.
In 1947, five American fishermen from San Pedro, California, were rescued from Clipperton after surviving on the island for six weeks.
In early 1962, the island provided a home to nine crewmen of the sunken tuna clipper MV Monarch, stranded for 23 days from 6 February to 1 March. They reported that the lagoon water was drinkable, although they preferred to drink water from the coconuts they found. Unable to use any of the dilapidated buildings, they constructed a crude shelter from cement bags and tin salvaged from Quonset huts built by the American military 20 years earlier. Wood from the huts was used for firewood, and fish caught off the fringing reef, combined with potatoes and onions they had saved from their sinking vessel, augmented the island's meager supply of coconuts. The crewmen reported that they tried eating birds' eggs but found them rancid, and after trying to cook a 'little black bird' they decided it did not have enough meat to make the effort worthwhile. Pigs had been eradicated, but the crewmen reported seeing their skeletons around the atoll. The crewmen were eventually discovered by another fishing boat and rescued by a U.S. Navy destroyer.
Amateur radio DX-peditions
Clipperton has long been an attractive destination for amateur radio groups due to its remoteness, permit requirements, history, and interesting environment. While some radio operation has been part of other visits to the island, major DX-peditions have included FO0XB (1978), FO0XX (1985), FO0CI (1992), FO0AAA (2000), and TX5C (2008).
In March 2014, the Cordell Expedition, organised and led by Robert Schmieder, combined a radio DX-pedition using callsign TX5K with environmental and scientific investigations. The team of 24 radio operators made more than 114,000 contacts, breaking the previous record of 75,000. The activity included extensive operation in the 6-meter band, including Earth–Moon–Earth communication (EME) or 'moonbounce' contacts. A notable accomplishment was the use of DXA, a real-time satellite-based online graphic radio log web page, allowing anyone with a browser to see the radio activity. Scientific work conducted during the expedition included the first collection and identification of foraminifera and extensive aerial imaging of the island using kite-borne cameras. The team included two scientists from the University of Tahiti and a French TV documentary crew from Thalassa.
In April 2015, Alain Duchauchoy, F6BFH, broadcast from Clipperton using callsign TX5P as part of the Passion 2015 scientific expedition to Clipperton Island. Duchauchoy also researched Mexican use of the island during the early 1900s as part of the expedition.
See also
Desert island
Lists of islands
Notes
References
External links
Isla Clipperton o 'Los náufragos mexicanos − 1914/1917' [Clipperton or 'The Mexican Castaways – 1914/1917']
Photo galleries
The first dive trip to Clipperton Island aboard the Nautilus Explorer — pictures taken during a 2007 visit
Clipperton Island 2008 — Flickr gallery containing 94 large photos from a 2008 visit
3D photos of Clipperton Island 2010 — 3D anaglyphs
Visits and expeditions
2000 DXpedition to Clipperton Island — website of a visit by amateur radio enthusiasts in 2000
Diving trips to Clipperton atoll — from NautilusExplorer.com
5520 | https://en.wikipedia.org/wiki/Cocos%20%28Keeling%29%20Islands | Cocos (Keeling) Islands | The Cocos (Keeling) Islands, officially the Territory of Cocos (Keeling) Islands, are an Australian external territory in the Indian Ocean, comprising a small archipelago approximately midway between Australia and Sri Lanka and relatively close to the Indonesian island of Sumatra. The territory's dual name (official since the islands' incorporation into Australia in 1955) reflects that the islands have historically been known as either the Cocos Islands or the Keeling Islands.
The territory consists of two atolls made up of 27 coral islands, of which only two – West Island and Home Island – are inhabited. The population of around 600 people consists mainly of Cocos Malays, who mostly practice Sunni Islam and speak a dialect of Malay as their first language. The territory is administered by the Australian federal government's Department of Infrastructure, Transport, Regional Development, Communications and the Arts as an Australian external territory and together with Christmas Island (which is about to the east) forms the Australian Indian Ocean Territories administrative grouping. However, the islanders do have a degree of self-government through the local shire council. Many public services – including health, education, and policing – are provided by the state of Western Australia, and Western Australian law applies except where the federal government has determined otherwise. The territory also uses Western Australian postcodes.
The islands were discovered in 1609 by the British sea captain William Keeling, but no settlement occurred until the early 19th century. One of the first settlers was John Clunies-Ross, a Scottish merchant; much of the island's current population is descended from the Malay workers he brought in to work his copra plantation. The Clunies-Ross family ruled the islands as a private fiefdom for almost 150 years, with the head of the family usually recognised as resident magistrate. The British annexed the islands in 1857, and for the next century they were administered from either Ceylon or Singapore. The territory was transferred to Australia in 1955, although until 1979 virtually all of the territory's real estate still belonged to the Clunies-Ross family.
Name
The islands have been called the Cocos Islands (from 1622), the Keeling Islands (from 1703), the Cocos–Keeling Islands (since James Horsburgh in 1805) and the Keeling–Cocos Islands (19th century). Cocos refers to the abundant coconut trees, while Keeling refers to William Keeling, who discovered the islands in 1609.
John Clunies-Ross, who sailed there in the Borneo in 1825, called the group the Borneo Coral Isles, restricting Keeling to North Keeling, and calling South Keeling "the Cocos properly so called". The form Cocos (Keeling) Islands, attested from 1916, was made official by the Cocos Islands Act 1955.
The territory's Malay name is Pulu Kokos (Keeling). Sign boards on the island also feature Malay translations.
Geography
The Cocos (Keeling) Islands consist of two flat, low-lying coral atolls with an area of , of coastline, a highest elevation of and thickly covered with coconut palms and other vegetation. The climate is pleasant, moderated by the southeast trade winds for about nine months of the year and with moderate rainfall. Tropical cyclones may occur in the early months of the year.
North Keeling Island is an atoll consisting of just one C-shaped island, a nearly closed atoll ring with a small opening into the lagoon, about wide, on the east side. The island measures in land area and is uninhabited. The lagoon is about . North Keeling Island and the surrounding sea to from shore form the Pulu Keeling National Park, established on 12 December 1995. It is home to the only surviving population of the endemic, and endangered, Cocos Buff-banded Rail.
South Keeling Islands is an atoll consisting of 24 individual islets forming an incomplete atoll ring, with a total land area of . Only Home Island and West Island are populated. The Cocos Malays maintain weekend shacks, referred to as pondoks, on most of the larger islands.
There are no rivers or lakes on either atoll. Fresh water resources are limited to water lenses on the larger islands, underground accumulations of rainwater lying above the seawater. These lenses are accessed through shallow bores or wells.
Flora and fauna
Climate
The Cocos (Keeling) Islands experience a tropical rainforest climate (Af) under the Köppen climate classification; the archipelago lies approximately midway between the equator and the Tropic of Capricorn. The archipelago has two distinct seasons, the wet season and the dry season. The wettest month is April, with precipitation totaling , and the driest month is October, with precipitation totaling . Owing to the strong maritime influence, temperatures vary little even though the islands lie some distance from the Equator. The hottest month is March, with an average high temperature of , while the coolest month is September, with an average low temperature of .
Demographics
According to the 2021 Australian Census, the current population of the Cocos Islands is 593 people. The median age of the population is 40 years, slightly older than the median Australian population age of 38 years. As of 2021, there are no people living on the Cocos Islands who identify as Indigenous Australians (Aboriginal or Torres Strait Islander).
The majority religion of the Cocos Islands is Islam, with 65.6% of the total population identifying as Muslim, followed by Unspecified (15.3%), Non-religious (14.0%), Catholic (2.0%), Anglican (1.5%). The remaining 1.6% of Cocos Islanders identify as secular or hold various other beliefs (including atheism, agnosticism and unspecified spiritual beliefs).
73.5% of the population were born in Australia - either on the mainland, on the Cocos Islands, or in another Australian territory. The remaining 26.5% born outside of Australia come from various countries, including Malaysia (4.0%), England (1.3%), New Zealand (1.2%), Singapore (0.5%) and Argentina (0.5%), among others. 61.2% of the population speak Malay rather than English at home, while 19.1% use English as their primary language and 3.5% speak another language (including Spanish and various Austronesian and African languages).
Kaum Ibu (Women's Group) is a women's rights organisation that represents the view of women at a local and national level.
History
Discovery and early history
The archipelago was discovered in 1609 by Captain William Keeling of the East India Company, on a return voyage from the East Indies. North Keeling was sketched by Ekeberg, a Swedish captain, in 1749, showing the presence of coconut palms. It also appears on a 1789 chart produced by British hydrographer Alexander Dalrymple.
In 1825, Scottish merchant seaman Captain John Clunies-Ross stopped briefly at the islands on a trip to India, nailing up a Union Jack and planning to return and settle on the islands with his family in the future. Wealthy Englishman Alexander Hare had similar plans, and hired a captain (coincidentally, Clunies-Ross's brother) to bring him and a volunteer harem of 40 Malay women to the islands, where he hoped to establish his private residence. Hare had previously served as resident of Banjarmasin, a town in Borneo, and found that "he could not confine himself to the tame life that civilisation affords".
Clunies-Ross returned two years later with his wife, children and mother-in-law, and found Hare already established on the island and living with the private harem. A feud grew between the two. Clunies-Ross's eight sailors "began at once the invasion of the new kingdom to take possession of it, women and all".
After some time, Hare's women began deserting him, finding partners among Clunies-Ross's sailors instead. Disheartened, Hare left the island and died in Bencoolen in 1834. Encouraged by members of the former harem, Clunies-Ross then recruited Malays to come to the island for work and wives.
Clunies-Ross's workers were paid in a currency called the Cocos rupee, a currency John Clunies-Ross minted himself that could only be redeemed at the company store.
On 1 April 1836, HMS Beagle, under Captain Robert FitzRoy, arrived to take soundings establishing the profile of the atoll as part of the ship's survey expedition. To the naturalist Charles Darwin, aboard the ship, the results supported a theory he had developed of how atolls formed, which he later published as The Structure and Distribution of Coral Reefs. He studied the natural history of the islands and collected specimens. Darwin's assistant Syms Covington noted that "an Englishman [he was in fact Scottish] and HIS family, with about sixty or seventy mulattos from the Cape of Good Hope, live on one of the islands. Captain Ross, the governor, is now absent at the Cape."
Annexation by the British Empire
The islands were annexed by the British Empire in 1857. The annexation was carried out by Captain Stephen Grenville Fremantle, who claimed the islands for the British Empire and appointed Ross II as Superintendent. In 1878, by Letters Patent, the Governor of Ceylon was made Governor of the islands, and, by further Letters Patent in 1886, responsibility for the islands was transferred to the Governor of the Straits Settlements to exercise his functions as "Governor of Cocos Islands".
The islands were made part of the Straits Settlement under an Order in Council of 20 May 1903. Meanwhile, in 1886 Queen Victoria had, by indenture, granted the islands in perpetuity to John Clunies-Ross. The head of the family enjoyed semi-official status as Resident Magistrate and Government representative.
In 1901 a telegraph cable station was established on Direction Island. Undersea cables went to Rodrigues, Mauritius, Batavia, Java and Fremantle, Western Australia. In 1910 a wireless station was established to communicate with passing ships. The cable station ceased operation in 1966.
World War I
On the morning of 9 November 1914, the islands became the site of the Battle of Cocos, one of the first naval battles of World War I. A landing party from the German cruiser SMS Emden captured and disabled the wireless and cable communications station on Direction Island, but not before the station was able to transmit a distress call. An Allied troop convoy was passing nearby, and the Australian cruiser HMAS Sydney was detached from the convoy escort to investigate.
Sydney spotted the island and Emden at 09:15, with both ships preparing for combat. At 11:20, the heavily damaged Emden beached herself on North Keeling Island. The Australian warship broke off to pursue Emden's supporting collier, which scuttled herself, then returned to North Keeling Island at 16:00. At this point, Emden's battle ensign was still flying, usually a sign that a ship intends to continue fighting. After no response to instructions to lower the ensign, two salvoes were fired into the beached cruiser, after which the Germans lowered the flag and raised a white sheet. Sydney had orders to ascertain the status of the transmission station, but returned the next day to provide medical assistance to the Germans.
Casualties totaled 134 personnel killed and 69 wounded aboard Emden, compared to four killed and 16 wounded aboard Sydney. The German survivors were taken aboard the Australian cruiser, which caught up with the troop convoy in Colombo on 15 November; the prisoners were then transported to Malta and handed over to the British Army. An additional 50 German personnel from the shore party, unable to be recovered before Sydney arrived, commandeered a schooner and escaped from Direction Island, eventually reaching Constantinople. Emden was the last active Central Powers warship in the Indian or Pacific Ocean; her destruction meant troopships from Australia and New Zealand could sail without naval escort, and Allied ships could be deployed elsewhere.
World War II
During World War II, the cable station was once again a vital link. The Cocos were valuable for direction finding by the Y service, the worldwide intelligence system used during the war.
Allied planners noted that the islands might be seized as an airfield for German planes and as a base for commerce raiders operating in the Indian Ocean. Following Japan's entry into the war, Japanese forces occupied neighbouring islands. To avoid drawing their attention to the Cocos cable station and its islands' garrison, the seaplane anchorage between Direction and Horsburgh islands was not used. Radio transmitters were also kept silent, except in emergencies.
After the Fall of Singapore in 1942, the islands were administered from Ceylon and West and Direction Islands were placed under Allied military administration. The islands' garrison initially consisted of a platoon from the British Army's King's African Rifles, located on Horsburgh Island, with two guns to cover the anchorage. The local inhabitants all lived on Home Island. Despite the importance of the islands as a communication centre, the Japanese made no attempt either to raid or to occupy them and contented themselves with sending over a reconnaissance aircraft about once a month.
On the night of 8–9 May 1942, 15 members of the garrison, from the Ceylon Defence Force, mutinied under the leadership of Gratien Fernando. The mutineers were said to have been provoked by the attitude of their British officers and were also supposedly inspired by Japanese anti-British propaganda. They attempted to take control of the gun battery on the islands. The Cocos Islands Mutiny was crushed, but the mutineers murdered one non-mutinous soldier and wounded one officer. Seven of the mutineers were sentenced to death at a trial that was later alleged to have been improperly conducted, though the guilt of the accused was admitted. Four of the sentences were commuted, but three men were executed, including Fernando. These were to be the only British Commonwealth soldiers executed for mutiny during the Second World War.
On 25 December 1942, the Japanese submarine I-166 bombarded the islands but caused no damage.
Later in the war, two airstrips were built, and three bomber squadrons were moved to the islands to conduct raids against Japanese targets in South East Asia and to provide support during the planned reinvasion of Malaya and reconquest of Singapore. The first aircraft to arrive were Supermarine Spitfire Mk VIIIs of No. 136 Squadron RAF. These were joined by Liberator bombers from No. 321 (Netherlands) Squadron RAF (flown by exiled Dutch personnel serving with the Royal Air Force), which were also stationed on the islands. When No. 99 and No. 356 RAF squadrons arrived on West Island in July 1945, they brought with them a daily newspaper called Atoll, which contained news of what was happening in the outside world. Run by airmen in their off-duty hours, it achieved fame when dropped by Liberator bombers on POW camps over the heads of the Japanese guards.
In 1946, the administration of the islands reverted to Singapore and it became part of the Colony of Singapore.
Transfer to Australia
On 23 November 1955, the islands were transferred from the United Kingdom to the Commonwealth of Australia. Immediately before the transfer the islands were part of the United Kingdom's Colony of Singapore, in accordance with the Straits Settlements (Repeal) Act, 1946 of the United Kingdom and the British Settlements Acts, 1887 and 1945, as applied by the Act of 1946. The legal steps for effecting the transfer were as follows:
The Commonwealth Parliament and the Government requested and consented to the enactment of a United Kingdom Act for the purpose.
The Cocos Islands Act, 1955, authorized Her Majesty, by Order in Council, to direct that the islands should cease to form part of the Colony of Singapore and be placed under the authority of the Commonwealth.
By the Cocos (Keeling) Islands Act, 1955, the Parliament of the Commonwealth provided for the acceptance of the islands as a territory under the authority of the Commonwealth and for its government.
The Cocos Islands Order in Council, 1955, made under the United Kingdom Act of 1955, provided that upon the appointed day (23 November 1955) the islands should cease to form part of the Colony of Singapore and be placed under the authority of the Commonwealth of Australia.
The reason for this comparatively complex machinery lay in the terms of the Straits Settlements (Repeal) Act, 1946; according to Sir Kenneth Roberts-Wray, "any other procedure would have been of doubtful validity". The transfer involved three steps: separation from the Colony of Singapore; transfer by the United Kingdom; and acceptance by Australia.
H. J. Hull was appointed the first official representative (now administrator) of the new territory. He had been a lieutenant-commander in the Royal Australian Navy and was released for the purpose. Under Commonwealth Cabinet Decision 1573 of 9 September 1958, Hull's appointment was terminated and John William Stokes was appointed on secondment from the Northern Territory police. A media release at the end of October 1958 by the Minister for Territories, Hasluck, commended Hull's three years of service on Cocos.
Stokes served in the position from 31 October 1958 to 30 September 1960. His son's boyhood memories and photos of the Islands have been published. C. I. Buffett MBE from Norfolk Island succeeded him and served from 28 July 1960 to 30 June 1966, and later acted as Administrator back on Cocos and on Norfolk Island. In 1974, Ken Mullen wrote a small book about his time with wife and son from 1964 to 1966 working at the Cable Station on Direction Island.
In the 1970s, the Australian government's dissatisfaction with the Clunies-Ross feudal style of rule of the island increased. In 1978, Australia forced the family to sell the islands for the sum of , using the threat of compulsory acquisition. By agreement, the family retained ownership of Oceania House, their home on the island. In 1983, the Australian government reneged on this agreement and told John Clunies-Ross that he should leave the Cocos. The following year the High Court of Australia ruled that resumption of Oceania House was unlawful, but the Australian government ordered that no government business was to be granted to Clunies-Ross's shipping company, an action that contributed to his bankruptcy. John Clunies-Ross later moved to Perth, Western Australia. However, some members of the Clunies-Ross family still live on the Cocos.
Extensive preparations were undertaken by the government of Australia to prepare the Cocos Malays to vote in their referendum of self-determination. Discussions began in 1982, with an aim of holding the referendum, under United Nations supervision, in mid-1983. Under guidelines developed by the UN Decolonization Committee, residents were to be offered three choices: full independence, free association, or integration with Australia. The last option was preferred by both the islanders and the Australian government. A change in government in Canberra following the March 1983 Australian elections delayed the vote by one year. While the Home Island Council stated a preference for a traditional communal consensus "vote", the UN insisted on a secret ballot. The referendum was held on 6 April 1984, with all 261 eligible islanders participating, including the Clunies-Ross family: 229 voted for integration, 21 for Free Association, nine for independence, and two failed to indicate a preference. In recent years a series of disputes have occurred between the Muslim and the non-Muslim population of the islands.
Indigenous status
Descendants of the Cocos Malays brought to the islands from the Malay Peninsula, the Indonesian archipelago, Southern Africa and New Guinea by Hare and by Clunies-Ross as indentured workers, slaves or convicts are seeking recognition from the Australian government as Indigenous Australians.
Government
The capital of the Territory of Cocos (Keeling) Islands is West Island, while the largest settlement is the village of Bantam, on Home Island.
Governance of the islands is based on the Cocos (Keeling) Islands Act 1955 and depends heavily on the laws of Australia. The islands are administered from Canberra by the Department of Infrastructure, Transport, Regional Development, Communications and the Arts through a non-resident Administrator appointed by the Governor-General. They were previously the responsibility of the Department of Transport and Regional Services (before 2007), the Attorney-General's Department (2007–2013), Department of Infrastructure and Regional Development (2013–2017) and Department of Infrastructure, Regional Development and Cities (2017–2020).
The current Administrator is Natasha Griggs, who was appointed on 5 October 2017 and is also the Administrator of Christmas Island. These two territories comprise the Australian Indian Ocean Territories. The Australian Government provides Commonwealth-level government services through the Christmas Island Administration and the Department of Infrastructure, Transport, Regional Development, Communications and the Arts. As per the Federal Government's Territories Law Reform Act 1992, which came into force on 1 July 1992, Western Australian laws are applied to the Cocos Islands, "so far as they are capable of applying in the Territory"; non-application or partial application of such laws is at the discretion of the federal government. The Act also gives Western Australian courts judicial power over the islands. The Cocos Islands remain constitutionally distinct from Western Australia, however; the power of the state to legislate for the territory is delegated by the federal government. The kinds of services typically provided by a state government elsewhere in Australia are provided by departments of the Western Australian Government, and by contractors, with the costs met by the federal government.
There also exists a unicameral Cocos (Keeling) Islands Shire Council with seven seats. A full term lasts four years, though elections are held every two years, with approximately half the members retiring at each election. The president of the shire is Aindil Minkom. The next local election is scheduled for 21 October 2023, alongside elections on Christmas Island.
Federal politics
Cocos (Keeling) Islands residents who are Australian citizens also vote in federal elections. Cocos (Keeling) Islanders are represented in the House of Representatives by the member for the Division of Lingiari (in the Northern Territory) and in the Senate by Northern Territory senators. At the 2016 federal election, the Labor Party received absolute majorities from Cocos electors in both the House of Representatives and the Senate.
Defence and law enforcement
Defence is the responsibility of the Australian Defence Force. Until 2023, there were no active military installations or defence personnel on the island; the administrator could request the assistance of the Australian Defence Force if required.
In 2016, the Australian Department of Defence announced that the Cocos (Keeling) Islands Airport (West Island) would be upgraded to support the Royal Australian Air Force's P-8 Poseidon maritime patrol aircraft. Work was scheduled to begin in early 2023 and be completed by 2026. The airfield will act as a forward operating base for Australian surveillance and electronic warfare aircraft in the region.
The Royal Australian Navy and Australian Border Force also deploy patrol boats to conduct surveillance and counter-migrant-smuggling patrols in adjacent waters. As of 2023, the Navy's Armidale-class patrol boats are in the process of being replaced by larger vessels.
Civilian law enforcement and community policing is provided by the Australian Federal Police. The normal deployment to the island is one sergeant and one constable. These are augmented by two locally engaged Special Members who have police powers.
Courts
Since 1992, court services have been provided by the Western Australian Department of the Attorney-General under a service delivery arrangement with the Australian Government. Western Australian Court Services provide Magistrates Court, District Court, Supreme Court, Family Court, Children's Court, Coroner's Court and Registry for births, deaths and marriages and change of name services. Magistrates and judges from Western Australia convene a circuit court as required.
Health care
Home Island and West Island have medical clinics providing basic health services, but serious medical conditions and injuries cannot be treated on the island and patients are sent to Perth for treatment, a distance of .
Economy
The population of the islands is approximately 600. There is a small and growing tourist industry focused on water-based or nature activities. In 2016, a beach on Direction Island was named the best beach in Australia by Brad Farmer, an Aquatic and Coastal Ambassador for Tourism Australia and co-author of 101 Best Beaches 2017.
Small local gardens and fishing contribute to the food supply, but most food and most other necessities must be imported from Australia or elsewhere.
The Cocos Islands Cooperative Society Ltd. employs construction workers, stevedores, and lighterage workers. Tourism employs others. The unemployment rate was 6.7% in 2011.
Plastic pollution
A 2019 study led by Jennifer Lavers from the University of Tasmania's Institute of Marine and Antarctic Studies published in the journal Scientific Reports estimated the volume of plastic rubbish on the Islands as around 414 million pieces, weighing 238 tonnes, 93% of which lies buried under the sand. It said that previous surveys which only assessed surface garbage probably "drastically underestimated the scale of debris accumulation". The plastic waste found in the study consisted mostly of single-use items such as bottles, plastic cutlery, bags and drinking straws.
Strategic importance
The Cocos Islands are strategically important because of their proximity to shipping lanes in the Indian and Pacific oceans. The United States and Australia have expressed interest in stationing surveillance drones on the Cocos Islands. Euronews described the plan as Australian support for an increased American presence in Southeast Asia, but expressed concern that it was likely to upset Chinese officials.
James Cogan has written for the World Socialist Web Site that the plan to station surveillance drones at Cocos is one component of former US President Barack Obama's "pivot" towards Asia, facilitating control of the sea lanes and potentially allowing US forces to enforce a blockade against China. After plans to construct airbases were reported on by The Washington Post, Australian defence minister Stephen Smith stated that the Australian government views the "Cocos as being potentially a long-term strategic location, but that is down the track."
Communications and transport
Transport
The Cocos (Keeling) Islands have of highway.
There is one paved airport on the West Island. A tourist bus operates on Home Island.
The only airport is Cocos (Keeling) Islands Airport with a single paved runway. Virgin Australia operates scheduled jet services from Perth Airport via Christmas Island. After 1952, the airport at Cocos Islands was a stop for airline flights between Australia and South Africa, and Qantas and South African Airways stopped there to refuel. The arrival of long-range jet aircraft ended this need in 1967.
The Cocos Islands Cooperative Society operates an interisland ferry, the Cahaya Baru, connecting West, Home and Direction Islands, as well as a bus service on West Island.
There is a lagoon anchorage between Horsburgh and Direction islands for larger vessels, while yachts have a dedicated anchorage area in the southern lee of Direction Island. There are no major seaports on the islands.
Communications
The islands are connected within Australia's telecommunication system (with number range +61 8 9162 xxxx). Public phones are located on both West Island and Home Island. A reasonably reliable GSM mobile phone network (number range +61 406 xxx), run by CiiA (Christmas Island Internet Association), operates on Cocos (Keeling) Islands. SIM cards (full size) and recharge cards can be purchased from the Telecentre on West Island to access this service.
Australia Post provides mail services with the postcode 6799. There are post offices on West Island and Home Island. Standard letters and express post items are sent by air twice weekly, but all other mail is sent by sea and can take up to two months for delivery.
Internet
.cc is the Internet country code top-level domain (ccTLD) for Cocos (Keeling) Islands. It is administered by VeriSign through a subsidiary company eNIC, which promotes it for international registration as "the next .com"; .cc was originally assigned in October 1997 to eNIC Corporation of Seattle WA by the IANA. The Turkish Republic of Northern Cyprus also uses the .cc domain, along with .nc.tr.
Internet access on Cocos is provided by CiiA (Christmas Island Internet Association), and is supplied via satellite ground station on West Island, and distributed via a wireless PPPoE-based WAN on both inhabited islands. Casual internet access is available at the Telecentre on West Island and the Indian Ocean Group Training office on Home Island.
The National Broadband Network announced in early 2012 that it would extend service to Cocos in 2015 via high-speed satellite link.
The Oman Australia Cable, completed in 2022, links Australia and Oman via the Cocos Islands.
Media
The Cocos (Keeling) Islands have access to a range of modern communication services. Digital television stations are broadcast from Western Australia via satellite. A local radio station, 6CKI – Voice of the Cocos (Keeling) Islands, is staffed by community volunteers and provides some local content.
Newspapers
The Cocos Islands Community Resource Centre publishes a fortnightly newsletter called The Atoll. It is available in paper and electronic formats.
Radio
Australian
The West Island receives Hit WA on 100.5 FM. Hit WA is the most-listened-to station in Western Australia and is owned by Southern Cross Austereo.
Television
Australian
The Cocos (Keeling) Islands receive a range of digital channels from Western Australia via satellite, broadcast from the Airport Building on the West Island on the following VHF frequencies: ABC6, SBS7, WAW8, WOW10 and WDW11.
Malaysian
From 2013 onwards, the Cocos Islands received four Malaysian channels via satellite: TV3, ntv7, 8TV and TV9.
Education
There is one school in the archipelago, Cocos Islands District High School, with one campus on West Island (Kindergarten to Year 10) and another on Home Island (Kindergarten to Year 6). CIDHS is part of the Western Australian Department of Education. School instruction is in English on both campuses, with Cocos Malay teacher aides assisting the younger children in Kindergarten, Pre-Preparatory and early Primary with the English curriculum on the Home Island campus. The home language, Cocos Malay, is valued while students learn English.
Culture
Although it is an Australian territory, the culture of the islands has extensive influences from Malaysia and Indonesia due to its predominantly ethnic Malay population.
Heritage listings
The West Island Mosque on Alexander Street is listed on the Australian Commonwealth Heritage List.
Museum
The Pulu Cocos Museum on Home Island was established in 1987, in recognition of the fact that the distinct culture of Home Island needed formal preservation. The site includes displays on local culture and traditions, as well as the early history of the islands and their ownership by the Clunies-Ross family. The museum also includes displays on military and naval history, as well as local botanical and zoological items.
Marine park
Reefs near the islands have healthy coral and are home to several rare species of marine life. The region, along with the Christmas Island reefs, has been described as "Australia's Galapagos Islands".
In the 2021 budget, the Australian Government committed A$39.1 million to create two new marine parks off Christmas Island and the Cocos (Keeling) Islands. The parks will cover up to of Australian waters. After months of consultation with local people, both parks were approved in March 2022, with a total coverage of . The parks will help to protect the spawning of bluefin tuna from illegal international fishers, while local people will be allowed to fish sustainably inshore in order to source food.
Sport
Cricket and rugby league are the two main organised sports on the islands.
The Cocos Islands Golf Club is located on West Island and was established in 1962.
See also
Banknotes of the Cocos (Keeling) Islands
Index of Cocos (Keeling) Islands-related articles
Pearl Islands (Isla de Cocos, Panama; Cocos Island, Costa Rica).
External links
Shire of Cocos (Keeling) Islands homepage
Areas of individual islets
Atoll Research Bulletin vol. 403
Cocos (Keeling) Islands Tourism website
Noel Crusz, The Cocos Islands mutiny, reviewed by Peter Stanley (Principal Historian, Australian War Memorial).
The man who lost a "coral kingdom"
Amateur Radio DX Pedition to Cocos (Keeling) Islands VK9EC
1955 establishments in Asia
1955 establishments in Australia
Archipelagoes of Australia
Archipelagoes of the Indian Ocean
British rule in Singapore
Island countries of the Indian Ocean
Islands of Southeast Asia
States and territories established in 1955
States and territories of Australia
Countries and territories where Malay is an official language
Conspiracy theory
A conspiracy theory is an explanation for an event or situation that asserts the existence of a conspiracy by powerful and sinister groups, often political in motivation, when other explanations are more probable. The term generally has a negative connotation, implying that the appeal of a conspiracy theory is based in prejudice, emotional conviction, or insufficient evidence. A conspiracy theory is distinct from a conspiracy; it refers to a hypothesized conspiracy with specific characteristics, including but not limited to opposition to the mainstream consensus among those who are qualified to evaluate its accuracy, such as scientists or historians.
Conspiracy theories are generally designed to resist falsification and are reinforced by circular reasoning: both evidence against the conspiracy and absence of evidence for it are misinterpreted as evidence of its truth, whereby the conspiracy becomes a matter of faith rather than something that can be proven or disproven. Studies have linked belief in conspiracy theories to distrust of authority and political cynicism. Some researchers suggest that conspiracist ideation—belief in conspiracy theories—may be psychologically harmful or pathological, and that it is correlated with lower analytical thinking, low intelligence, psychological projection, paranoia, and Machiavellianism. Psychologists usually attribute belief in conspiracy theories to a number of psychopathological conditions such as paranoia, schizotypy, narcissism, and insecure attachment, or to a form of cognitive bias called "illusory pattern perception". However, a 2020 review article found that most cognitive scientists view conspiracy theorizing as typically nonpathological, given that unfounded belief in conspiracy is common across cultures both historical and contemporary, and may arise from innate human tendencies towards gossip, group cohesion, and religion.
Historically, conspiracy theories have been closely linked to prejudice, propaganda, witch hunts, wars, and genocides. They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh and Anders Breivik, as well as by governments such as Nazi Germany, the Soviet Union, and Turkey. AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS; QAnon and denialism about the 2020 United States presidential election results led to the January 6 United States Capitol attack; and belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine, at a time when three million people in the country were suffering from hunger. Conspiracy theories are a significant obstacle to improvements in public health, encouraging opposition to measures such as vaccination and water fluoridation, and have been linked to outbreaks of vaccine-preventable diseases. Other effects of conspiracy theories include reduced trust in scientific evidence, radicalization and ideological reinforcement of extremist groups, and negative consequences for the economy.
Conspiracy theories once limited to fringe audiences have become commonplace in mass media, the internet, and social media, emerging as a cultural phenomenon of the late 20th and early 21st centuries. They are widespread around the world and are often commonly believed, some even held by the majority of the population. Interventions to reduce the occurrence of conspiracy beliefs include maintaining an open society and improving the analytical thinking skills of the general public.
Origin and usage
The Oxford English Dictionary defines conspiracy theory as "the theory that an event or phenomenon occurs as a result of a conspiracy between interested parties; spec. a belief that some covert but influential agency (typically political in motivation and oppressive in intent) is responsible for an unexplained event." It cites a 1909 article in The American Historical Review as the earliest usage example, although the term had also appeared in print for several decades before that.
The earliest known usage was by the American author Charles Astor Bristed, in a letter to the editor published in The New York Times on January 11, 1863. He used it to refer to claims that British aristocrats were intentionally weakening the United States during the American Civil War in order to advance their financial interests.
The word "conspiracy" derives from the Latin con- ("with, together") and spirare ("to breathe").
The term is also used as a way to discredit dissenting analyses. Robert Blaskiewicz comments that examples of the term were used as early as the nineteenth century and states that its usage has always been derogatory. According to a study by Andrew McKenzie-McHarg, in contrast, in the nineteenth century the term conspiracy theory simply "suggests a plausible postulate of a conspiracy" and "did not, at this stage, carry any connotations, either negative or positive", though sometimes a postulate so-labeled was criticized.
The term "conspiracy theory" is itself the subject of a conspiracy theory, which posits that the term was popularized by the CIA in order to discredit conspiratorial believers, particularly critics of the Warren Commission, by making them a target of ridicule. In his 2013 book Conspiracy Theory in America, political scientist Lance deHaven-Smith wrote that the term entered everyday language in the United States after 1964, the year in which the Warren Commission published its findings on the Kennedy assassination, with The New York Times running five stories that year using the term.
The idea that the CIA was responsible for popularising the term "conspiracy theory" was analyzed by Michael Butter, a Professor of American Literary and Cultural History at the University of Tübingen. Butter wrote in 2020 that the CIA document, Concerning Criticism of the Warren Report, which proponents of the theory use as evidence of CIA motive and intention, does not contain the phrase "conspiracy theory" in the singular, and only uses the term "conspiracy theories" once, in the sentence: "Conspiracy theories have frequently thrown suspicion on our organisation, for example, by falsely alleging that Lee Harvey Oswald worked for us."
Difference from conspiracy
A conspiracy theory is not simply a conspiracy, which refers to any covert plan involving two or more people. In contrast, the term "conspiracy theory" refers to hypothesized conspiracies that have specific characteristics. For example, conspiracist beliefs invariably oppose the mainstream consensus among those people who are qualified to evaluate their accuracy, such as scientists or historians. Conspiracy theorists see themselves as having privileged access to socially persecuted knowledge or a stigmatized mode of thought that separates them from the masses who believe the official account. Michael Barkun describes a conspiracy theory as a "template imposed upon the world to give the appearance of order to events".
Real conspiracies, even very simple ones, are difficult to conceal and routinely experience unexpected problems. In contrast, conspiracy theories suggest that conspiracies are unrealistically successful and that groups of conspirators, such as bureaucracies, can act with near-perfect competence and secrecy. The causes of events or situations are simplified to exclude complex or interacting factors, as well as the role of chance and unintended consequences. Nearly all observations are explained as having been deliberately planned by the alleged conspirators.
In conspiracy theories, the conspirators are usually claimed to be acting with extreme malice. As described by Robert Brotherton:
Examples
A conspiracy theory may take any matter as its subject, but certain subjects attract greater interest than others. Favored subjects include famous deaths and assassinations, morally dubious government activities, suppressed technologies, and "false flag" terrorism. Among the longest-standing and most widely recognized conspiracy theories are notions concerning the assassination of John F. Kennedy, the 1969 Apollo Moon landings, and the 9/11 terrorist attacks, as well as numerous theories pertaining to alleged plots for world domination by various groups, both real and imaginary.
Popularity
Conspiracy beliefs are widespread around the world. In rural Africa, common targets of conspiracy theorizing include societal elites, enemy tribes, and the Western world, with conspirators often alleged to enact their plans via sorcery or witchcraft; one common belief identifies modern technology as itself being a form of sorcery, created with the goal of harming or controlling the people. In China, one widely published conspiracy theory claims that a number of events including the rise of Hitler, the 1997 Asian financial crisis, and climate change were planned by the Rothschild family, a claim that may have influenced discussions about China's currency policy.
Conspiracy theories once limited to fringe audiences have become commonplace in mass media, contributing to conspiracism emerging as a cultural phenomenon in the United States of the late 20th and early 21st centuries. The general predisposition to believe conspiracy theories cuts across partisan and ideological lines. Conspiratorial thinking is correlated with antigovernmental orientations and a low sense of political efficacy, with conspiracy believers perceiving a governmental threat to individual rights and displaying a deep skepticism that who one votes for really matters.
Conspiracy theories are often commonly believed, some even being held by the majority of the population. A broad cross-section of Americans today gives credence to at least some conspiracy theories. For instance, a study conducted in 2016 found that 10% of Americans think the chemtrail conspiracy theory is "completely true" and 20–30% think it is "somewhat true". This puts "the equivalent of 120 million Americans in the 'chemtrails are real' camp." Belief in conspiracy theories has therefore become a topic of interest for sociologists, psychologists and experts in folklore.
Conspiracy theories are widely present on the Web in the form of blogs and YouTube videos, as well as on social media. Whether the Web has increased the prevalence of conspiracy theories or not is an open research question. The presence and representation of conspiracy theories in search engine results has been monitored and studied, showing significant variation across different topics, and a general absence of reputable, high-quality links in the results.
One conspiracy theory that propagated throughout former US President Barack Obama's time in office claimed that he was born in Kenya instead of Hawaii, where he was actually born. Former governor of Arkansas and political opponent of Obama, Mike Huckabee, made headlines in 2011 when he, among other members of the Republican leadership, continued to question Obama's citizenship status.
Types
A conspiracy theory can be local or international, focused on single events or covering multiple incidents and entire countries, regions and periods of history. According to Russell Muirhead and Nancy Rosenblum, historically, traditional conspiracism has entailed a "theory", but over time, "conspiracy" and "theory" have become decoupled, as modern conspiracism is often without any kind of theory behind it.
Walker's five kinds
Jesse Walker (2013) has identified five kinds of conspiracy theories:
The "Enemy Outside" refers to theories based on figures alleged to be scheming against a community from without.
The "Enemy Within" finds the conspirators lurking inside the nation, indistinguishable from ordinary citizens.
The "Enemy Above" involves powerful people manipulating events for their own gain.
The "Enemy Below" features the lower classes working to overturn the social order.
The "Benevolent Conspiracies" are angelic forces that work behind the scenes to improve the world and help people.
Barkun's three types
Michael Barkun has identified three classifications of conspiracy theory:
Event conspiracy theories. This refers to limited and well-defined events. Examples may include such conspiracy theories as those concerning the Kennedy assassination, 9/11, and the spread of AIDS.
Systemic conspiracy theories. The conspiracy is believed to have broad goals, usually conceived as securing control of a country, a region, or even the entire world. The goals are sweeping, whilst the conspiratorial machinery is generally simple: a single, evil organization implements a plan to infiltrate and subvert existing institutions. This is a common scenario in conspiracy theories that focus on the alleged machinations of Jews, Freemasons, Communism, or the Catholic Church.
Superconspiracy theories. For Barkun, such theories link multiple alleged conspiracies together hierarchically. At the summit is a distant but all-powerful evil force. His cited examples are the ideas of David Icke and Milton William Cooper.
Rothbard: shallow vs. deep
Murray Rothbard argues in favor of a model that contrasts "deep" conspiracy theories to "shallow" ones. According to Rothbard, a "shallow" theorist observes an event and asks Cui bono? ("Who benefits?"), jumping to the conclusion that a posited beneficiary is responsible for covertly influencing events. On the other hand, the "deep" conspiracy theorist begins with a hunch and then seeks out evidence. Rothbard describes this latter activity as a matter of confirming with certain facts one's initial paranoia.
Lack of evidence
Belief in conspiracy theories is generally based not on evidence, but in the faith of the believer. Noam Chomsky contrasts conspiracy theory to institutional analysis which focuses mostly on the public, long-term behavior of publicly known institutions, as recorded in, for example, scholarly documents or mainstream media reports. Conspiracy theory conversely posits the existence of secretive coalitions of individuals and speculates on their alleged activities. Belief in conspiracy theories is associated with biases in reasoning, such as the conjunction fallacy.
Clare Birchall at King's College London describes conspiracy theory as a "form of popular knowledge or interpretation". The use of the word 'knowledge' here suggests ways in which conspiracy theory may be considered in relation to legitimate modes of knowing. The relationship between legitimate and illegitimate knowledge, Birchall claims, is closer than common dismissals of conspiracy theory contend.
Theories involving multiple conspirators that are proven to be correct, such as the Watergate scandal, are usually referred to as investigative journalism or historical analysis rather than conspiracy theory. By contrast, the term "Watergate conspiracy theory" is used to refer to a variety of hypotheses in which those convicted in the conspiracy were in fact the victims of a deeper conspiracy. There are also attempts to analyze the theory of conspiracy theories (conspiracy theory theory) to ensure that the term "conspiracy theory" is used to refer to narratives that have been debunked by experts, rather than as a generalized dismissal.
Rhetoric
Conspiracy theory rhetoric exploits several important cognitive biases, including proportionality bias, attribution bias, and confirmation bias. Their arguments often take the form of asking reasonable questions, but without providing an answer based on strong evidence. Conspiracy theories are most successful when proponents can gather followers from the general public, such as in politics, religion and journalism. These proponents may not necessarily believe the conspiracy theory; instead, they may just use it in an attempt to gain public approval. Conspiratorial claims can act as a successful rhetorical strategy to convince a portion of the public via appeal to emotion.
Conspiracy theories typically justify themselves by focusing on gaps or ambiguities in knowledge, and then arguing that the true explanation for this must be a conspiracy. In contrast, any evidence that directly supports their claims is generally of low quality. For example, conspiracy theories are often dependent on eyewitness testimony, despite its unreliability, while disregarding objective analyses of the evidence.
Conspiracy theories are not able to be falsified and are reinforced by fallacious arguments. In particular, the logical fallacy circular reasoning is used by conspiracy theorists: both evidence against the conspiracy and an absence of evidence for it are re-interpreted as evidence of its truth, whereby the conspiracy becomes a matter of faith rather than something that can be proved or disproved. The epistemic strategy of conspiracy theories has been called "cascade logic": each time new evidence becomes available, a conspiracy theory is able to dismiss it by claiming that even more people must be part of the cover-up. Any information that contradicts the conspiracy theory is suggested to be disinformation by the alleged conspiracy. Similarly, the continued lack of evidence directly supporting conspiracist claims is portrayed as confirming the existence of a conspiracy of silence; the fact that other people have not found or exposed any conspiracy is taken as evidence that those people are part of the plot, rather than considering that it may be because no conspiracy exists. This strategy lets conspiracy theories insulate themselves from neutral analyses of the evidence, and makes them resistant to questioning or correction, which is called "epistemic self-insulation".
Conspiracy theorists often take advantage of false balance in the media. They may claim to be presenting a legitimate alternative viewpoint that deserves equal time to argue its case; for example, this strategy has been used by the Teach the Controversy campaign to promote intelligent design, which often claims that there is a conspiracy of scientists suppressing their views. If they successfully find a platform to present their views in a debate format, they focus on using rhetorical ad hominems and attacking perceived flaws in the mainstream account, while avoiding any discussion of the shortcomings in their own position.
The typical approach of conspiracy theories is to challenge any action or statement from authorities, using even the most tenuous justifications. Responses are then assessed using a double standard, where failing to provide an immediate response to the satisfaction of the conspiracy theorist will be claimed to prove a conspiracy. Any minor errors in the response are heavily emphasized, while deficiencies in the arguments of other proponents are generally excused.
In science, conspiracists may suggest that a scientific theory can be disproven by a single perceived deficiency, even though such events are extremely rare. In addition, both disregarding the claims and attempting to address them will be interpreted as proof of a conspiracy. Other conspiracist arguments may not be scientific; for example, in response to the IPCC Second Assessment Report in 1996, much of the opposition centered on promoting a procedural objection to the report's creation. Specifically, it was claimed that part of the procedure reflected a conspiracy to silence dissenters, which served as motivation for opponents of the report and successfully redirected a significant amount of the public discussion away from the science.
Consequences
Historically, conspiracy theories have been closely linked to prejudice, witch hunts, wars, and genocides. They are often strongly believed by the perpetrators of terrorist attacks, and were used as justification by Timothy McVeigh, Anders Breivik and Brenton Tarrant, as well as by governments such as Nazi Germany and the Soviet Union. AIDS denialism by the government of South Africa, motivated by conspiracy theories, caused an estimated 330,000 deaths from AIDS, while belief in conspiracy theories about genetically modified foods led the government of Zambia to reject food aid during a famine, at a time when 3 million people in the country were suffering from hunger.
Conspiracy theories are a significant obstacle to improvements in public health. People who believe in health-related conspiracy theories are less likely to follow medical advice, and more likely to use alternative medicine instead. Conspiratorial anti-vaccination beliefs, such as conspiracy theories about pharmaceutical companies, can result in reduced vaccination rates and have been linked to outbreaks of vaccine-preventable diseases. Health-related conspiracy theories often inspire resistance to water fluoridation, and contributed to the impact of the Lancet MMR autism fraud.
Conspiracy theories are a fundamental component of a wide range of radicalized and extremist groups, where they may play an important role in reinforcing the ideology and psychology of their members as well as further radicalizing their beliefs. These conspiracy theories often share common themes, even among groups that would otherwise be fundamentally opposed, such as the antisemitic conspiracy theories found among political extremists on both the far right and far left. More generally, belief in conspiracy theories is associated with holding extreme and uncompromising viewpoints, and may help people in maintaining those viewpoints. While conspiracy theories are not always present in extremist groups, and do not always lead to violence when they are, they can make the group more extreme, provide an enemy to direct hatred towards, and isolate members from the rest of society. Conspiracy theories are most likely to inspire violence when they call for urgent action, appeal to prejudices, or demonize and scapegoat enemies.
Conspiracy theorizing in the workplace can also have economic consequences. For example, it leads to lower job satisfaction and lower commitment, resulting in workers being more likely to leave their jobs. Comparisons have also been made with the effects of workplace rumors, which share some characteristics with conspiracy theories and result in both decreased productivity and increased stress. Subsequent effects on managers include reduced profits, reduced trust from employees, and damage to the company's image.
Conspiracy theories can divert attention from important social, political, and scientific issues. In addition, they have been used to discredit scientific evidence to the general public or in a legal context. Conspiratorial strategies also share characteristics with those used by lawyers who are attempting to discredit expert testimony, such as claiming that the experts have ulterior motives in testifying, or attempting to find someone who will provide statements to imply that expert opinion is more divided than it actually is.
It is possible that conspiracy theories may also produce some compensatory benefits to society in certain situations. For example, they may help people identify governmental deceptions, particularly in repressive societies, and encourage government transparency. However, real conspiracies are normally revealed by people working within the system, such as whistleblowers and journalists, and most of the effort spent by conspiracy theorists is inherently misdirected. The most dangerous conspiracy theories are likely to be those that incite violence, scapegoat disadvantaged groups, or spread misinformation about important societal issues.
Interventions
The primary defense against conspiracy theories is to maintain an open society, in which many sources of reliable information are available, and government sources are known to be credible rather than propaganda. Additionally, independent nongovernmental organizations are able to correct misinformation without requiring people to trust the government. Other approaches to reduce the appeal of conspiracy theories in general among the public may be based in the emotional and social nature of conspiratorial beliefs. For example, interventions that promote analytical thinking in the general public are likely to be effective. Another approach is to intervene in ways that decrease negative emotions, and specifically to improve feelings of personal hope and empowerment.
Joseph Pierre has also noted that mistrust in authoritative institutions is the core component underlying many conspiracy theories and that this mistrust creates an epistemic vacuum and makes individuals searching for answers vulnerable to misinformation. Therefore, one possible solution is offering consumers a seat at the table to mend their mistrust in institutions. Regarding the challenges of this approach, Pierre has said,
It has been suggested that directly countering misinformation can be counterproductive. For example, since conspiracy theories can reinterpret disconfirming information as part of their narrative, refuting a claim can result in accidentally reinforcing it. In addition, publishing criticism of conspiracy theories can result in legitimizing them. In this context, possible interventions include carefully selecting which conspiracy theories to refute, requesting additional analyses from independent observers, and introducing cognitive diversity into conspiratorial communities by undermining their poor epistemology. Any legitimization effect might also be reduced by responding to more conspiracy theories rather than fewer.
However, presenting people with factual corrections, or highlighting the logical contradictions in conspiracy theories, has been demonstrated to have a positive effect in many circumstances. For example, this has been studied in the case of informing believers in 9/11 conspiracy theories about statements by actual experts and witnesses. One possibility is that criticism is most likely to backfire if it challenges someone's worldview or identity. This suggests that an effective approach may be to provide criticism while avoiding such challenges.
Psychology
The widespread belief in conspiracy theories has become a topic of interest for sociologists, psychologists, and experts in folklore since at least the 1960s, when a number of conspiracy theories arose regarding the assassination of U.S. President John F. Kennedy. Sociologist Türkay Salim Nefes underlines the political nature of conspiracy theories. He suggests that one of the most important characteristics of these accounts is their attempt to unveil the "real but hidden" power relations in social groups. The term "conspiracism" was popularized by academic Frank P. Mintz in the 1980s. According to Mintz, conspiracism denotes "belief in the primacy of conspiracies in the unfolding of history":
Research suggests that, on a psychological level, conspiracist ideation—belief in conspiracy theories—can be harmful or pathological, and that it is highly correlated with psychological projection, as well as with paranoia, which is predicted by the degree of a person's Machiavellianism. The propensity to believe in conspiracy theories is strongly associated with the mental health disorder of schizotypy. Conspiracy theories once limited to fringe audiences have become commonplace in mass media, emerging as a cultural phenomenon of the late 20th and early 21st centuries. Exposure to conspiracy theories in news media and popular entertainment increases receptiveness to conspiratorial ideas, and has also increased the social acceptability of fringe beliefs.
Conspiracy theories often make use of complicated and detailed arguments, including ones which appear to be analytical or scientific. However, belief in conspiracy theories is primarily driven by emotion. One of the most widely confirmed facts about conspiracy theories is that belief in a single conspiracy theory tends to promote belief in other unrelated conspiracy theories as well. This even applies when the conspiracy theories directly contradict each other, e.g. believing that Osama bin Laden was already dead before his compound in Pakistan was attacked makes the same person more likely to believe that he is still alive. One conclusion from this finding is that the content of a conspiracist belief is less important than the idea of a coverup by the authorities. Analytical thinking aids in reducing belief in conspiracy theories, in part because it emphasizes rational and critical cognition.
Some psychological scientists assert that explanations related to conspiracy theories can be, and often are "internally consistent" with strong beliefs that had previously been held prior to the event that sparked the conspiracy. People who believe in conspiracy theories tend to believe in other unsubstantiated claims – including pseudoscience and paranormal phenomena.
Attractions
Psychological motives for believing in conspiracy theories can be categorized as epistemic, existential, or social. These motives are particularly acute in vulnerable and disadvantaged populations. However, it does not appear that the beliefs help to address these motives; in fact, they may be self-defeating, acting to make the situation worse instead. For example, while conspiratorial beliefs can result from a perceived sense of powerlessness, exposure to conspiracy theories immediately suppresses personal feelings of autonomy and control. Furthermore, they also make people less likely to take actions that could improve their circumstances.
This is additionally supported by the fact that conspiracy theories have a number of disadvantageous attributes. For example, they promote a negative and distrustful view of other people and groups, who are allegedly acting based on antisocial and cynical motivations. This is expected to lead to increased alienation and anomie, and reduced social capital. Similarly, they depict the public as ignorant and powerless against the alleged conspirators, with important aspects of society determined by malevolent forces, a viewpoint which is likely to be disempowering.
Each person may endorse conspiracy theories for one of many different reasons. The most consistently demonstrated characteristics of people who find conspiracy theories appealing are a feeling of alienation, unhappiness or dissatisfaction with their situation, an unconventional worldview, and a feeling of disempowerment. While various aspects of personality affect susceptibility to conspiracy theories, none of the Big Five personality traits are associated with conspiracy beliefs.
The political scientist Michael Barkun, discussing the usage of "conspiracy theory" in contemporary American culture, holds that this term is used for a belief that explains an event as the result of a secret plot by exceptionally powerful and cunning conspirators to achieve a malevolent end. According to Barkun, the appeal of conspiracism is threefold:
First, conspiracy theories claim to explain what institutional analysis cannot. They appear to make sense out of a world that is otherwise confusing.
Second, they do so in an appealingly simple way, by dividing the world sharply between the forces of light, and the forces of darkness. They trace all evil back to a single source, the conspirators and their agents.
Third, conspiracy theories are often presented as special, secret knowledge unknown or unappreciated by others. For conspiracy theorists, the masses are a brainwashed herd, while the conspiracy theorists in the know can congratulate themselves on penetrating the plotters' deceptions.
This third point is supported by research of Roland Imhoff, professor in Social Psychology at the Johannes Gutenberg University Mainz. The research suggests that the smaller the minority believing in a specific theory, the more attractive it is to conspiracy theorists.
Humanistic psychologists argue that even if a posited cabal behind an alleged conspiracy is almost always perceived as hostile, there often remains an element of reassurance for theorists. This is because it is a consolation to imagine that difficulties in human affairs are created by humans, and remain within human control. If a cabal can be implicated, there may be a hope of breaking its power or of joining it. Belief in the power of a cabal is an implicit assertion of human dignity—an unconscious affirmation that man is responsible for his own destiny.
People formulate conspiracy theories to explain, for example, power relations in social groups and the perceived existence of evil forces. Proposed psychological origins of conspiracy theorising include projection; the personal need to explain "a significant event [with] a significant cause;" and the product of various kinds and stages of thought disorder, such as paranoid disposition, ranging in severity to diagnosable mental illnesses. Some people prefer socio-political explanations over the insecurity of encountering random, unpredictable, or otherwise inexplicable events.
According to Berlet and Lyons, "Conspiracism is a particular narrative form of scapegoating that frames demonized enemies as part of a vast insidious plot against the common good, while it valorizes the scapegoater as a hero for sounding the alarm".
Causes
Some psychologists believe that a search for meaning is common in conspiracism. Once cognized, confirmation bias and avoidance of cognitive dissonance may reinforce the belief. In a context where a conspiracy theory has become embedded within a social group, communal reinforcement may also play a part.
Inquiry into possible motives behind the acceptance of irrational conspiracy theories has linked these beliefs to distress resulting from an event that occurred, such as the events of 9/11. Additionally, research done by Manchester Metropolitan University suggests that "delusional ideation" is the condition most likely to indicate an elevated belief in conspiracy theories. Studies also show that an increased attachment to these irrational beliefs leads to a decrease in the desire for civic engagement. Belief in conspiracy theories is correlated with low intelligence, lower analytical thinking, anxiety disorders, paranoia, and authoritarian beliefs.
Professor Quassim Cassam argues that conspiracy theorists hold their beliefs due to flaws in their thinking and more precisely, their intellectual character. He cites philosopher Linda Trinkaus Zagzebski and her book Virtues of the Mind in outlining intellectual virtues (such as humility, caution and carefulness) and intellectual vices (such as gullibility, carelessness and closed-mindedness). Whereas intellectual virtues help in reaching sound examination, intellectual vices "impede effective and responsible inquiry", meaning that those who are prone to believing in conspiracy theories possess certain vices while lacking necessary virtues.
Some researchers have suggested that conspiracy theories could be partially caused by psychological mechanisms the human brain possesses for detecting dangerous coalitions. Such a mechanism could have been useful in the small-scale environment in which humanity evolved, but is mismatched to a modern, complex society and thus "misfires", perceiving conspiracies where none exist.
Projection
Some historians have argued that psychological projection is prevalent amongst conspiracy theorists. This projection, according to the argument, is manifested in the form of attribution of undesirable characteristics of the self to the conspirators. Historian Richard Hofstadter stated that:
Hofstadter also noted that "sexual freedom" is a vice frequently attributed to the conspiracist's target group, noting that "very often the fantasies of true believers reveal strong sadomasochistic outlets, vividly expressed, for example, in the delight of anti-Masons with the cruelty of Masonic punishments."
Physiology
Research on conspiracy theories by neuroscientists and cognitive linguists indicates that people who believe conspiracy theories have difficulty rethinking situations, because exposure to those theories creates neural pathways that are more rigid and less subject to change. Initial susceptibility to believing the lies and dehumanizing language and metaphors of these theories leads to the acceptance of larger and more extensive theories because the hardened neural pathways are already present. Repetition of the "facts" of conspiracy theories and their connected lies simply reinforces the rigidity of those pathways. Thus, conspiracy theories and dehumanizing lies are not mere hyperbole; they can actually change the way people think.
According to semiotician and linguistic anthropologist Marcel Danesi:
Sociology
In addition to psychological factors such as conspiracist ideation, sociological factors also help account for who believes in which conspiracy theories. Such theories tend to get more traction among election losers in society, for example, and the emphasis of conspiracy theories by elites and leaders tends to increase belief among followers who have higher levels of conspiracy thinking.
Christopher Hitchens described conspiracy theories as the "exhaust fumes of democracy": the unavoidable result of a large amount of information circulating among a large number of people.
Conspiracy theories may be emotionally satisfying, by assigning blame to a group to which the theorist does not belong and so absolving the theorist of moral or political responsibility in society. Likewise, Roger Cohen writing for The New York Times has said that, "captive minds; ... resort to conspiracy theory because it is the ultimate refuge of the powerless. If you cannot change your own life, it must be that some greater force controls the world."
Sociological historian Holger Herwig found in studying German explanations for the origins of World War I, "Those events that are most important are hardest to understand because they attract the greatest attention from myth makers and charlatans."
Justin Fox of Time magazine argues that Wall Street traders are among the most conspiracy-minded group of people, and ascribes this to the reality of some financial market conspiracies, and to the ability of conspiracy theories to provide necessary orientation in the market's day-to-day movements.
Influence of critical theory
Bruno Latour notes that the language and intellectual tactics of critical theory have been appropriated by those he describes as conspiracy theorists, including climate-change denialists and the 9/11 Truth movement: "Maybe I am taking conspiracy theories too seriously, but I am worried to detect, in those mad mixtures of knee-jerk disbelief, punctilious demands for proofs, and free use of powerful explanation from the social neverland, many of the weapons of social critique."
Fusion paranoia
Michael Kelly, a journalist for The Washington Post and critic of anti-war movements on both the left and right, coined the term "fusion paranoia" to refer to a political convergence of left-wing and right-wing activists around anti-war issues and civil liberties, which he said were motivated by a shared belief in conspiracism or shared anti-government views.
Barkun has adopted this term to refer to how the synthesis of paranoid conspiracy theories, which were once limited to American fringe audiences, has given them mass appeal and enabled them to become commonplace in mass media, thereby inaugurating an unrivaled period of people actively preparing for apocalyptic or millenarian scenarios in the United States of the late 20th and early 21st centuries. Barkun notes the occurrence of lone-wolf conflicts with law enforcement acting as proxy for threatening the established political powers.
Viability
As evidence that undermines an alleged conspiracy grows, the number of alleged conspirators also grows in the minds of conspiracy theorists. This is because of an assumption that the alleged conspirators often have competing interests. For example, if Republican President George W. Bush is allegedly responsible for the 9/11 terrorist attacks, and the Democratic party did not pursue exposing this alleged plot, that must mean that both the Democratic and Republican parties are conspirators in the alleged plot. It also assumes that the alleged conspirators are so competent that they can fool the entire world, but so incompetent that even the unskilled conspiracy theorists can find mistakes they make that prove the fraud. At some point, the number of alleged conspirators, combined with the contradictions within the alleged conspirators' interests and competence, becomes so great that maintaining the theory becomes an obvious exercise in absurdity.
The physicist David Robert Grimes estimated the time it would take for a conspiracy to be exposed based on the number of people involved. His calculations used data from the PRISM surveillance program, the Tuskegee syphilis experiment, and the FBI forensic scandal. Grimes estimated that:
A Moon landing hoax would require the involvement of 411,000 people and would be exposed within 3.68 years;
Climate-change fraud would require a minimum of 29,083 people (published climate scientists only) and would be exposed within 26.77 years, or up to 405,000 people, in which case it would be exposed within 3.70 years;
A vaccination conspiracy would require a minimum of 22,000 people (without drug companies) and would be exposed within at least 3.15 years and at most 34.78 years depending on the number involved;
A conspiracy to suppress a cure for cancer would require 714,000 people and would be exposed within 3.17 years.
Grimes's study did not consider exposure by sources outside of the alleged conspiracy. It only considered exposure from within the alleged conspiracy through whistleblowers or through incompetence.
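The short Python sketch below illustrates the general kind of calculation involved. It is a minimal, hypothetical model that assumes a fixed per-conspirator annual leak probability and a constant number of conspirators; it is not Grimes's published formulation (which used failure rates fitted to real exposed conspiracies and allowed the conspirator population to change over time), so its output will not reproduce the figures quoted above.

```python
import math

def years_until_exposure(n_conspirators, annual_leak_prob=4e-6, threshold=0.95):
    """Estimate the years until the probability of at least one leak exceeds `threshold`.

    Assumes each conspirator independently leaks with the same fixed annual
    probability (the default value is purely illustrative, not a fitted figure)
    and that the number of conspirators never changes.
    """
    # Probability that no conspirator leaks in a single year.
    p_silent_per_year = (1 - annual_leak_prob) ** n_conspirators
    # Solve p_silent_per_year ** t <= 1 - threshold for t (years).
    return math.log(1 - threshold) / math.log(p_silent_per_year)

if __name__ == "__main__":
    # Hypothetical conspirator counts, echoing the scenarios discussed above.
    for label, n in [("Moon landing hoax", 411_000),
                     ("Climate-change fraud (scientists only)", 29_083),
                     ("Suppressed cancer cure", 714_000)]:
        print(f"{label}: exposed within ~{years_until_exposure(n):.1f} years")
```

The qualitative point survives even in this simplified model: as the number of people who must stay silent grows, the expected time before someone leaks shrinks rapidly.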
Terminology
The term "truth seeker" is adopted by some conspiracy theorists when describing themselves on social media.
Conspiracy theorists are often referred to derogatorily as "cookers" in Australia. The term is also loosely associated with the far right.
Politics
The philosopher Karl Popper described the central problem of conspiracy theories as a form of fundamental attribution error, where every event is generally perceived as being intentional and planned, greatly underestimating the effects of randomness and unintended consequences. In his book The Open Society and Its Enemies, he used the term "the conspiracy theory of society" to denote the idea that social phenomena such as "war, unemployment, poverty, shortages ... [are] the result of direct design by some powerful individuals and groups." Popper argued that totalitarianism was founded on conspiracy theories which drew on imaginary plots which were driven by paranoid scenarios predicated on tribalism, chauvinism, or racism. He also noted that conspirators very rarely achieved their goal.
Historically, real conspiracies have usually had little effect on history and have had unforeseen consequences for the conspirators, in contrast to conspiracy theories which often posit grand, sinister organizations, or world-changing events, the evidence for which has been erased or obscured. As described by Bruce Cumings, history is instead "moved by the broad forces and large structures of human collectivities".
Middle East
Conspiracy theories are a prevalent feature of Arab culture and politics. Variants include conspiracies involving colonialism, Zionism, superpowers, oil, and the war on terrorism, which may be referred to as a war against Islam. For example, The Protocols of the Elders of Zion, an infamous hoax document purporting to be a Jewish plan for world domination, is commonly read and promoted in the Muslim world. Roger Cohen has suggested that the popularity of conspiracy theories in the Arab world is "the ultimate refuge of the powerless". Al-Mumin Said has noted the danger of such theories, for they "keep us not only from the truth but also from confronting our faults and problems".
Osama bin Laden and Ayman al-Zawahiri have used conspiracy theories about the United States to gain support for al-Qaeda in the Arab world, and as rhetoric to distinguish themselves from similar groups, although they may not have believed the conspiratorial claims themselves.
United States
The historian Richard Hofstadter addressed the role of paranoia and conspiracism throughout U.S. history in his 1964 essay "The Paranoid Style in American Politics". Bernard Bailyn's classic The Ideological Origins of the American Revolution (1967) notes that a similar phenomenon could be found in North America during the time preceding the American Revolution. The term "conspiracism" labels both a general attitude and the type of conspiracy theories that are global and historical in scope.
Harry G. West and others have noted that while conspiracy theorists may often be dismissed as a fringe minority, certain evidence suggests that a broad cross-section of the U.S. population maintains a belief in conspiracy theories. West also compares those theories to hypernationalism and religious fundamentalism.
Theologian Robert Jewett and philosopher John Shelton Lawrence attribute the enduring popularity of conspiracy theories in the U.S. to the Cold War, McCarthyism, and counterculture rejection of authority. They state that among both the left-wing and right-wing, there remains a willingness to use real events, such as Soviet plots, inconsistencies in the Warren Report, and the 9/11 attacks, to support the existence of unverified and ongoing large-scale conspiracies.
In his studies of "American political demonology", historian Michael Paul Rogin likewise analyzed this paranoid style of politics throughout American history. Conspiracy theories frequently identify an imaginary subversive group that is supposedly attacking the nation, a threat said to require the government and allied forces to engage in harsh extra-legal repression of those subversives. Rogin cites examples ranging from the Red Scare of 1919 to McCarthy's anti-communist campaign in the 1950s and, more recently, fears of immigrant hordes invading the US. Unlike Hofstadter, Rogin saw these "countersubversive" fears as frequently coming from those in power and dominant groups, instead of from the dispossessed. Unlike Robert Jewett, Rogin blamed not the counterculture but America's dominant culture of liberal individualism, and the fears it stimulated, for the periodic eruption of irrational conspiracy theories.
The Watergate scandal has also been used to bestow legitimacy to other conspiracy theories, with Richard Nixon himself commenting that it served as a "Rorschach ink blot" which invited others to fill in the underlying pattern.
Historian Kathryn S. Olmsted cites three reasons why Americans are prone to believing in government conspiracy theories:
Genuine government overreach and secrecy during the Cold War, such as Watergate, the Tuskegee syphilis experiment, Project MKUltra, and the CIA's assassination attempts on Fidel Castro in collaboration with mobsters.
Precedent set by official government-sanctioned conspiracy theories for propaganda, such as claims of German infiltration of the U.S. during World War II or the debunked claim that Saddam Hussein played a role in the 9/11 attacks.
Distrust fostered by the government's spying on and harassment of dissenters, such as the Sedition Act of 1918, COINTELPRO, and as part of various Red Scares.
Alex Jones invoked numerous conspiracy theories to convince his supporters to endorse Ron Paul over Mitt Romney in the 2012 Republican Party presidential primaries and Donald Trump over Hillary Clinton in the 2016 United States presidential election. Into the 2020s, the QAnon conspiracy theory alleges that Trump is fighting against a deep-state cabal of child sex-abusing and Satan-worshipping Democrats.
See also
References
Informational notes
Citations
Further reading
Burnett, Thom. Conspiracy Encyclopedia: The Encyclopedia of Conspiracy Theories
Butter, Michael, and Peter Knight. "Bridging the great divide: conspiracy theory research for the 21st century." Diogenes (2016): 0392192116669289. online
De Graaf, Beatrice and Zwierlein, Cornel (eds.) Security and Conspiracy in History, 16th to 21st Century. Historical Social Research 38, Special Issue, 2013
Fleming, Chris and Emma A. Jane. Modern Conspiracy: The Importance of Being Paranoid. New York and London: Bloomsbury, 2014. .
Goertzel, Ted. "Belief in conspiracy theories." Political Psychology (1994): 731–742. online
Harris, Lee. "The Trouble with Conspiracy Theories". The American, 12 January 2013.
Hofstadter, Richard. The paranoid style in American politics (1964). online
Oliver, J. Eric, and Thomas J. Wood. "Conspiracy theories and the paranoid style(s) of mass opinion." American Journal of Political Science 58.4 (2014): 952–966. online
Slosson, W. "The 'Conspiracy' Superstition". The Unpopular Review, Vol. VII, N°. 14, 1917.
Sunstein, Cass R., and Adrian Vermeule. "Conspiracy theories: Causes and cures." Journal of Political Philosophy 17.2 (2009): 202–227. online
Uscinski, Joseph E. and Joseph M. Parent, American Conspiracy Theories (2014) excerpt
Uscinski, Joseph E. "The 5 Most Dangerous Conspiracy Theories of 2016" POLITICO Magazine (Aug 22, 2016)
Wood, Gordon S. "Conspiracy and the paranoid style: causality and deceit in the eighteenth century." William and Mary Quarterly (1982): 402–441. in jstor
External links
Conspiracy Theories, Internet Encyclopedia of Philosophy
Barriers to critical thinking
Fringe theory
Pejorative terms |
5551 | https://en.wikipedia.org/wiki/Costa%20Rica | Costa Rica | Costa Rica (literally "Rich Coast"), officially the Republic of Costa Rica, is a country in the Central American region of North America. Costa Rica is bordered by Nicaragua to the north, the Caribbean Sea to the northeast, Panama to the southeast, and the Pacific Ocean to the southwest, as well as a maritime border with Ecuador to the south of Cocos Island. It has a population of around five million in a land area of about 51,100 square kilometres. An estimated 333,980 people live in the capital and largest city, San José, with around two million people in the surrounding metropolitan area.
The sovereign state is a unitary presidential constitutional republic. It has a long-standing and stable democracy and a highly educated workforce. The country spends roughly 6.9% of its budget (2016) on education, compared to a global average of 4.4%. Its economy, once heavily dependent on agriculture, has diversified to include sectors such as finance, corporate services for foreign companies, pharmaceuticals, and ecotourism. Many foreign manufacturing and services companies operate in Costa Rica's Free Trade Zones (FTZ) where they benefit from investment and tax incentives.
Costa Rica was inhabited by indigenous peoples before coming under Spanish rule in the 16th century. It remained a peripheral colony of the empire until independence as part of the First Mexican Empire, followed by membership in the Federal Republic of Central America, from which it formally declared independence in 1847. Following the brief Costa Rican Civil War in 1948, it permanently abolished its army in 1949, becoming one of only a few sovereign nations without a standing army.
The country has consistently performed favorably in the Human Development Index (HDI), placing 58th in the world and fifth in Latin America. It has also been cited by the United Nations Development Programme (UNDP) as having attained much higher human development than other countries at the same income levels, with a better record on human development and inequality than the median of the region. It also performs well in comparisons of democratic governance, press freedom, subjective happiness and sustainable wellbeing. It has the 8th freest press according to the Press Freedom Index, is the 35th most democratic country according to the Freedom in the World index, and is the 23rd happiest country in the 2023 World Happiness Report.
History
Pre-Columbian period
Historians have classified the indigenous people of Costa Rica as belonging to the Intermediate Area, where the peripheries of the Mesoamerican and Andean native cultures overlapped. More recently, pre-Columbian Costa Rica has also been described as part of the Isthmo-Colombian Area.
Stone tools, the oldest evidence of human occupation in Costa Rica, are associated with the arrival of various groups of hunter-gatherers about 10,000 to 7,000 years BCE in the Turrialba Valley. The presence of Clovis culture type spearheads and arrows from South America opens the possibility that, in this area, two different cultures coexisted.
Agriculture became evident in the populations that lived in Costa Rica about 5,000 years ago. They mainly grew tubers and roots. By the first and second millennia BCE there were already settled farming communities. These were small and scattered, although the timing of the transition from hunting and gathering to agriculture as the main livelihood in the territory is still unknown.
The earliest use of pottery appears around 2,000 to 3,000 BCE. Shards of pots, cylindrical vases, platters, gourds, and other forms of vases decorated with grooves, prints, and some modeled after animals have been found.
The influence of indigenous peoples on modern Costa Rican culture has been relatively small compared to other nations since the country lacked a strong native civilization, to begin with. Most of the native population was absorbed into the Spanish-speaking colonial society through inter-marriage, except for some small remnants, the most significant of which are the Bribri and Boruca tribes who still inhabit the mountains of the Cordillera de Talamanca, in the southeastern part of Costa Rica, near the frontier with Panama.
Spanish colonization
The name, meaning "rich coast" in the Spanish language, was in some accounts first applied by Christopher Columbus, who sailed to the eastern shores of Costa Rica during his final voyage in 1502, and reported vast quantities of gold jewelry worn by natives. The name may also have come from conquistador Gil González Dávila, who landed on the west coast in 1522, encountered natives, and obtained some of their gold, sometimes by violent theft and sometimes as gifts from local leaders.
During most of the colonial period, Costa Rica was the southernmost province of the Captaincy General of Guatemala, nominally part of the Viceroyalty of New Spain. In practice, the captaincy general was a largely autonomous entity within the Spanish Empire. Costa Rica's distance from the capital of the captaincy in Guatemala, its legal prohibition under mercantilist Spanish law from trade with its southern neighbor Panama, then part of the Viceroyalty of New Granada (i.e. Colombia), and lack of resources such as gold and silver, made Costa Rica into a poor, isolated, and sparsely-inhabited region within the Spanish Empire. Costa Rica was described as "the poorest and most miserable Spanish colony in all America" by a Spanish governor in 1719.
Another important factor behind Costa Rica's poverty was the lack of a significant indigenous population available for encomienda (forced labor), which meant most of the Costa Rican settlers had to work on their own land, preventing the establishment of large haciendas (plantations). For all these reasons, Costa Rica was, by and large, unappreciated and overlooked by the Spanish Crown and left to develop on its own. The circumstances during this period are believed to have led to many of the idiosyncrasies for which Costa Rica has become known, while concomitantly setting the stage for Costa Rica's development as a more egalitarian society than the rest of its neighbors. Costa Rica became a "rural democracy" with no oppressed mestizo or indigenous class. It was not long before Spanish settlers turned to the hills, where they found rich volcanic soil and a milder climate than that of the lowlands.
Independence
Like the rest of Central America, Costa Rica never fought for independence from Spain. On 15 September 1821, after the final Spanish defeat in the Mexican War of Independence (1810–1821), the authorities in Guatemala declared the independence of all of Central America. That date is still celebrated as Independence Day in Costa Rica even though, technically, under the Spanish Constitution of 1812 that had been readopted in 1820, Nicaragua and Costa Rica had become an autonomous province with its capital in León.
Upon independence, Costa Rican authorities faced the issue of officially deciding the future of the country. Two factions formed: the Imperialists, represented by the cities of Cartago and Heredia, which favored joining the Mexican Empire, and the Republicans, represented by the cities of San José and Alajuela, which defended full independence. Because of the lack of agreement on these two possible outcomes, the first civil war of Costa Rica occurred. The Battle of Ochomogo took place on the Hill of Ochomogo, located in the Central Valley, in 1823. The conflict was won by the Republicans and, as a consequence, the city of Cartago lost its status as the capital, which moved to San José.
In 1838, long after the Federal Republic of Central America ceased to function in practice, Costa Rica formally withdrew and proclaimed itself sovereign. The considerable distance and poor communication routes between Guatemala City and the Central Plateau, where most of the Costa Rican population lived then and still lives now, meant the local population had little allegiance to the federal government in Guatemala. Since colonial times, Costa Rica has been reluctant to become economically tied with the rest of Central America. Even today, despite most of its neighbors' efforts to increase regional integration, Costa Rica has remained more independent.
Until 1849, when it became part of Panama, Chiriquí was part of Costa Rica. Costa Rican pride over the loss of this eastern (or southern) territory was assuaged by the acquisition of Guanacaste, in the north.
Economic growth in the 19th century
Coffee was first planted in Costa Rica in 1808, and by the 1820s, it surpassed tobacco, sugar, and cacao as a primary export. Coffee production remained Costa Rica's principal source of wealth well into the 20th century, creating a wealthy class of growers, the so-called Coffee Barons. The revenue helped to modernize the country.
Most of the coffee exported was grown around the main centers of population in the Central Plateau and then transported by oxcart to the Pacific port of Puntarenas after the main road was built in 1846. By the mid-1850s the main market for coffee was Britain. It soon became a high priority to develop an effective transportation route from the Central Plateau to the Atlantic Ocean. For this purpose, in the 1870s, the Costa Rican government contracted with U.S. businessman Minor C. Keith to build a railroad from San José to the Caribbean port of Limón. Despite enormous difficulties with construction, disease, and financing, the railroad was completed in 1890.
Most Afro-Costa Ricans descend from Jamaican immigrants who worked in the construction of that railway and now make up about 3% of Costa Rica's population. U.S. convicts, Italians, and Chinese immigrants also participated in the construction project. In exchange for completing the railroad, the Costa Rican government granted Keith large tracts of land and a lease on the train route, which he used to produce bananas and export them to the United States. As a result, bananas came to rival coffee as the principal Costa Rican export, while foreign-owned corporations (including the United Fruit Company later) began to hold a major role in the national economy and eventually became a symbol of the exploitative export economy. The major labor dispute between the peasants and the United Fruit Company (The Great Banana Strike) was a major event in the country's history and was an important step that would eventually lead to the formation of effective trade unions in Costa Rica, as the company was required to sign a collective agreement with its workers in 1938.
20th century
Historically, Costa Rica has generally enjoyed greater peace and more consistent political stability than many of its fellow Latin American nations. Since the late 19th century, however, Costa Rica has experienced two significant periods of violence. In 1917–1919, General Federico Tinoco Granados ruled as a military dictator until he was overthrown and forced into exile. The unpopularity of Tinoco's regime led, after he was overthrown, to a considerable decline in the size, wealth, and political influence of the Costa Rican military. In 1948, José Figueres Ferrer led an armed uprising in the wake of a disputed presidential election between Rafael Ángel Calderón Guardia (who had been president between 1940 and 1944) and Otilio Ulate Blanco. With more than 2,000 dead, the resulting 44-day Costa Rican Civil War was the bloodiest event in Costa Rica during the 20th century.
The victorious rebels formed a government junta that abolished the military altogether and oversaw the drafting of a new constitution by a democratically elected assembly. Having enacted these reforms, the junta transferred power to Ulate on 8 November 1949. After the civil war, Figueres became a national hero, winning the country's first democratic election under the new constitution in 1953. Since then, Costa Rica has held 15 additional presidential elections, the latest in 2022. With uninterrupted democracy dating back to at least 1948, the country is the region's most stable.
Geography
Costa Rica borders the Caribbean Sea to the east, and the Pacific Ocean to the west. Costa Rica also borders Nicaragua to the north and Panama to the south.
The highest point in the country is Cerro Chirripó. The highest volcano is the Irazú Volcano, and the largest lake is Lake Arenal. There are 14 known volcanoes in Costa Rica, and six of them have been active in the last 75 years.
Climate
Costa Rica experiences a tropical climate year-round. There are two seasons: the dry season runs from December to April, and the rainy season from May to November.
Flora and fauna
Costa Rica has a rich variety of plants and wildlife.
One national park, the Corcovado National Park, is internationally renowned among ecologists for its biodiversity (including big cats and tapirs) and is where visitors can expect to see an abundance of wildlife. Corcovado is the one park in Costa Rica where all four Costa Rican monkey species can be found. These include the white-headed capuchin, the mantled howler, the endangered Geoffroy's spider monkey, and the Central American squirrel monkey, found only on the Pacific coast of Costa Rica and a small part of Panama, and considered endangered until 2008, when its status was upgraded to vulnerable. Deforestation, illegal pet-trading, and hunting are the main reasons for its threatened status. Costa Rica is the first tropical country to have stopped and reversed deforestation; it has successfully restored its forestry and developed an ecosystem service to teach biologists and ecologists about its environmental protection measures. The country had a 2018 Forest Landscape Integrity Index mean score of 4.65/10, ranking it 118th globally out of 172 countries.
Economy
The country has been considered economically stable with moderate inflation, estimated at 2.6% in 2017, and moderately high growth in GDP, which increased from US$41.3 billion in 2011 to US$52.6 billion in 2015. The estimated GDP for 2018 is US$59.0 billion and the estimated GDP per capita (purchasing power parity) is Intl$17,559.1. The growing debt and budget deficit are the country's primary concerns. A 2017 study by the Organisation for Economic Co-operation and Development warned that reducing the foreign debt must be a very high priority for the government. Other fiscal reforms were also recommended to moderate the budget deficit.
Many foreign companies (manufacturing and services) operate in Costa Rica's Free Trade Zones (FTZ) where they benefit from investment and tax incentives. Well over half of that type of investment has come from the U.S. According to the government, the zones supported over 82,000 direct jobs and 43,000 indirect jobs in 2015. Companies with facilities in the America Free Zone in Heredia, for example, include Intel, Dell, HP, Bayer, Bosch, DHL, IBM and Okay Industries.
Of the GDP, 5.5% is generated by agriculture, 18.6% by industry, and 75.9% by services (2016). Agriculture employs 12.9% of the labor force, industry 18.57%, and services 69.02% (2016). For the region, its unemployment level is moderately high (8.2% in 2016, according to the IMF). Although 20.5% of the population lives below the poverty line (2017), Costa Rica has one of the highest standards of living in Central America.
High-quality health care is provided by the government at a low cost to the users. Housing is also very affordable. Costa Rica is recognized in Latin America for the quality of its educational system. Because of its educational system, Costa Rica has one of the highest literacy rates in Latin America, 97%. General Basic Education is mandatory and provided without cost to the user. A US government report confirms that the country has "historically placed a high priority on education and the creation of a skilled workforce" but notes that the high school drop-out rate is increasing. The report also suggests that Costa Rica would benefit from more courses in languages such as English, Portuguese, Mandarin, and French, and in science, technology, engineering, and math (STEM).
Trade and foreign investment
Costa Rica has free trade agreements with many countries, including the US. There are no significant trade barriers that would affect imports, and the country has been lowering its tariffs in line with other Central American countries. The country's Free Trade Zones provide incentives for manufacturing and service industries to operate in Costa Rica. In 2015, the zones supported over 82 thousand direct jobs and 43 thousand indirect jobs, and average wages in the FTZ were 1.8 times greater than the average for private enterprise work in the rest of the country. In 2016, Amazon.com, for example, had some 3,500 employees in Costa Rica and planned to increase that by 1,500 in 2017, making it an important employer.
The central location provides access to American markets and direct ocean access to Europe and Asia. The most important exports in 2015 (in order of dollar value) were medical instruments, bananas, tropical fruits, integrated circuits and orthopedic appliances. Total imports in that year were US$15 billion. The most significant products imported in 2015 (in order of dollar value) were refined petroleum, automobiles, packaged medications, broadcasting equipment, and computers. The total exports were US$12.6 billion for a trade deficit of US$2.39 billion in 2015.
Pharmaceuticals, financial outsourcing, software development, and ecotourism have become the prime industries in Costa Rica's economy. High levels of education among its residents make the country an attractive investing location. Since 1999, tourism has earned more foreign exchange than the combined exports of the country's three main cash crops: bananas and pineapples especially, but also other crops, including coffee. Coffee production played a key role in Costa Rica's history and in 2006 was the third cash crop export. As a small country, Costa Rica now provides under 1% of the world's coffee production. In 2015, the value of coffee exports was US$305.9 million, a small part of the total agricultural exports of US$2.7 billion. Coffee production increased by 13.7% in 2015–16, declined by 17.5% in 2016–17, but was expected to increase by about 15% in the subsequent year.
Costa Rica has developed a system of payments for environmental services. Similarly, Costa Rica has a tax on water pollution to penalize businesses and homeowners that dump sewage, agricultural chemicals, and other pollutants into waterways. In May 2007, the Costa Rican government announced its intentions to become 100% carbon neutral by 2021. By 2015, 93 percent of the country's electricity came from renewable sources. In 2019, the country produced 99.62% of its electricity from renewable sources and ran completely on renewable sources for 300 continuous days.
In 1996, the Forest Law was enacted to provide direct financial incentives to landowners for the provision of environmental services. This helped reorient the forestry sector away from commercial timber production and the resulting deforestation and helped create awareness of the services it provides for the economy and society (i.e., carbon fixation, hydrological services such as producing fresh drinking water, biodiversity protection, and provision of scenic beauty).
A 2016 U.S. government report identifies other challenges facing Costa Rica as it works to expand its economy by working with companies from the US (and probably from other countries). The major concerns identified were as follows:
The ports, roads, railways, and water delivery systems would benefit from major upgrading, a concern voiced by other reports too. Attempts by China to invest in upgrading such aspects were "stalled by bureaucratic and legal concerns".
The bureaucracy is "often slow and cumbersome".
Tourism
Costa Rica is the most-visited nation in the Central American region, with 2.9 million foreign visitors in 2016, up 10% from 2015. In 2015, the tourism sector was responsible for 5.8% of the country's GDP, or $3.4 billion. In 2016, the highest number of tourists came from the United States, with 1,000,000 visitors, followed by Europe with 434,884 arrivals. According to Costa Rica Vacations, once tourists arrive in the country, 22% go to Tamarindo, 18% go to Arenal, 17% pass through Liberia (where the Daniel Oduber Quirós International Airport is located), 16% go to San José, the country's capital (passing through Juan Santamaría International Airport), while 18% choose Manuel Antonio and 7% Monteverde.
By 2004, tourism was generating more revenue and foreign exchange than bananas and coffee combined. In 2016, the World Travel & Tourism Council's estimates indicated a direct contribution to the GDP of 5.1% and 110,000 direct jobs in Costa Rica; the total number of jobs indirectly supported by tourism was 271,000.
A pioneer of ecotourism, Costa Rica draws many tourists to its extensive series of national parks and other protected areas. The trail Camino de Costa Rica supports this by allowing travelers to walk across the country from the Atlantic to the Pacific coast. In the 2011 Travel and Tourism Competitiveness Index, Costa Rica ranked 44th in the world and second among Latin American countries after Mexico. By the time of the 2017 report, the country had reached 38th place, slightly behind Panama. The Ethical Traveler group included Costa Rica on its 2017 list of The World's Ten Best Ethical Destinations; the country scored highest in environmental protection among the winners. Costa Rica began reversing deforestation in the 1990s and is moving towards using only renewable energy.
Government and politics
Administrative divisions
Costa Rica is composed of seven provinces, which in turn are divided into 82 cantons (cantón, plural cantones), each of which is directed by a mayor. Mayors are chosen democratically every four years by each canton. There are no provincial legislatures. The cantons are further divided into 488 districts (distritos).
Foreign relations
Costa Rica is an active member of the United Nations and the Organization of American States. The Inter-American Court of Human Rights and the United Nations University of Peace are based in Costa Rica. It is also a member of many other international organizations related to human rights and democracy, such as the Community of Democracies. The main foreign policy objective of Costa Rica is to foster human rights and sustainable development as a way to secure stability and growth.
Costa Rica is a member of the International Criminal Court, without a Bilateral Immunity Agreement of protection for the United States military (as covered under Article 98). Costa Rica is an observer of the Organisation internationale de la Francophonie.
On 10 September 1961, some months after Fidel Castro declared Cuba a socialist state, Costa Rican President Mario Echandi ended diplomatic relations with Cuba through Executive Decree Number 2. This freeze lasted 47 years until President Óscar Arias Sánchez re-established normal relations on 18 March 2009, saying, "If we have been able to turn the page with regimes as profoundly different to our reality as occurred with the USSR or, more recently, with the Republic of China, how would we not do it with a country that is geographically and culturally much nearer to Costa Rica?" Arias announced that both countries would exchange ambassadors.
Costa Rica has a long-term disagreement with Nicaragua over the San Juan River, which defines the border between the two countries, and Costa Rica's rights of navigation on the river. In 2010, there was also a dispute around Isla Calero, and the effects of Nicaraguan dredging of the river in that area.
On 14 July 2009, the International Court of Justice in The Hague upheld Costa Rica's navigation rights on the river for commercial purposes, including subsistence fishing on its side of the river. An 1858 treaty extended navigation rights to Costa Rica, but Nicaragua denied that passenger travel and fishing were part of the deal; the court ruled that Costa Ricans on the river were not required to have Nicaraguan tourist cards or visas, as Nicaragua had argued, but, in a nod to the Nicaraguans, ruled that Costa Rican boats and passengers must stop at the first and last Nicaraguan port along their route. They must also have an identity document or passport. Nicaragua can also impose timetables on Costa Rican traffic. Nicaragua may require Costa Rican boats to display the flag of Nicaragua but may not charge them for departure clearance from its ports. These were all specific items of contention brought to the court in the 2005 filing.
On 1 June 2007, Costa Rica broke diplomatic ties with Taiwan, switching recognition to the People's Republic of China. Costa Rica was the first of the Central American nations to do so. President Óscar Arias Sánchez admitted the action was a response to economic exigency. In response, the PRC built a new, $100 million, state-of-the-art football stadium in Parque la Sabana, in the province of San José. Approximately 600 Chinese engineers and laborers took part in this project, and it was inaugurated in March 2011, with a match between the national teams of Costa Rica and China.
Costa Rica finished a term on the United Nations Security Council, having been elected for a nonrenewable, two-year term in the 2007 election. Its term expired on 31 December 2009; this was Costa Rica's third time on the Security Council. Elayne Whyte Gómez is the Permanent Representative of Costa Rica to the UN Office at Geneva (2017) and President of the United Nations Conference to Negotiate a Legally Binding Instrument to Prohibit Nuclear Weapons.
Pacifism
On 1 December 1948, Costa Rica abolished its military force. In 1949, the abolition of the military was introduced in Article 12 of the Costa Rican Constitution. The budget previously dedicated to the military is now dedicated to providing health care services and education. According to Deutsche Welle, "Costa Rica is known for its stable democracy, progressive social policies, such as free, compulsory public education, high social well-being, and emphasis on environmental protection." For law enforcement, Costa Rica has the Public Force of Costa Rica police agency.
In 2017, Costa Rica signed the UN treaty on the Prohibition of Nuclear Weapons.
Leadership in world governance initiatives
Costa Rica has been one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, in 1968, for the first time in human history, a World Constituent Assembly convened to draft and adopt the Constitution for the Federation of Earth. Francisco Orlich Bolmarcich, then president of Costa Rica, signed the agreement to convene a World Constituent Assembly, along with former presidents José Figueres Ferrer and Otilio Ulate Blanco.
Environmentalism
In 2021, Costa Rica and Denmark launched the Beyond Oil and Gas Alliance (BOGA), aimed at ending the use of fossil fuels. The BOGA campaign was presented at the COP26 Climate Summit, where Sweden joined as a core member, while New Zealand and Portugal joined as associate members.
Demographics
The 2022 census counted a total population of 5,044,197 people. The census also recorded ethnic or racial identity for all groups separately for the first time since the 1927 census, more than ninety-five years earlier. Options on section IV, question 7 included indigenous, Black or Afro-descendant, Mulatto, Chinese, Mestizo, white, and other.
The 2011 census recorded the following distribution: 83.6% whites or mestizos, 6.7% mulattoes, 2.4% Native American, and 1.1% black or Afro-Caribbean; the census showed 1.1% as Other, 2.9% (141,304 people) as None, and 2.2% (107,196 people) as unspecified.
In 2011, there were over 104,000 Native American or indigenous inhabitants, representing 2.4% of the population. Most of them live in secluded reservations, distributed among eight ethnic groups: Quitirrisí (in the Central Valley), Matambú or Chorotega (Guanacaste), Maleku (northern Alajuela), Bribri (southern Atlantic), Cabécar (Cordillera de Talamanca), Guaymí (southern Costa Rica, along the Panamá border), Boruca (southern Costa Rica) and Térraba (southern Costa Rica).
The population includes European Costa Ricans (of European ancestry), primarily of Spanish descent, with significant numbers of Italian, German, English, Dutch, French, Irish, Portuguese, and Polish families, as well as a sizable Jewish community. The majority of the Afro-Costa Ricans are Creole English-speaking descendants of 19th century black Jamaican immigrant workers.
The 2011 census classified 83.6% of the population as white or Mestizo; the latter are persons of combined European and Amerindian descent. The Mulatto segment (mix of white and black) represented 6.7% and indigenous people made up 2.4% of the population. Native and European mixed-blood populations are far less than in other Latin American countries. Exceptions are Guanacaste, where almost half the population is visibly mestizo, a legacy of the more pervasive unions between Spanish colonists and Chorotega Amerindians through several generations, and Limón, where the vast majority of the Afro-Costa Rican community lives.
Costa Rica hosts many refugees, mainly from Colombia and Nicaragua. As a result of that and illegal immigration, an estimated 10–15% (400,000–600,000) of the Costa Rican population is made up of Nicaraguans. Some Nicaraguans migrate for seasonal work opportunities and then return to their country. Costa Rica took in many refugees from a range of other Latin American countries fleeing civil wars and dictatorships during the 1970s and 1980s, notably from Chile and Argentina, as well as people from El Salvador who fled from guerrillas and government death squads.
According to the World Bank, in 2010 about 489,200 immigrants lived in the country, many from Nicaragua, Panama, El Salvador, Honduras, Guatemala, and Belize, while 125,306 Costa Ricans live abroad in the United States, Panama, Nicaragua, Spain, Mexico, Canada, Germany, Venezuela, Dominican Republic, and Ecuador. The number of migrants declined in later years but in 2015, there were some 420,000 immigrants in Costa Rica and the number of asylum seekers (mostly from Honduras, El Salvador, Guatemala and Nicaragua) rose to more than 110,000, a fivefold increase from 2012. In 2016, the country was called a "magnet" for migrants from South and Central America and other countries who were hoping to reach the U.S.
Largest cantons
Religion
Most Costa Ricans identify with a Christian religion, with Catholicism being the one with the largest number of members and also the official state religion according to the 1949 Constitution, which at the same time guarantees freedom of religion. Costa Rica is the only modern state in the Americas which currently has Catholicism as its state religion; other countries with state religions (Catholic, Lutheran, Anglican, Orthodox) are in Europe: Liechtenstein, Monaco, the Vatican City, Malta, United Kingdom, Denmark, Iceland, and Greece.
The Latinobarómetro survey of 2017 found that 57% of the population identify themselves as Roman Catholics, 25% are Evangelical Protestants, 15% report that they do not have a religion, and 2% declare that they belong to another religion. This survey indicated a decline in the share of Catholics and a rise in the share of Protestants and the irreligious. A University of Costa Rica survey of 2018 shows similar rates: 52% Catholics, 22% Protestants, 17% irreligious and 3% other. The rate of secularism is high by Latin American standards.
Due to small, but continuous, immigration from Asia and the Middle East, other religions have grown, the most popular being Buddhism, with about 100,000 practitioners (over 2% of the population). Most Buddhists are members of the Han Chinese community of about 40,000 with some new local converts. There is also a small Muslim community of about 500 families, or 0.001% of the population.
The Sinagoga Shaarei Zion synagogue is near La Sabana Metropolitan Park in San José. Several homes in the neighborhood east of the park display the Star of David and other Jewish symbols.
The Church of Jesus Christ of Latter-day Saints claims more than 35,000 members, and has a temple in San José that served as a regional worship center for Costa Rica. However, they represent less than 1% of the population.
Languages
The primary language spoken in Costa Rica is Spanish, which features characteristics distinct to the country, a form of Central American Spanish. Costa Rica is a linguistically diverse country and home to at least five living local indigenous languages spoken by the descendants of pre-Columbian peoples: Maléku, Cabécar, Bribri, Guaymí, and Buglere.
Of native languages still spoken, primarily in indigenous reservations, the most numerically important are the Bribri, Maléku, Cabécar and Ngäbere languages; some of these have several thousand speakers in Costa Rica while others have a few hundred. Some languages, such as Teribe and Boruca, have fewer than a thousand speakers. The Buglere language and the closely related Guaymí are spoken by some in southeast Puntarenas.
Jamaican patois (also known as Mekatelyu), an English-based creole language, is spoken by the Afro-Carib immigrants who have settled primarily in Limón Province along the Caribbean coast.
About 10.7% of Costa Rica's adult population (18 or older) also speaks English, 0.7% French, and 0.3% Portuguese or German as a second language.
Culture
Costa Rica was the point where the Mesoamerican and South American native cultures met. The northwest of the country, the Nicoya peninsula, was the southernmost point of Nahuatl cultural influence when the Spanish conquerors (conquistadores) came in the 16th century. The central and southern portions of the country had Chibcha influences. The Atlantic coast, meanwhile, was populated with African workers during the 17th and 18th centuries.
As a result of the immigration of Spaniards, their 16th-century Spanish culture and its evolution marked everyday life and culture until today, with the Spanish language and the Catholic religion as primary influences.
The Department of Culture, Youth, and Sports is in charge of the promotion and coordination of cultural life. The work of the department is divided into Direction of Culture, Visual Arts, Scenic Arts, Music, Patrimony, and the System of Libraries. Permanent programs, such as the National Symphony Orchestra of Costa Rica and the Youth Symphony Orchestra, are conjunctions of two areas of work: Culture and Youth.
Dance-oriented genres, such as soca, salsa, bachata, merengue, cumbia and Costa Rican swing are enjoyed increasingly by older rather than younger people. The guitar is popular, especially as an accompaniment to folk dances; however, the marimba was made the national instrument.
In November 2017, National Geographic magazine named Costa Rica as the happiest country in the world, and the country routinely ranks high in various happiness metrics. The article included this summary: "Costa Ricans enjoy the pleasure of living daily life to the fullest in a place that mitigates stress and maximizes joy". It is not surprising then that one of the most recognizable phrases among "Ticos" is "Pura Vida", pure life in a literal translation. It reflects the inhabitant's philosophy of life, denoting a simple life, free of stress, a positive, relaxed feeling. The expression is used in various contexts in conversation. Often, people walking down the streets, or buying food at shops say hello by saying Pura Vida. It can be phrased as a question or as an acknowledgement of one's presence. A recommended response to "How are you?" would be "Pura Vida." In that usage, it might be translated as "awesome", indicating that all is very well. When used as a question, the connotation would be "everything is going well?" or "how are you?".
Costa Rica rates 12th on the 2017 Happy Planet Index in the World Happiness Report by the UN but the country is said to be the happiest in Latin America. Reasons include the high level of social services, the caring nature of its inhabitants, long life expectancy and relatively low corruption.
Cuisine
Costa Rican cuisine is a blend of Native American, Spanish, African, and many other origins. Dishes such as the very traditional tamale and many others made of corn are the most representative of its indigenous inhabitants, and similar to those of other neighboring Mesoamerican countries. Spaniards brought many new ingredients to the country from other lands, especially spices and domestic animals. Later, in the 19th century, the African flavor lent its presence with influence from other Caribbean mixed flavors. As a result, Costa Rican cuisine today is very varied, with each new ethnic group that has become part of the country's population influencing its cuisine.
Sports
Costa Rica entered the Summer Olympics for the first time in 1936. The sisters Silvia and Claudia Poll have won all four of the country's Olympic Medals for swimming; one Gold, one Silver, and two Bronze.
Football is the most popular sport in Costa Rica. The national team has played in five FIFA World Cup tournaments and reached the quarter-finals for the first time in 2014. Its best performance in the regional CONCACAF Gold Cup was runner-up in 2002. Paulo Wanchope, a forward who played for three clubs in England's Premier League in the late 1990s and early 2000s, is credited with enhancing foreign recognition of Costa Rican football. Costa Rica, along with Panama, was granted the hosting rights of 2020 FIFA U-20 Women's World Cup, which was postponed until 2021, due to the COVID-19 pandemic. On 17 November 2020, FIFA announced that the event would be held in Costa Rica in 2022.
As of late 2021, Costa Rica's women's national volleyball team has been the top team in Central America's AFECAVOL (Asociación de Federaciones CentroAmericanas de Voleibol) zone. Costa Rica featured a women's national team in beach volleyball that competed at the 2018–2020 NORCECA Beach Volleyball Continental Cup.
Education
The literacy rate in Costa Rica is approximately 97 percent and English is widely spoken primarily due to Costa Rica's tourism industry. When the army was abolished in 1949, it was said that the "army would be replaced with an army of teachers". Universal public education is guaranteed in the constitution; primary education is obligatory, and both preschool and secondary school are free. Students who finish 11th grade receive a Costa Rican Bachillerato Diploma accredited by the Costa Rican Ministry of Education.
There are both state and private universities. The state-funded University of Costa Rica has been awarded the title "Meritorious Institution of Costa Rican Education and Culture" and hosts around 25,000 students who study at numerous campuses established around the country.
A 2016 U.S. government report identifies the current challenges facing the education system, including the high dropout rate among secondary school students. The country needs even more workers who are fluent in English and languages such as Portuguese, Mandarin and French. It would also benefit from more graduates in science, technology, engineering and mathematics (STEM) programs, according to the report. Costa Rica was ranked 74th in the Global Innovation Index in 2023, down from 55th in 2019.
Health
According to the UNDP, in 2010 the life expectancy at birth for Costa Ricans was 79.3 years. The Nicoya Peninsula is considered one of the Blue Zones in the world, where people commonly live active lives past the age of 100 years. The New Economics Foundation (NEF) ranked Costa Rica first in its 2009 Happy Planet Index, and once again in 2012. The index measures the health and happiness countries produce per unit of environmental input. According to NEF, Costa Rica's lead is due to its very high life expectancy, which is second highest in the Americas and higher than that of the United States. The country also experienced well-being higher than many richer nations and has a per capita ecological footprint one-third the size of that of the United States.
In 2002, there were 0.58 new general practitioner (medical) consultations and 0.33 new specialist consultations per capita, and a hospital admission rate of 8.1%. Preventive health care is also successful. In 2002, 96% of Costa Rican women used some form of contraception, and antenatal care services were provided to 87% of all pregnant women. All children under one have access to well-baby clinics, and the immunization coverage rate in 2020 was above 95% for all antigens. Costa Rica had a very low malaria incidence of 48 per 100,000 in 2000 and no reported cases of measles in 2002. The perinatal mortality rate dropped from 12.0 per 1000 in 1972 to 5.4 per 1000 in 2001.
Costa Rica has been cited as Central America's great health success story. Its healthcare system is ranked higher than that of the United States, despite having a fraction of its GDP. Prior to 1940, government hospitals and charities provided most health care. But since the 1941 creation of the Social Insurance Administration (Caja Costarricense de Seguro Social – CCSS), Costa Rica has provided universal health care to its wage-earning residents, with coverage extended to dependants over time. In 1973, the CCSS took over administration of all 29 of the country's public hospitals and all health care, also launching a Rural Health Program (Programa de Salud Rural) for primary care to rural areas, later extended to primary care services nationwide. In 1993, laws were passed to enable elected health boards that represented health consumers, social insurance representatives, employers, and social organizations. By 2000, social health insurance coverage was available to 82% of the Costa Rican population. Each health committee manages an area equivalent to one of the 83 administrative cantons of Costa Rica. There is limited use of private, for-profit services (around 14.4% of the national total health expenditure). About 7% of GDP is allocated to the health sector, and over 70% is government-funded.
Primary health care facilities in Costa Rica include health clinics, with a general practitioner, nurse, clerk, pharmacist, and a primary health technician. In 2008, there were five specialty national hospitals, three general national hospitals, seven regional hospitals, 13 peripheral hospitals, and 10 major clinics serving as referral centers for primary care clinics, which also deliver biopsychosocial services, family and community medical services, and promotion and prevention programs. Patients can choose private health care to avoid waiting lists.
Costa Rica is among the Latin America countries that have become popular destinations for medical tourism. In 2006, Costa Rica received 150,000 foreigners that came for medical treatment. Costa Rica is particularly attractive to Americans due to geographic proximity, high quality of medical services, and lower medical costs.
See also
Index of Costa Rica-related articles
Outline of Costa Rica
Camino de Costa Rica (trail across the country from the Atlantic to the Pacific coast)
Notes
References
Further reading
Blake, Beatrice. The New Key to Costa Rica (Berkeley: Ulysses Press, 2009).
Chase, Cida S. "Costa Rican Americans". Gale Encyclopedia of Multicultural America, edited by Thomas Riggs, (3rd ed., vol. 1, Gale, 2014), pp. 543–551. online
Edelman, Marc. Peasants Against Globalization: Rural Social Movements in Costa Rica. Stanford: Stanford University Press, 1999.
Huhn, Sebastian: Contested Cornerstones of Nonviolent National Self-Perception in Costa Rica: A Historical Approach, 2009.
Keller, Marius; Niestroy, Ingeborg; García Schmidt, Armando; Esche, Andreas. "Costa Rica: Pioneering Sustainability". Excerpt (pp. 81–102) from Bertelsmann Stiftung (ed.). Winning Strategies for a Sustainable Future. Gütersloh, Germany: Verlag Bertelsmann Stiftung, 2013.
Lara, Sylvia, Tom Barry, and Peter Simonson. Inside Costa Rica: The Essential Guide to Its Politics, Economy, Society and Environment. London: Latin America Bureau, 1995.
Lehoucq, Fabrice E. and Ivan Molina. Stuffing the Ballot Box: Fraud, Electoral Reform, and Democratization in Costa Rica. Cambridge: Cambridge University Press, 2002.
Lehoucq, Fabrice E. Policymaking, Parties, and Institutions in Democratic Costa Rica, 2006.
Longley, Kyle. Sparrow and the Hawk: Costa Rica and the United States during the Rise of José Figueres. (University of Alabama Press, 1997).
Mount, Graeme S. "Costa Rica and the Cold War, 1948–1990". Canadian Journal of History 50.2 (2015): 290–316.
Palmer, Steven and Iván Molina. The Costa Rica Reader: History, Culture, Politics. Durham and London: Duke University Press, 2004.
Sandoval, Carlos. Threatening Others: Nicaraguans and the Formation of National Identities in Costa Rica. Athens: Ohio University Press, 2004.
Wilson, Bruce M. Costa Rica: Politics, Economics, and Democracy. Boulder, London: Lynne Rienner Publishers, 1998.
External links
Costa Rica. The World Factbook. Central Intelligence Agency.
Costa Rica at UCB Libraries GovPubs
Street Art of San Jose by danscape
Costa Rica profile from the BBC News
Key Development Forecasts for Costa Rica from International Futures
Government and administration
Official website of the government of Costa Rica
Trade
World Bank Summary Trade Statistics Costa Rica
Countries in Central America
Former Spanish colonies
Republics
Spanish-speaking countries and territories
States and territories established in 1821
Member states of the United Nations
1821 establishments in North America
Countries in North America
Christian states
OECD members
World Constitutional Convention call signatories |
5554 | https://en.wikipedia.org/wiki/Demographics%20of%20Costa%20Rica | Demographics of Costa Rica | This is a demographic article about Costa Rica's population, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations, and other aspects of the population.
According to the United Nations, Costa Rica had an estimated population of around five million people as of 2021. White and Mestizos make up 83.4% of the population, 7% are black people (including mixed race), 2.4% Amerindians, 0.2% Chinese and 7% other/none.
In 2010, just under 3% of the population was of African descent. These are called Afro-Costa Ricans or West Indians and are English-speaking descendants of 19th-century black Jamaican immigrant workers. Another 1% is composed of those of Chinese origin, and less than 1% are West Asian, mainly of Lebanese descent but also Palestinians. The 2011 Census provided the following data: whites and mestizos make up 83.4% of the population, 7% are black people (including mixed race), 2.4% Amerindians, 0.2% Chinese, and 7% other/none.
There is also a community of North American retirees from the United States and Canada, followed by fairly large numbers of European Union expatriates (chiefly Scandinavians and Germans) who also come to retire, as well as Australians. Immigration to Costa Rica made up 9% of the population in 2012. This included permanent settlers as well as migrants who were hoping to reach the U.S. In 2015, there were some 420,000 immigrants in Costa Rica and the number of asylum seekers (mostly from Honduras, El Salvador, Guatemala and Nicaragua) rose to more than 110,000. An estimated 10% of the Costa Rican population in 2014 was made up of Nicaraguans.
The indigenous population today numbers about 60,000 (just over 1% of the population), with some Miskito and Garifuna (a population of mixed African and Carib Amerindian descent) living in the coastal regions.
Costa Rica's emigration rate is the smallest in the Caribbean Basin and is among the smallest in the Americas. By 2015, just 133,185 people (2.77% of the country's population) lived abroad as immigrants. The main destination countries are the United States (85,924), Nicaragua (10,772), Panama (7,760), Canada (5,039), Spain (3,339), Mexico (2,464), Germany (1,891), Italy (1,508), Guatemala (1,162) and Venezuela (1,127).
Population and ancestry
Costa Rica's population is increasing at a rate of 1.5% per year. At current trends the population will increase to 9,158,000 in about 46 years. The population density is 94 people per square km, the third highest in Central America.
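The projection above is consistent with simple compound-growth arithmetic. The sketch below treats the 1.5% annual rate as constant (an assumption) and uses the 9,158,000 figure from the text to show the implied doubling time and starting population.

```python
import math

# Doubling time under constant compound growth: solve (1 + r)**t = 2 for t.
r = 0.015                      # 1.5% annual growth rate stated in the text
doubling_time = math.log(2) / math.log(1 + r)
print(f"Doubling time at {r:.1%} per year: {doubling_time:.1f} years")   # ~46.6

# Back out the starting population implied by the stated projection.
projected, years = 9_158_000, 46
implied_start = projected / (1 + r) ** years
print(f"Implied starting population: {implied_start:,.0f}")              # ~4.6 million
```

In other words, the figures in this paragraph amount to saying that at 1.5% annual growth the population roughly doubles in about 46 years.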
Approximately 40% of the population lived in rural areas and 60% in urban areas. The rate of urbanization estimated for the period 2005–2015 is 2.74% per annum, one of the highest among developing countries. About 75% of the population live in the upper lands (above 500 meters), where temperatures are cooler and milder.
The 2011 census counted a population of 4.3 million people distributed among the following groups: 83.6% whites or mestizos, 6.7% black mixed race, 2.4% Native American, 1.1% black or Afro-Caribbean; the census showed 1.1% as Other, 2.9% (141,304 people) as None, and 2.2% (107,196 people) as unspecified.
In 2011, there were over 104,000 Native American or indigenous inhabitants, representing 2.4% of the population. Most of them lived in secluded reservations, distributed among eight ethnic groups: Quitirrisí (in the Central Valley), Matambú or Chorotega (Guanacaste), Maleku (northern Alajuela), Bribri (southern Atlantic), Cabécar (Cordillera de Talamanca), Guaymí (southern Costa Rica, along the Panamá border), Boruca (southern Costa Rica) and Térraba (southern Costa Rica).
Costa Ricans of European origin are primarily of Spanish descent, with significant numbers of Italian, German, English, Dutch, French, Irish, Portuguese, and Polish families, as well as a sizable Jewish community. The majority of the Afro-Costa Ricans are Creole English-speaking descendants of 19th century black Jamaican immigrant workers.
The 2011 census classified 83.6% of the population as white or Mestizo; the latter have combined European and Native American descent. The Mulatto segment (mix of white and black) represented 6.7% and indigenous people made up 2.4% of the population. Native and European mixed blood populations are far less than in other Latin American countries. Exceptions are the Guanacaste province, where almost half the population is visibly mestizo, a legacy of the more pervasive unions between Spanish colonists and Chorotega Amerindians through several generations, and Limón, where the vast majority of the Afro-Costa Rican community lives.
Education
According to the United Nations, the country's literacy rate stands at 95.8%, the fifth highest among American countries. Costa Rica's Education Index in 2006 was 0.882, higher than that of richer countries such as Singapore and Mexico. The gross enrollment ratio is 73.0%, smaller than that of the neighboring countries of El Salvador and Honduras.
All students must complete primary and secondary school, between the ages of 6 and 15. Some students drop out because they must work to help support their families. In 2007 there were 536,436 pupils enrolled in 3,771 primary schools and 377,900 students attended public and private secondary schools.
The main universities are the University of Costa Rica, in San Pedro and the National University of Costa Rica, in Heredia. Costa Rica also has several small private universities.
Emigration
Costa Rican emigration is among the smallest in the Caribbean Basin. About 3% of the country's population lives in another country as immigrants. The main destination countries are the United States, Spain, Mexico, and other Central American countries. In 2005, there were 127,061 Costa Ricans living in another country as immigrants. Remittances were $513,000,000 in 2006 which represented 2.3% of the national GDP.
Immigration
Costa Rica's immigration is among the largest in the Caribbean Basin. According to the 2011 census, 385,899 residents were born abroad. The vast majority were born in Nicaragua (287,766). Other countries of origin were Colombia (20,514), United States (16,898), Spain (16,482) and Panama (11,250). Outward remittances were $246,000,000 in 2006.
Migrants
According to the World Bank, about 489,200 migrants lived in the country in 2010; mainly from Nicaragua, Panama, El Salvador, Honduras, Guatemala, and Belize, while 125,306 Costa Ricans live abroad in the United States, Panama, Nicaragua, Spain, Mexico, Canada, Germany, Venezuela, Dominican Republic, and Ecuador. The number of migrants declined in later years but in 2015, there were some 420,000 immigrants in Costa Rica and the number of asylum seekers (mostly from Honduras, El Salvador, Guatemala and Nicaragua) rose to more than 110,000, a fivefold increase from 2012. In 2016, the country was called a "magnet" for migrants from South and Central America and other countries who were hoping to reach the U.S.
European Costa Ricans
European Costa Ricans are people from Costa Rica whose ancestry lies within the continent of Europe, most notably Spain. According to DNA studies, around 75% of the population have some level of European ancestry.
Percentages of the Costa Rican population by race are known because the national census includes a question on ethnicity. As of 2012, 65.80% of Costa Ricans identified themselves as white/castizo and 13.65% as mestizo, giving a combined figure of around 80%. This, however, is based on self-identification and not on scientific studies. According to the 2012 PLoS Genetics study Geographic Patterns of Genome Admixture in Latin American Mestizos, Costa Ricans have 68% European ancestry, 29% Amerindian and 3% African. According to the CIA Factbook, Costa Rica has a white or mestizo population of 83.6%.
Christopher Columbus and his crew were the first Europeans to set foot on what is now Costa Rica, arriving at Uvita Island (in the modern-day Limón province) in 1502 on Columbus's last voyage. Costa Rica was part of the Spanish Empire and colonized mostly by Spaniards: Castilians, Basques and Sephardic Jews.
After independence, large migrations of wealthy Americans, Germans, French and British businessmen came to the country encouraged by the government and followed by their families and employees (many of them technicians and professionals), thus creating colonies and mixing with the population, especially the high and middle classes.
Later, smaller migrations of Italians, Spaniards (mostly Catalans) and Arabs (mostly Lebanese and Syrians) took place. These migrants arrived fleeing economic crises in their home countries, settling in large, more closed colonies. Polish migrants, mostly Ashkenazi Jews who fled anti-Semitism and Nazi persecution in Europe, also arrived in large numbers.
In 1901 president Ascensión Esquivel Ibarra closed the country to all non-white immigration. All Black, Chinese, Arab, Turkish or Gypsy migration to the country was banned. After the beginning of the Spanish Civil War, a large influx of Republican refugees settled in the country, mostly Castilians, Galicians and Asturians, as well as later Chilean, Mexican and Colombian migrants who would arrive escaping from war or dictatorships, as Costa Rica is the longest running democracy in Latin America.
Ethnic groups
The following listing is taken from a publication of the Costa Rica 2011 Census:
Mestizos and Whites - 3,597,847 = 83.64%
Mulatto - 289,209 = 6.72%
Indigenous - 104,143 = 2.42%
Black/Afro-Caribbean - 45,228 = 1.05%
Chinese - 9,170 = 0.21%
Other - 36,334 = 0.84%
Did not state - 95,140 = 2.21%
Vital statistics
Current vital statistics
Structure of the population
Life expectancy at birth
Source: UN World Population Prospects
Demographic statistics
Demographic statistics according to the World Population Review in 2022.
One birth every 8 minutes
One death every 19 minutes
One net migrant every 131 minutes
Net gain of one person every 12 minutes
Demographic statistics according to the CIA World Factbook, unless otherwise indicated.
Population
5,204,411 (2022 est.)
4,987,142 (July 2018 est.)
4,872,543 (July 2016 est.)
Ethnic groups
White or Mestizo 83.6%, Mulatto 6.7%, Indigenous 2.4%, Black or African descent 1.1%, other 1.1%, none 2.9%, unspecified 2.2% (2011 est.)
Age structure
0-14 years: 22.08% (male 575,731/female 549,802)
15-24 years: 15.19% (male 395,202/female 379,277)
25-54 years: 43.98% (male 1,130,387/female 1,111,791)
55-64 years: 9.99% (male 247,267/female 261,847)
65 years and over: 8.76% (2020 est.) (male 205,463/female 241,221)
0-14 years: 22.43% (male 572,172 /female 546,464)
15-24 years: 15.94% (male 405,515 /female 389,433)
25-54 years: 44.04% (male 1,105,944 /female 1,090,434)
55-64 years: 9.48% (male 229,928 /female 242,696)
65 years and over: 8.11% (male 186,531 /female 218,025) (2018 est.)
Median age
total: 32.6 years. Country comparison to the world: 109th
male: 32.1 years
female: 33.1 years (2020 est.)
Total: 31.7 years. Country comparison to the world: 109th
Male: 31.2 years
Female: 32.2 years (2018 est.)
Total: 30.9 years
Male: 30.4 years
Female: 31.3 years (2016 est.)
Birth rate
14.28 births/1,000 population (2022 est.) Country comparison to the world: 121st
15.3 births/1,000 population (2018 est.) Country comparison to the world: 121st
Death rate
4.91 deaths/1,000 population (2022 est.) Country comparison to the world: 198th
4.8 deaths/1,000 population (2018 est.) Country comparison to the world: 200th
Total fertility rate
1.86 children born/woman (2022 est.) Country comparison to the world: 134th
1.89 children born/woman (2018 est.) Country comparison to the world: 135th
Net migration rate
0.77 migrant(s)/1,000 population (2022 est.) Country comparison to the world: 69th
0.8 migrant(s)/1,000 population (2018 est.) Country comparison to the world: 65th
Population growth rate
1.01% (2022 est.) Country comparison to the world: 95th
1.13% (2018 est.) Country comparison to the world: 95th
Contraceptive prevalence rate
70.9% (2018)
Religions
Roman Catholic 47.5%, Evangelical and Pentecostal 19.8%, Jehovah's Witness 1.4%, other Protestant 1.2%, other 3.1%, none 27% (2021 est.)
Dependency ratios
Total dependency ratio: 45.4 (2015 est.)
Youth dependency ratio: 32.4 (2015 est.)
Elderly dependency ratio: 12.9 (2015 est.)
Potential support ratio: 7.7 (2015 est.)
Urbanization
urban population: 82% of total population (2022)
rate of urbanization: 1.5% annual rate of change (2020-25 est.)
Infant mortality rate
Total: 8.3 deaths/1,000 live births
Male: 9 deaths/1,000 live births
Female: 7.4 deaths/1,000 live births (2016 est.)
Life expectancy at birth
total population: 79.64 years. Country comparison to the world: 58th
male: 76.99 years
female: 82.43 years (2022 est.)
Total population: 78.9 years. Country comparison to the world: 55th
Male: 76.2 years
Female: 81.7 years (2018 est.)
Total population: 78.6 years
Male: 75.9 years
Female: 81.3 years (2016 est.)
HIV/AIDS
Adult prevalence rate: 0.33%
People living with HIV/AIDS: 10,000
Deaths: 200 (2015 est.)
Education expenditures
6.7% of GDP (2020) Country comparison to the world: 24th
Literacy
total population: 97.9%
male: 97.8%
female: 97.9% (2018)
School life expectancy (primary to tertiary education)
total: 17 years
male: 16 years
female: 17 years (2019)
Unemployment, youth ages 15-24
total: 40.7%
male: 34%
female: 50.9% (2020 est.)
Nationality
Noun: Costa Rican(s)
Adjective: Costa Rican
Languages
Spanish (official)
English
Sex ratio
At birth: 1.05 male(s)/female
0–14 years: 1.05 male(s)/female
15–24 years: 1.04 male(s)/female
25–54 years: 1.01 male(s)/female
55–64 years: 0.95 male(s)/female
65 years and over: 0.86 male(s)/female
Total population: 1.01 male(s)/female (2016 est.)
Major infectious diseases
degree of risk: intermediate (2020)
food or waterborne diseases: bacterial diarrhea
vectorborne diseases: dengue fever
Languages
Nearly all Costa Ricans speak Spanish, but many also know English. Indigenous Costa Ricans also speak their own languages, as is the case with the Ngobes.
Religions
According to the World Factbook, the main faiths are Roman Catholic, 76.3%; Evangelical, 13.7%; Jehovah's Witnesses, 1.3%; other Protestant, 0.7%; other, 4.8%; none, 3.2%.
The most recent nationwide survey of religion in Costa Rica, conducted in 2007 by the University of Costa Rica, found that 70.5 percent of the population identify themselves as Roman Catholics (with 44.9 percent practicing, 25.6 percent nonpracticing), 13.8 percent are Evangelical Protestants, 11.3 percent report that they do not have a religion, and 4.3 percent declare that they belong to another religion.
Apart from the dominant Catholic religion, there are several other religious groups in the country. Methodist, Lutheran, Episcopal, Baptist, and other Protestant groups have significant membership. The Church of Jesus Christ of Latter-day Saints (LDS Church) claims more than 35,000 members and has a temple in San José that serves as a regional worship center for Costa Rica, Panama, Nicaragua, and Honduras.
Although they represent less than 1 percent of the population, Jehovah's Witnesses have a strong presence on the Caribbean coast. Seventh-day Adventists operate a university that attracts students from throughout the Caribbean Basin. The Unification Church maintains its continental headquarters for Latin America in San José.
Non-Christian religious groups, including followers of Judaism, Islam, Taoism, Hare Krishna, Paganism, Wicca, Scientology, Tenrikyo, and the Baháʼí Faith, claim membership throughout the country, with the majority of worshipers residing in the Central Valley (the area of the capital). While there is no general correlation between religion and ethnicity, indigenous peoples are more likely to practice animism than other religions.
Article 75 of the Costa Rican Constitution states that the "Catholic, Apostolic, and Roman Religion is the official religion of the Republic". That same article provides for freedom of religion, and the Government generally respects this right in practice. The US government found no reports of societal abuses or discrimination based on religious belief or practice in 2007.
See also
Ethnic groups in Central America
References
External links
UNICEF Information about Costa Rica's Demographics
INEC. National Institute of Statistics and Census |
5556 | https://en.wikipedia.org/wiki/Economy%20of%20Costa%20Rica | Economy of Costa Rica | The economy of Costa Rica has been very stable for some years now, with continuing growth in GDP (gross domestic product) and moderate inflation, though with a high unemployment rate: 11.49% in 2019. Costa Rica's economy emerged from recession in 1997 and has shown strong aggregate growth since then. The estimated GDP for 2023 is US$78 billion, up significantly from US$52.6 billion in 2015, while the estimated 2023 GDP per capita (purchasing power parity) is US$26,422.
Inflation remained around 4% to 5% per annum for several years up to 2015 but then dropped to 0.7% in 2016; it was expected to rise to a still moderate 2.8% by the end of 2017. In 2017, Costa Rica had the highest standard of living in Central America in spite of a relatively high poverty level. The poverty level dropped by 1.2 percentage points in 2017 to 20.5%, thanks to lower inflation and benefits offered by the government. The estimated unemployment level in 2017 was 8.1%, roughly the same as in 2016.
The country has evolved from an economy that once depended solely on agriculture, to one that is more diverse, based on tourism, electronics and medical components exports, medical manufacturing and IT services. Corporate services for foreign companies employ some 3% of the workforce. Of the GDP, 5.5% is generated by agriculture, 18.6% by industry and 75.9% by services (2016). Agriculture employs 12.9% of the labor force, industry 18.57%, services 69.02% (2016). Many foreign companies operate in the various Free-trade zones. In 2015, exports totalled US$12.6 billion while imports totalled US$15 billion, for a trade deficit of US$2.39 billion.
The growing debt and budget deficit are the country's primary concerns. By August 2017, Costa Rica was having difficulty paying its obligations and the President promised dramatic changes to handle the "liquidity crisis". Costa Rica also faces other challenges in its attempts to grow the economy through foreign investment, including poor infrastructure and a need to improve public-sector efficiency.
Public debt and deficit
One of the country's major concerns is the level of the public debt, especially as a percentage of the GDP (Gross Domestic Product), increasing from 29.8% in 2011 to 40.8% in 2015 and to 45% in 2016. The total debt in 2015 was $22.648 billion, up by nearly $3 billion from 2014. On a per capita basis, the debt was $4,711 per person. Costa Rica had a formal line of credit with the World Bank valued at US$947 million in April 2014, of which US$645 million had been accessed and US$600 million remained outstanding.
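The per-capita figure quoted above is simply the total debt divided by the population. A minimal sketch of that division follows; the population of roughly 4.8 million is an assumed value chosen to be consistent with the quoted $4,711, not a figure given in this paragraph.

```python
# Per-capita debt = total public debt / population (the population value here is assumed).
total_debt_usd = 22_648_000_000   # 2015 total debt, from the text
assumed_population = 4_807_000    # assumption chosen for illustration
per_capita_debt = total_debt_usd / assumed_population
print(f"Debt per person: ${per_capita_debt:,.0f}")  # about $4,711
```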
In a June 2017 report, the International Monetary Fund stated that annual growth was just over 4% with moderate inflation. The report added that "financial system appears sound, and credit growth continues to be consistent with healthy financial deepening and macroeconomic trends. The agency noted that the fiscal deficit remains high and public debt continues to rise rapidly despite the authorities’ deepened consolidation efforts in 2016. Recent advances in fiscal consolidation have been partly reversed and political consensus on a comprehensive fiscal package remains elusive".
The IMF also expressed concern about increasing deficits, public debt and the heavy dollarization of bank assets and liabilities, warning that in tighter-than-expected global financial conditions these aspects would "seriously undermine investor confidence". The group also recommended taking steps to reduce pension benefits and increase the amount of contribution by the public and increasing the cost effectiveness of the education system.
The country's credit rating was reduced by Moody's Investors Service in early 2017 to Ba2 from Ba1, with a negative outlook on the rating. The agency particularly cited the "rising government debt burden and persistently high fiscal deficit, which was 5.2% of GDP in 2016". Moody's was also concerned about the "lack of political consensus to implement measures to reduce the fiscal deficit [which] will result in further pressure on the government's debt ratios". In late July 2017, the Central Bank estimated the budget deficit at 6.1 percent of the country's GDP. A 2017 study by the Organisation for Economic Co-operation and Development warned that reducing the foreign debt must be a very high priority for the government. Other fiscal reforms were also recommended to moderate the budget deficit.
In 2014, President Solís presented a budget with an increase in spending of 19% for 2015, an increase of 0.5% for 2016 and an increase of 12% for 2017. When the 2017 budget was finally proposed, it totaled US$15.9 billion. Debt payments account for one-third of that amount. Of greater concern is the fact that a full 46% of the budget will require financing, a step that will increase the debt owed to foreign entities.
Liquidity crisis
In early August 2017, President Luis Guillermo Solís admitted that the country was facing a "liquidity crisis", an inability to pay all of its obligations and to guarantee the essential services. To address this issue, he promised that a higher VAT and higher income tax rates were being considered by his government. Such steps are essential, Solís told the nation. "Despite all the public calls and efforts we have made since the start of my administration to contain spending and increase revenues, there is still a gap that we must close with fresh resources," he said. The crisis was occurring in spite of the growth, low inflation and continued moderate interest rates, Solís concluded.
Solís explained that the Treasury will prioritize payments on the public debt first, then salaries, and then pensions. The subsequent priorities include transfers to institutions "according to their social urgency." All other payments will be made only if funds are available.
Other challenges
A 2016 U.S. government report identifies other challenges facing Costa Rica as it works to expand its economy by working with potential foreign investors:
The ports, roads, and water systems would benefit from major upgrading. Attempts by China to invest in upgrading such aspects were "stalled by bureaucratic and legal concerns".
The bureaucracy is "often slow and cumbersome".
The country needs even more workers who are fluent in English and languages such as Portuguese, Mandarin and French. It would also benefit from more graduates in Science, Technology, Engineering and Math (STEM) programs.
Some sectors are controlled by a state monopoly which excludes competition but in other respects, "Costa Rican laws, regulations and practices are generally transparent and foster competition".
The country has been slow in completing environmental impact assessments which have caused delays in projects being completed.
Product registration is a slow process, although this may improve with digitization.
In spite of government attempts at improving the enforcement of intellectual property laws, this aspect remains a concern.
Natural resources
Costa Rica's natural assets include its abundant rainfall and its location on the Central American isthmus, which provides easy access to North and South American markets and direct ocean access to Europe and Asia. Costa Rica has two seasons, the tropical wet and dry seasons, each with its own agricultural resources. One-fourth of Costa Rica's land is dedicated to national forests, often adjoining beaches, which has made the country a popular destination for affluent retirees and ecotourists.
A full 10.27% of the country is protected as national parks while an additional 17% is set aside for reserves, wildlife refuges and protected zones. Costa Rica has over 50 wildlife refuges, 32 major national parks, more than 12 forest reserves and a few biological reserves.
Because of ocean access, 23.7% of Costa Rica's people fish and trade their catches to fish companies; this is viewed as "small scale artisanal coastal" fishing and is most common in the Gulf of Nicoya. Costa Rica also charges licensing fees for commercial fishing fleets that are taking tuna, sardines, banga mary, mahi-mahi, red tilapia, shrimp, red snapper, other snappers, shark, marlin and sailfish. In mid 2017, the country was planning to ban large-scale commercial fishing off the southern Pacific Coast in an area nearly a million acres in size. The bill in congress was intended to "protect the extraordinary marine and coastal resources" from "indiscriminate and unsustainable commercial fishing."
Sport fishing in Costa Rica is an important part of the tourism industry; species include marlin, sailfish, dorado, tarpon, snook, rooster fish, wahoo, tuna, mackerel, snapper and rainbow bass.
In terms of the 2012 Environmental Performance Index ranking, Costa Rica is 5th in the world, and first among the Americas. The World Economic Forum's 2017 Travel & Tourism Competitiveness Report ranked Costa Rica as third of 136 countries based on natural resources, the number of World Heritage natural sites, protected areas and species as well as eco tourism.
Tourism
With a $1.92-billion-a-year tourism industry, Costa Rica was the most visited nation in the Central American region, with 2.42 million foreign visitors in 2013. By 2016, 2.6 million tourists visited Costa Rica. The Tourism Board estimates that this sector's spending in the country represented over US$3.4 billion, or about 5.8% of the GDP. The World Travel & Tourism Council's estimates indicate a direct contribution to the 2016 GDP of 5.1% and 110,000 direct jobs in Costa Rica; the total number of jobs indirectly supported by tourism was 271,000.
Ecotourism is extremely popular with the many tourists visiting the extensive national parks and protected areas around the country. Costa Rica was a pioneer in this type of tourism and the country is recognized as one of the few with real ecotourism. Other important market segments are adventure, sun and beaches. Most tourists come from the U.S. and Canada (46%) and the EU (16%), among the prime travel markets in the world, which translates into a relatively high expenditure of about $1,000 per tourist per trip.
In the 2008 Travel and Tourism Competitiveness Index (TTCI), Costa Rica reached the 44th place in the world ranking, being the first among Latin American countries, and second if the Caribbean is included. Just considering the subindex measuring human, cultural, and natural resources, Costa Rica ranks in the 24th place at a worldwide level, and 7th when considering just the natural resources criteria. The TTCI report also notes Costa Rica's main weaknesses, ground transport infrastructure (ranked 113th), and safety and security (ranked 128th).
The online travel magazine Travelzoo rated Costa Rica as one of five “Wow Deal Destinations for 2012”. The magazine Travel Weekly named Costa Rica the best destination in Central and South America in 2011. In 2017, the country was nominated in the following categories in the World Travel Awards: Mexico & Central America's Leading Beach Destination, Mexico & Central America's Leading Destination and Mexico & Central America's Leading Tourist Board.
Agriculture
Costa Rica's economy was historically based on agriculture, and this has had a large cultural impact through the years. Costa Rica's main cash crop, historically and up to modern times, has been bananas. Coffee had been a major export but decreased in value to the point where it contributed only 2.5% of the country's 2013 exports.
Agriculture also plays an important part in the country's gross domestic product (GDP). It makes up about 6.5% of Costa Rica’s GDP, and employs 12.9% of the labor force (2016). By comparison, 18.57% work in industry and 69.02 percent in the services sector.
Depending on location and altitude, many regions differ in agricultural crops and techniques. The main agricultural exports from the country include bananas, pineapples (the second highest export, with over 50% share of the world market), other tropical fruits, coffee (much of it grown in the Valle Central or Meseta Central), sugar, rice, palm oil, vegetables, ornamental plants, maize, and potatoes.
Livestock activity consists of cattle, pigs and horses, as well as poultry. Meat and dairy produce are leading exports according to one source, but neither was among the top 10 export categories of 2013.
The combined export value of forest products and textiles in 2013 did not exceed that of either chemical products or plastics.
Exports, jobs, and energy
Mere decades ago, Costa Rica was known principally as a producer of bananas and coffee. Even though bananas, pineapple, sugar, coffee, lumber, wood products and beef are still important exports, medical instruments, electronics, pharmaceuticals, financial outsourcing, software development, and ecotourism have become the prime exports. High levels of education and fluency in English among its residents make the country an attractive location for investment.
In 2015 the following were the major export products (US$): medical instruments ($2 billion), bananas ($1.24B), tropical fruits ($1.22B), integrated circuits ($841 million) and orthopedic appliances ($555M). The total exports in 2015 were US$12.6 billion, down from $18.9B in 2010; bananas and medical instruments were the two largest sectors. Total imports in 2015 were $15B, up from $13.8B in 2010; this resulted in a trade deficit.
Over the years, Costa Rica successfully attracted important investments by such companies as Intel Corporation, Procter & Gamble, Abbott Laboratories and Baxter Healthcare. Manufacturing and industry's contribution to GDP overtook agriculture over the course of the 1990s, led by foreign investment in Costa Rica's Free Trade Zones (FTZ) where companies benefit from investment and tax incentives. Companies in such zones must export at least 50% of their services. Well over half of that type of investment has come from the U.S. According to the government, the zones supported over 82 thousand direct jobs and 43 thousand indirect jobs in 2015; direct employment grew 5% over 2014. The average wages in the FTZ increased by 7% and were 1.8 times greater than the average for private enterprise work in the rest of the country. Companies with facilities in the America Free Zone in Heredia, for example, include Dell, HP, Bayer, Bosch, DHL, IBM and Okay Industries.
In 2006 Intel's microprocessor facility alone was responsible for 20% of Costa Rican exports and 4.9% of the country's GDP. In 2014, Intel announced it would end manufacturing in Costa Rica and lay off 1,500 staff but agreed to maintain at least 1,200 employees. The facility continued as a test and design center with approximately 1,600 remaining staff. In 2017, Intel had 2000 employees in the country, and was operating a facility which assembles, tests and distributes processors and a Global Innovation Center, both in Heredia.
The fastest growing aspect of the economy is the provision of corporate services for foreign companies which in 2016 employed approximately 54,000 people in a country with a workforce under 342,000; that was up from 52,400 the previous year. For example, Amazon.com employs some 5,000 people. Many work in the free-trade areas such as Zona Franca America and earn roughly double the national average for service work. This sector generated US$4.6 billion in 2016, nearly as much as tourism.
In 2013, the total FDI stock in Costa Rica amounted to about 40 percent of GDP, of which investments from the United States accounted for 64 percent, followed by the United Kingdom and Spain with 6 percent each. Costa Rica's outward foreign direct investment stock is small, at about 3 percent of GDP as of 2011, and mainly concentrated in Central America (about 57 percent of the total outward direct investment stock).
Tourism is an important part of the economy, with the number of visitors increasing from 780,000 in 1996, to 1 million in 1999, and to 2.089 million foreign visitors in 2008, allowing the country to earn $2.144-billion in that year. By 2016, 2.6 million tourists visited Costa Rica, spending roughly US$3.4 billion. Tourism directly supported 110,000 jobs and indirectly supported 271,000 in 2016.
Costa Rica has not discovered sources of fossil fuels—apart from minor coal deposits—but its mountainous terrain and abundant rainfall have permitted the construction of a dozen hydroelectric power plants, making it self-sufficient in all energy needs, except for refined petroleum. In 2017, Costa Rica was considering the export of electricity to neighbouring countries. Mild climate and trade winds make neither heating nor cooling necessary, particularly in the highland cities and towns where some 90% of the population lives.
Renewable energy in Costa Rica is the norm. In 2016, 98.1 per cent of the country's electricity came from green sources: hydro generating stations, geothermal plants, wind turbines, solar panels and biomass plants.
Infrastructure
Costa Rica's infrastructure has suffered from a lack of maintenance and new investment. The country has an extensive road system of more than 30,000 kilometers, although much of it is in disrepair; this also applies to ports, railways and water delivery systems. According to a 2016 U.S. government report, investment from China which attempted to improve the infrastructure found the "projects stalled by bureaucratic and legal concerns".
Most parts of the country are accessible by road. The main highland cities in the country's Central Valley are connected by paved all-weather roads with the Atlantic and Pacific coasts and by the Pan American Highway with Nicaragua and Panama, the neighboring countries to the north and the south. Costa Rica's ports are struggling to keep pace with growing trade. They have insufficient capacity, and their equipment is in poor condition. The railroad did not function for several years, until a recent government effort reactivated it for urban transportation. An August 2016 OECD report provided this summary: "The road network is extensive but of poor quality, railways are in disrepair and only slowly being reactivated after having been shut down in the 1990s, seaports quality and capacity are deficient. Internal transportation overly relies on private road vehicles as the public transport system, especially railways, is inadequate."
In a June 2017 interview, President Luis Guillermo Solís said that private sector investment would be required to solve the problems. "Of course Costa Rica’s infrastructure deficit is a challenge that outlasts any one government and I hope that we have created the foundations for future administrations to continue building. I have just enacted a law to facilitate Public Private Partnerships, which are the ideal way to develop projects that are too large for the government to undertake. For example the new airport that we are building to serve the capital city will cost $2 billion, so it will need private-sector involvement. There is also the potential for a ‘dry canal’ linking sea ports on our Atlantic and Caribbean Coasts that could need up to $16 billion of investment."
The government hopes to bring foreign investment, technology, and management into the telecommunications and electrical power sectors, which are monopolies of the state. ICE (Instituto Costarricense de Electricidad) has the monopoly on telecommunications, internet and electricity services. Some limited competition is allowed. In 2011, two new private companies began offering cellular phone service and others offer voice communication over internet connections (VOIP) for overseas calls.
According to transparency.org, Costa Rica had a reputation as one of the most stable, prosperous, and least corrupt countries in Latin America in 2007. However, in fall 2004, three former Costa Rican presidents, José María Figueres, Miguel Angel Rodríguez, and Rafael Angel Calderon, were investigated on corruption charges related to the issuance of government contracts. After extensive legal proceedings, Calderon and Rodriguez were sentenced; however, the inquiry into Figueres was dismissed and he was not charged.
More recently, Costa Rica reached 40th place in 2015, with a score of 55 on the Perception of Corruption scale; this is better than the global average. Countries with the lowest perceived corruption rated 90 on the scale. In late May 2017, Costa Rica applied to become a member of the OECD Anti-Bribery Convention, effective in July 2017.
Foreign trade
Costa Rica has sought to widen its economic and trade ties, both within and outside the region. Costa Rica signed a bilateral trade agreement with Mexico in 1994, which was later amended to cover a wider range of products. Costa Rica joined other Central American countries, plus the Dominican Republic, in establishing a Trade and Investment Council with the United States in March 1998, which later became the Dominican Republic–Central America Free Trade Agreement. Costa Rica has bilateral free trade agreements with the following countries and blocs, which took effect on the dates shown:
Canada (November 1, 2002)
Caribbean Community (CARICOM) (November 15, 2002)
Chile (February 15, 2002)
China (August 1, 2011).
Colombia (September 2016)
Dominican Republic (March 7, 2002)
El Salvador Customs union, (1963, re-launched on October 29, 1993)
European Free Trade Association (2013)
European Union (October 1, 2013)
Guatemala Customs union, (1963, re-launched on October 29, 1993)
Honduras Customs union, (1963, re-launched on October 29, 1993)
Mexico (January 1, 1995)
Nicaragua Customs union, (1963, re-launched on October 29, 1993)
Panama (July 31, 1973, renegotiated and expanded for January 1, 2009)
Perú (June 1, 2013)
United States (January 1, 2009, CAFTA-DR)
Singapore (April 6, 2010)
South Korea (March 18, 2019)
There are no significant trade barriers that would affect imports and the country has been lowering its tariffs in accordance with other Central American countries. Costa Rica also is a member of the Cairns Group, an organization of agricultural exporting countries that are seeking access to more markets to increase the exports of agricultural products. Opponents of free agricultural trade have sometimes attempted to block imports of products already grown in Costa Rica, including rice, potatoes, and onions. By 2015, Costa Rica's agricultural exports totalled US$2.7 billion.
In 2015, the top export destinations for all types of products were the United States (US$4.29 billion), Guatemala ($587 million), the Netherlands ($537 million), Panama ($535 million) and Nicaragua ($496 million). The top import origins were the United States ($6.06 billion), China ($1.92 billion), Mexico ($1.14 billion), Japan ($410 million) and Guatemala ($409 million). The most significant products imported were Refined Petroleum (8.41% of the total imports) and Automobiles (4.68%). Total imports in 2015 were US$15 billion, somewhat higher than the total exports of US$12.6 billion, for a negative trade balance of US$2.39 billion.
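The deficit figure above is simply the difference between total exports and total imports. A minimal sketch of that arithmetic using the rounded 2015 totals from the text (the quoted US$2.39 billion reflects unrounded totals):

```python
# Trade balance = exports - imports; a negative balance is a trade deficit.
exports_2015_usd = 12.6e9   # 2015 exports, from the text
imports_2015_usd = 15.0e9   # 2015 imports, from the text
balance = exports_2015_usd - imports_2015_usd
print(f"Trade balance: {balance / 1e9:+.2f} billion USD")  # about -2.40 billion (a deficit)
```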
Statistics
The following table shows the main economic indicators in 1980–2019 (with IMF staff estimates in 2020–2025). Inflation below 5% is in green.
GDP:
US$61.5 billion (2017 estimate)
GDP real growth rate:
4.3% (2017 estimate)
GDP per capita:
purchasing power parity: $12,382 (2017 estimate)
GDP composition by sector:
agriculture: 5.5% (2016 estimate) Bananas, pineapples, coffee, beef, sugarcane, rice, corn, dairy products, vegetables, timber, fruits and ornamental plants.
industry: 18.6% (2016 estimate) Electronic components, food processing, textiles and apparel, construction materials, cement, fertilizer.
services: 75.9% (2016 estimate) Hotels, restaurants, tourist services, banks, call centers and insurance.
Government bond ratings: (January 2017) Standard & Poor's: BB-; Moody's: Ba2
Budget deficit: 6.1 percent of the GDP
Population below poverty line:
20.5% (2017)
Household income or consumption by percentage share:
lowest 10%:
1.2%
highest 10%:
39.5% (2009 est.)
Inflation rate (consumer prices):
2.6% (2017 estimate)
Labor force:
2.295 million (2016) Note: 15 and older, excluding Nicaraguans living in the country
Labor force by occupation:
agriculture 12.9%, industry 18.57%, services 69.02% (2016)
Unemployment rate:
8.1% (2017 estimate)
Budget: US$15.9 billion (2017 proposed) Note: 46% will require financing
Industries:
microprocessors, food processing, textiles and clothing, construction materials, fertilizer, plastic products
Industrial production growth rate:
4.3% (2013)
Electricity production:
9.473 billion kWh (2010)
Electricity production by source: 98.1% from "green sources" (2016)
Agriculture products:
bananas, pineapples, other tropical fruits, coffee, palm oil, sugar, corn, rice, beans, potatoes, beef, timber
Exports: US$12.6 billion (2015)
Major export commodities: Medical Instruments ($2B), Bananas ($1.24B), Tropical Fruits ($1.22B), Integrated Circuits ($841M) and Orthopedic Appliances ($555M).
Export partners (2016): United States ($4.29B), Guatemala ($587M), the Netherlands ($537M), Panama ($535M), Nicaragua ($496M)
Imports:
US $15.1 billion (2015)
Major import commodities: Refined Petroleum ($1.26B), Cars ($702M), Packaged Medicaments ($455M), Broadcasting Equipment ($374M) and Computers ($281M).
Origin of imports (2016): United States ($6.06B), China ($1.92B), Mexico ($1.14B), Japan ($410M) and Guatemala ($409M).
External debt:
US$26.2 billion (January 2016)
Economic aid – recipient:
$107.1 million (1995)
Currency:
1 Costa Rican colon (₡) = 100 centimos
Exchange rates:
Costa Rican colones (₡) per US$1 – 526.46 (March 27, 2015), US$1 – 600 (late May 2017), US$1 – 563 (end of July 2017), US$1 – 677 (May 2022)
Fiscal year:
January 1 – December 31
External links
Costa Rica Exports, Imports and Trade Balance World Bank
Tariffs applied by Costa Rica as provided by ITC's Market Access Map, an online database of customs tariffs and market requirements.
References
Economy of Costa Rica
OECD member economies |
5573 | https://en.wikipedia.org/wiki/Croatia | Croatia | Croatia (, ; , ), officially the Republic of Croatia ( ), is a country located at the crossroads of Central and Southern Europe. Its coast lies entirely on the Adriatic Sea. It borders Slovenia to the northwest, Hungary to the northeast, Serbia to the east, Bosnia and Herzegovina and Montenegro to the southeast, and shares a maritime border with Italy to the west and southwest. Its capital and largest city, Zagreb, forms one of the country's primary subdivisions, with twenty counties. The country spans , and has a population of nearly 3.9 million.
The Croats arrived in modern-day Croatia in the late 6th century, then part of Roman Illyria. By the 7th century, they had organized the territory into two duchies. Croatia was first internationally recognized as independent on 7 June 879 during the reign of Duke Branimir. Tomislav became the first king by 925, elevating Croatia to the status of a kingdom. During the succession crisis after the Trpimirović dynasty ended, Croatia entered a personal union with Hungary in 1102. In 1527, faced with Ottoman conquest, the Croatian Parliament elected Ferdinand I of Austria to the Croatian throne. In October 1918, the State of Slovenes, Croats, and Serbs, independent from Austria-Hungary, was proclaimed in Zagreb, and in December 1918, it merged into the Kingdom of Yugoslavia. Following the Axis invasion of Yugoslavia in April 1941, most of Croatia was incorporated into a Nazi-installed puppet state, the Independent State of Croatia. A resistance movement led to the creation of the Socialist Republic of Croatia, which after the war became a founding member and constituent of the Socialist Federal Republic of Yugoslavia. On 25 June 1991, Croatia declared independence, and the War of Independence was successfully fought over the next four years.
Croatia is a republic and a parliamentary liberal democracy. It is a member of the European Union, the Eurozone, the Schengen Area, NATO, the United Nations, the Council of Europe, the OSCE, the World Trade Organization, a founding member of the Union for the Mediterranean, and is currently in the process of joining the OECD. An active participant in United Nations peacekeeping, Croatia contributed troops to the International Security Assistance Force and was elected to fill a non-permanent seat on the United Nations Security Council in the 2008–2009 term for the first time.
Croatia is a developed country with an advanced high-income economy and ranks 40th in the Human Development Index. According to the Gini coefficient, it also ranks among the top 20 countries with the lowest income inequality in the world. Service, industrial sectors, and agriculture dominate the economy. Tourism is a significant source of revenue for the country, which is ranked among the top 20 most popular tourist destinations in the world. Since 2000s, the Croatian government has heavily invested in infrastructure, especially transport routes and facilities along the Pan-European corridors. Croatia has also positioned itself as a regional energy leader in the early 2020s and is contributing to the diversification of Europe's energy supply via its floating liquefied natural gas import terminal off Krk island, LNG Hrvatska. Croatia provides social security, universal health care, and tuition-free primary and secondary education while supporting culture through public institutions and corporate investments in media and publishing.
Etymology
Croatia's non-native name derives from Medieval Latin , itself a derivation of North-West Slavic , by liquid metathesis from Common Slavic period *Xorvat, from proposed Proto-Slavic *Xъrvátъ which possibly comes from the 3rd-century Scytho-Sarmatian form attested in the Tanais Tablets as (, alternate forms comprise and ). The origin of the ethnonym is uncertain, but most probably is from Proto-Ossetian / Alanian *xurvæt- or *xurvāt-, in the meaning of "one who guards" ("guardian, protector").
The oldest preserved record of the Croatian ethnonym's native variation *xъrvatъ is of the variable stem, attested in the Baška tablet in style zvъnъmirъ kralъ xrъvatъskъ ("Zvonimir, Croatian king"), while the Latin variation Croatorum is archaeologically confirmed on a church inscription found in Bijaći near Trogir dated to the end of the 8th or early 9th century. The presumably oldest stone inscription with fully preserved ethnonym is the 9th-century Branimir inscription found near Benkovac, where Duke Branimir is styled Dux Cruatorvm, likely dated between 879 and 892, during his rule. The Latin term is attributed to a charter of Duke Trpimir I of Croatia, dated to 852 in a 1568 copy of a lost original, but it is not certain if the original was indeed older than the Branimir inscription.
History
Prehistory
The area known as Croatia today was inhabited throughout the prehistoric period. Neanderthal fossils dating to the middle Palaeolithic period were unearthed in northern Croatia, best presented at the Krapina site. Remnants of Neolithic and Chalcolithic cultures were found in all regions. The largest proportion of sites is in the valleys of northern Croatia. The most significant are Baden, Starčevo, and Vučedol cultures. Iron Age hosted the early Illyrian Hallstatt culture and the Celtic La Tène culture.
Antiquity
The region of modern day Croatia was settled by Illyrians and Liburnians, while the first Greek colonies were established on the islands of Hvar, Korčula, and Vis. In 9 AD, the territory of today's Croatia became part of the Roman Empire. Emperor Diocletian was native to the region. He had a large palace built in Split, to which he retired after abdicating in AD 305.
Middle Ages
During the 5th century, the last de jure Western Roman Emperor Julius Nepos ruled a small realm from the palace after fleeing Italy in 475. The period ends with Avar and Croat invasions in the late 6th and first half of the 7th century and the destruction of almost all Roman towns. Roman survivors retreated to more favourable sites on the coast, islands, and mountains. The city of Dubrovnik was founded by such survivors from Epidaurum.
The ethnogenesis of Croats is uncertain. The most accepted theory, the Slavic theory, proposes migration of White Croats from White Croatia during the Migration Period. Conversely, the Iranian theory proposes Iranian origin, based on Tanais Tablets containing Ancient Greek inscriptions of given names Χορούαθος, Χοροάθος, and Χορόαθος (Khoroúathos, Khoroáthos, and Khoróathos) and their interpretation as anthroponyms of Croatian people.
According to the work De Administrando Imperio written by 10th-century Byzantine Emperor Constantine VII, Croats arrived in the Roman province of Dalmatia in the first half of the 7th century after they defeated the Avars. However, that claim is disputed: competing hypotheses date the event between the late 6th-early 7th (mainstream) or the late 8th-early 9th (fringe) centuries, but recent archaeological data has established that the migration and settlement of the Slavs/Croats was in the late 6th and early 7th century. Eventually, a dukedom was formed, Duchy of Croatia, ruled by Borna, as attested by chronicles of Einhard starting in 818. The record represents the first document of Croatian realms, vassal states of Francia at the time. Its neighbor to the North was Principality of Lower Pannonia, at the time ruled by duke Ljudevit who ruled the territories between the Drava and Sava rivers, centred from his fort at Sisak. This population and territory throughout history was tightly related and connected to Croats and Croatia.
According to Constantine VII the Christianisation of Croats began in the 7th century, but the claim is disputed, and generally, Christianisation is associated with the 9th century. It is assumed that Christianisation initially encompassed only the elite and related people. The Frankish overlordship ended during the reign of Mislav, or his successor Trpimir I. The native Croatian royal dynasty was founded by Duke Trpimir I in the mid-9th century; he defeated Byzantine and Bulgarian forces.
The first native Croatian ruler recognised by the Pope was duke Branimir, who received papal recognition from Pope John VIII on 7 June 879.
Tomislav was the first king of Croatia, noted as such in a letter of Pope John X in 925. Tomislav defeated Hungarian and Bulgarian invasions. The medieval Croatian kingdom reached its peak in the 11th century during the reigns of Petar Krešimir IV (1058–1074) and Dmitar Zvonimir (1075–1089). When Stjepan II died in 1091, ending the Trpimirović dynasty, Dmitar Zvonimir's brother-in-law Ladislaus I of Hungary claimed the Croatian crown. This led to a war and personal union with Hungary in 1102 under Coloman.
Personal union with Hungary (1102) and Habsburg Monarchy (1527)
For the next four centuries, the Kingdom of Croatia was ruled by the Sabor (parliament) and a Ban (viceroy) appointed by the king. This period saw the rise of influential nobility such as the Frankopan and Šubić families to prominence, and ultimately numerous Bans from the two families. An increasing threat of Ottoman conquest and a struggle against the Republic of Venice for control of coastal areas ensued. The Venetians controlled most of Dalmatia by 1428, except the city-state of Dubrovnik, which became independent. Ottoman conquests led to the 1493 Battle of Krbava field and the 1526 Battle of Mohács, both ending in decisive Ottoman victories. King Louis II died at Mohács, and in 1527, the Croatian Parliament met in Cetin and chose Ferdinand I of the House of Habsburg as the new ruler of Croatia, under the condition that he protects Croatia against the Ottoman Empire while respecting its political rights.
Following the decisive Ottoman victories, Croatia was split into civilian and military territories in 1538. The military territories became known as the Croatian Military Frontier and were under direct Habsburg control. Ottoman advances in Croatia continued until the 1593 Battle of Sisak, the first decisive Ottoman defeat, when borders stabilised. During the Great Turkish War (1683–1698), Slavonia was regained, but western Bosnia, which had been part of Croatia before the Ottoman conquest, remained outside Croatian control. The present-day border between the two countries is a remnant of this outcome. Dalmatia, the southern part of the border, was similarly defined by the Fifth and the Seventh Ottoman–Venetian Wars.
The Ottoman wars drove demographic changes. During the 16th century, Croats from western and northern Bosnia, Lika, Krbava, the area between the rivers of Una and Kupa, and especially from western Slavonia, migrated towards Austria. Present-day Burgenland Croats are direct descendants of these settlers. To replace the fleeing population, the Habsburgs encouraged Bosnians to provide military service in the Military Frontier.
The Croatian Parliament supported King Charles III's Pragmatic Sanction and signed their own Pragmatic Sanction in 1712. Subsequently, the emperor pledged to respect all privileges and political rights of the Kingdom of Croatia, and Queen Maria Theresa made significant contributions to Croatian affairs, such as introducing compulsory education.
Between 1797 and 1809, the First French Empire increasingly occupied the eastern Adriatic coastline and its hinterland, ending the Venetian and the Ragusan republics, establishing the Illyrian Provinces. In response, the Royal Navy blockaded the Adriatic Sea, leading to the Battle of Vis in 1811. The Illyrian provinces were captured by the Austrians in 1813 and absorbed by the Austrian Empire following the Congress of Vienna in 1815. This led to the formation of the Kingdom of Dalmatia and the restoration of the Croatian Littoral to the Kingdom of Croatia under one crown. The 1830s and 1840s featured romantic nationalism that inspired the Croatian National Revival, a political and cultural campaign advocating the unity of South Slavs within the empire. Its primary focus was establishing a standard language as a counterweight to Hungarian while promoting Croatian literature and culture. During the Hungarian Revolution of 1848, Croatia sided with Austria. Ban Josip Jelačić helped defeat the Hungarians in 1849 and ushered in a Germanisation policy.
By the 1860s, the failure of the policy became apparent, leading to the Austro-Hungarian Compromise of 1867. The creation of a personal union between the Austrian Empire and the Kingdom of Hungary followed. The treaty left Croatia's status to Hungary, which was resolved by the Croatian–Hungarian Settlement of 1868 when the kingdoms of Croatia and Slavonia were united. The Kingdom of Dalmatia remained under de facto Austrian control, while Rijeka retained the status of corpus separatum introduced in 1779.
After Austria-Hungary occupied Bosnia and Herzegovina following the 1878 Treaty of Berlin, the Military Frontier was abolished. The Croatian and Slavonian sectors of the Frontier returned to Croatia in 1881, under provisions of the Croatian–Hungarian Settlement. Renewed efforts to reform Austria-Hungary, entailing federalisation with Croatia as a federal unit, were stopped by World War I.
First Yugoslavia (1918–1941)
On 29 October 1918 the Croatian Parliament (Sabor) declared independence and decided to join the newly formed State of Slovenes, Croats, and Serbs, which in turn entered into union with the Kingdom of Serbia on 4 December 1918 to form the Kingdom of Serbs, Croats, and Slovenes. The Croatian Parliament never ratified the union with Serbia and Montenegro. The 1921 constitution defining the country as a unitary state and abolition of Croatian Parliament and historical administrative divisions effectively ended Croatian autonomy.
The new constitution was opposed by the most widely supported national political party—the Croatian Peasant Party (HSS) led by Stjepan Radić.
The political situation deteriorated further as Radić was assassinated in the National Assembly in 1928, leading to King Alexander I to establish a dictatorship in January 1929. The dictatorship formally ended in 1931 when the king imposed a more unitary constitution. The HSS, now led by Vladko Maček, continued to advocate federalisation, resulting in the Cvetković–Maček Agreement of August 1939 and the autonomous Banovina of Croatia. The Yugoslav government retained control of defence, internal security, foreign affairs, trade, and transport while other matters were left to the Croatian Sabor and a crown-appointed Ban.
World War II and Independent State of Croatia
In April 1941, Yugoslavia was occupied by Nazi Germany and Fascist Italy. Following the invasion, a German-Italian installed puppet state named the Independent State of Croatia (NDH) was established. Most of Croatia, Bosnia and Herzegovina, and the region of Syrmia were incorporated into this state. Parts of Dalmatia were annexed by Italy, Hungary annexed the northern Croatian regions of Baranja and Međimurje. The NDH regime was led by Ante Pavelić and ultranationalist Ustaše, a fringe movement in pre-war Croatia. With German and Italian military and political support, the regime introduced racial laws and launched a genocide campaign against Serbs, Jews, and Roma. Many were imprisoned in concentration camps; the largest was the Jasenovac complex. Anti-fascist Croats were targeted by the regime as well. Several concentration camps (most notably the Rab, Gonars and Molat camps) were established in Italian-occupied territories, mostly for Slovenes and Croats. At the same time, the Yugoslav Royalist and Serbian nationalist Chetniks pursued a genocidal campaign against Croats and Muslims, aided by Italy. Nazi German forces committed crimes and reprisals against civilians in retaliation for Partisan actions, such as in the villages of Kamešnica and Lipa in 1944.
A resistance movement emerged. On 22 June 1941, the 1st Sisak Partisan Detachment was formed near Sisak, the first military unit formed by a resistance movement in occupied Europe. That sparked the beginning of the Yugoslav Partisan movement, a communist, multi-ethnic anti-fascist resistance group led by Josip Broz Tito. In ethnic terms, Croats were the second-largest contributors to the Partisan movement after Serbs. In per capita terms, Croats contributed proportionately to their population within Yugoslavia. By May 1944 (according to Tito), Croats made up 30% of the Partisan's ethnic composition, despite making up 22% of the population. The movement grew fast, and at the Tehran Conference in December 1943, the Partisans gained recognition from the Allies.
With Allied support in logistics, equipment, training and airpower, and with the assistance of Soviet troops taking part in the 1944 Belgrade Offensive, the Partisans gained control of Yugoslavia and the border regions of Italy and Austria by May 1945. Members of the NDH armed forces and other Axis troops, as well as civilians, were in retreat towards Austria. Following their surrender, many were killed in the Yugoslav death march of Nazi collaborators. In the following years, ethnic Germans faced persecution in Yugoslavia, and many were interned.
The political aspirations of the Partisan movement were reflected in the State Anti-fascist Council for the National Liberation of Croatia, which developed in 1943 as the bearer of Croatian statehood and later transformed into the Parliament in 1945, and AVNOJ—its counterpart at the Yugoslav level.
Based on the studies on wartime and post-war casualties by demographer Vladimir Žerjavić and statistician Bogoljub Kočović, a total of 295,000 people from the territory (not including territories ceded from Italy after the war) died, which amounted to 7.3% of the population, among whom were 125–137,000 Serbs, 118–124,000 Croats, 16–17,000 Jews, and 15,000 Roma. In addition, from areas joined to Croatia after the war, a total of 32,000 people died, among whom 16,000 were Italians and 15,000 were Croats. Approximately 200,000 Croats from the entirety of Yugoslavia (including Croatia) and abroad were killed in total throughout the war and its immediate aftermath, approximately 5.4% of the population.
Second Yugoslavia (1945–1991)
After World War II, Croatia became a single-party socialist federal unit of the SFR Yugoslavia, ruled by the Communists, but having a degree of autonomy within the federation. In 1967, Croatian authors and linguists published a Declaration on the Status and Name of the Croatian Standard Language demanding equal treatment for their language.
The declaration contributed to a national movement seeking greater civil rights and redistribution of the Yugoslav economy, culminating in the Croatian Spring of 1971, which was suppressed by Yugoslav leadership. Still, the 1974 Yugoslav Constitution gave increased autonomy to federal units, basically fulfilling a goal of the Croatian Spring and providing a legal basis for independence of the federative constituents.
Following Tito's death in 1980, the political situation in Yugoslavia deteriorated. National tension was fanned by the 1986 SANU Memorandum and the 1989 coups in Vojvodina, Kosovo, and Montenegro. In January 1990, the Communist Party fragmented along national lines, with the Croatian faction demanding a looser federation. In the same year, the first multi-party elections were held in Croatia, and Franjo Tuđman's win further exacerbated nationalist tensions. Some of the Serbs in Croatia left the Sabor and declared the autonomy of the unrecognised Republic of Serbian Krajina, intent on achieving independence from Croatia.
Croatian War of Independence
As tensions rose, Croatia declared independence on 25 June 1991. However, the declaration came into full effect only on 8 October 1991, after a three-month moratorium on the decision. In the meantime, tensions escalated into open war when the Serbian-controlled Yugoslav People's Army (JNA) and various Serb paramilitary groups attacked Croatia.
By the end of 1991, a high-intensity conflict fought along a wide front reduced Croatia's control to about two-thirds of its territory. Serb paramilitary groups then began a campaign of killing, terror, and expulsion of the Croats in the rebel territories, killing thousands of Croat civilians and expelling or displacing as many as 400,000 Croats and other non-Serbs from their homes. Serbs living in Croatian towns, especially those near the front lines, were subjected to various forms of discrimination. Croatian Serbs in Eastern and Western Slavonia and parts of the Krajina were forced to flee or were expelled by Croatian forces, though on a restricted scale and in lesser numbers. The Croatian Government publicly deplored these practices and sought to stop them, indicating that they were not a part of the Government's policy.
On 15 January 1992, Croatia gained diplomatic recognition by the European Economic Community, followed by the United Nations. The war effectively ended in August 1995 with a decisive victory by Croatia; the event is commemorated each year on 5 August as Victory and Homeland Thanksgiving Day and the Day of Croatian Defenders. Following the Croatian victory, about 200,000 Serbs from the self-proclaimed Republic of Serbian Krajina fled the region and hundreds of mainly elderly Serb civilians were killed in the aftermath of the military operation. Their lands were subsequently settled by Croat refugees from Bosnia and Herzegovina. The remaining occupied areas were restored to Croatia following the Erdut Agreement of November 1995, concluding with the UNTAES mission in January 1998. Most sources number the war deaths at around 20,000.
Independent Croatia (1991–present)
After the end of the war, Croatia faced the challenges of post-war reconstruction, the return of refugees, establishing democracy, protecting human rights, and general social and economic development.
The 2000s were characterised by democratisation, economic growth, and structural and social reforms, as well as by problems such as unemployment, corruption, and the inefficiency of public administration. In November 2000 and March 2001, the Parliament amended the Constitution, first adopted on 22 December 1990, changing its bicameral structure back into its historic unicameral form and reducing presidential powers.
Croatia joined the Partnership for Peace on 25 May 2000 and became a member of the World Trade Organization on 30 November 2000. On 29 October 2001, Croatia signed a Stabilisation and Association Agreement with the European Union, submitted a formal application for EU membership in 2003, was granted candidate country status in 2004, and began accession negotiations in 2005. Although the Croatian economy enjoyed a significant boom in the early 2000s, the 2008 financial crisis forced the government to cut spending, provoking a public outcry.
Croatia served on the United Nations Security Council in the 2008–2009 term for the first time, assuming the non-permanent seat in December 2008. On 1 April 2009, Croatia joined NATO.
A wave of anti-government protests in 2011 reflected general dissatisfaction with the political and economic situation at the time. The protests brought together diverse political persuasions in response to government corruption scandals and called for early elections. On 28 October 2011, MPs voted to dissolve Parliament, and the protests gradually subsided. President Ivo Josipović agreed to the dissolution of the Sabor on 31 October and scheduled new elections for 4 December 2011.
On 30 June 2011, Croatia successfully completed EU accession negotiations. The country signed the Accession Treaty on 9 December 2011 and held a referendum on 22 January 2012, in which Croatian citizens voted in favour of EU membership. Croatia joined the European Union on 1 July 2013.
Croatia was affected by the 2015 European migrant crisis when Hungary's closure of borders with Serbia pushed over 700,000 refugees and migrants to pass through Croatia on their way to other EU countries.
On 25 January 2022, the OECD Council decided to open accession negotiations with Croatia. Throughout the accession process, Croatia is to implement numerous reforms advancing all spheres of activity – from public services and the justice system to education, transport, finance, health, and trade. In line with the OECD Accession Roadmap from June 2022, Croatia is undergoing technical reviews by 25 OECD committees and has so far progressed faster than expected. Full membership is expected in 2025 and remains the last major foreign policy goal Croatia has yet to achieve.
On 1 January 2023 Croatia adopted the euro as its official currency, replacing the kuna, and became the 20th Eurozone member. On the same day, Croatia became the 27th member of the border-free Schengen Area, thus marking its full EU integration.
Andrej Plenković has served as Croatian Prime Minister since 19 October 2016. In the most recent presidential election, held on 5 January 2020, Zoran Milanović was elected president.
Geography
Croatia is situated in Central and Southeast Europe, on the coast of the Adriatic Sea. Hungary is to the northeast, Serbia to the east, Bosnia and Herzegovina and Montenegro to the southeast and Slovenia to the northwest. It lies mostly between latitudes 42° and 47° N and longitudes 13° and 20° E. Part of the territory in the extreme south surrounding Dubrovnik is a practical exclave connected to the rest of the mainland by territorial waters, but separated on land by a short coastline strip belonging to Bosnia and Herzegovina around Neum. The Pelješac Bridge connects the exclave with mainland Croatia.
The territory covers 56,594 square kilometres, consisting of 55,974 square kilometres of land and 620 square kilometres of water. It is the world's 127th largest country. Elevation ranges from the mountains of the Dinaric Alps, with the highest point at the Dinara peak (1,831 metres) near the border with Bosnia and Herzegovina in the south, to the shore of the Adriatic Sea, which makes up its entire southwest border. Insular Croatia consists of over a thousand islands and islets varying in size, 48 of which are permanently inhabited. The largest islands are Cres and Krk, each with an area of around 405 square kilometres.
The hilly northern parts of Hrvatsko Zagorje and the flat plains of Slavonia in the east, which is part of the Pannonian Basin, are traversed by major rivers such as the Danube, Drava, Kupa, and Sava. The Danube, Europe's second-longest river, runs through the city of Vukovar in the extreme east and forms part of the border with Vojvodina. The central and southern regions near the Adriatic coastline and islands consist of low mountains and forested highlands. Natural resources found in quantities significant enough for production include oil, coal, bauxite, low-grade iron ore, calcium, gypsum, natural asphalt, silica, mica, clays, salt, and hydropower. Karst topography makes up about half of Croatia and is especially prominent in the Dinaric Alps. Croatia hosts deep caves, 49 of which are deeper than 250 metres, 14 deeper than 500 metres and three deeper than 1,000 metres. Croatia's most famous lakes are the Plitvice Lakes, a system of 16 lakes with waterfalls connecting them over dolomite and limestone cascades. The lakes are renowned for their distinctive colours, ranging from turquoise to mint green, grey or blue.
Climate
Most of Croatia has a moderately warm and rainy continental climate as defined by the Köppen climate classification. Mean monthly temperatures range between −3 °C in January and 18 °C in July. The coldest parts of the country are Lika and Gorski Kotar, featuring a snowy, forested climate at elevations above 1,200 metres. The warmest areas are on the Adriatic coast and especially in its immediate hinterland, characterised by a Mediterranean climate, as the sea moderates temperature highs. Consequently, temperature peaks are more pronounced in continental areas.
The lowest temperature of −35.5 °C was recorded on 3 February 1919 in Čakovec, and the highest temperature of 42.8 °C was recorded on 4 August 1981 in Ploče.
Mean annual precipitation ranges between 600 and 3,500 millimetres depending on geographic region and climate type. The least precipitation is recorded in the outer islands (Biševo, Lastovo, Svetac, Vis) and the eastern parts of Slavonia; in the latter case, however, rain occurs mostly during the growing season. The maximum precipitation levels are observed on the Dinara mountain range and in Gorski Kotar.
Prevailing winds in the interior are light to moderate northeast or southwest, and in the coastal area, prevailing winds are determined by local features. Higher wind velocities are more often recorded in cooler months along the coast, generally as the cool northeasterly bura or less frequently as the warm southerly jugo. The sunniest parts are the outer islands, Hvar and Korčula, where more than 2700 hours of sunshine are recorded per year, followed by the middle and southern Adriatic Sea area in general, and northern Adriatic coast, all with more than 2000 hours of sunshine per year.
Biodiversity
Croatia can be subdivided into ecoregions based on climate and geomorphology. The country is one of the richest in Europe in terms of biodiversity. Croatia has four types of biogeographical regions: the Mediterranean along the coast and in its immediate hinterland, the Alpine in most of Lika and Gorski Kotar, the Pannonian along the Drava and Danube, and the Continental in the remaining areas. The most significant habitats are karst habitats, which include submerged karst, such as the Zrmanja and Krka canyons and tufa barriers, as well as underground habitats.
The karst geology harbours approximately 7,000 caves and pits, some of which are the habitat of the only known aquatic cave vertebrate, the olm. Forests are significant, covering 44% of Croatia's land area. Other habitat types include wetlands, grasslands, bogs, fens, scrub habitats, and coastal and marine habitats.
In terms of phytogeography, Croatia is a part of the Boreal Kingdom and is a part of Illyrian and Central European provinces of the Circumboreal Region and the Adriatic province of the Mediterranean Region. The World Wide Fund for Nature divides Croatia between three ecoregions—Pannonian mixed forests, Dinaric Mountains mixed forests and Illyrian deciduous forests.
Croatia hosts 37,000 known plant and animal species, but their actual number is estimated to be between 50,000 and 100,000. More than a thousand species are endemic, especially in the Velebit and Biokovo mountains, on the Adriatic islands and along karst rivers. Legislation protects 1,131 species. The most serious threats are habitat loss and degradation. A further problem is presented by invasive alien species, especially the alga Caulerpa taxifolia. Croatia had a 2018 Forest Landscape Integrity Index mean score of 4.92/10, ranking it 113th of 172 countries.
Invasive algae are regularly monitored and removed to protect benthic habitats. Indigenous cultivated plant strains and domesticated animal breeds are numerous; they include five breeds of horses, five of cattle, eight of sheep, two of pigs, and one of poultry. Nine indigenous breeds are endangered or critically endangered. Croatia has 444 protected areas, encompassing 9% of the country, including eight national parks, two strict reserves, and ten nature parks. The most famous protected area and the oldest national park in Croatia is Plitvice Lakes National Park, a UNESCO World Heritage Site. Velebit Nature Park is part of the UNESCO Man and the Biosphere Programme. The strict and special reserves, as well as the national and nature parks, are managed and protected by the central government, while other protected areas are managed by counties. In 2005, the National Ecological Network was set up as a first step in preparation for EU accession and joining the Natura 2000 network.
Governance
The Republic of Croatia is a unitary, constitutional state with a parliamentary system of government. Governmental power is divided among the legislative, executive, and judicial branches.
The president of the republic is the head of state, directly elected to a five-year term and limited by the Constitution to two terms. In addition to serving as commander-in-chief of the armed forces, the president has the procedural duty of appointing the prime minister with the consent of the parliament and has some influence on foreign policy.
The Government is headed by the prime minister, who has four deputy prime ministers and 16 ministers in charge of particular sectors. As the executive branch, it is responsible for proposing legislation and a budget, enforcing the laws, and guiding foreign and internal policies. The Government is seated at Banski dvori in Zagreb.
Law and judicial system
A unicameral parliament (Sabor) holds legislative power. The number of Sabor members can vary from 100 to 160; they are elected by popular vote to serve four-year terms. Legislative sessions take place from 15 January to 15 July and from 15 September to 15 December each year. The two largest political parties in Croatia are the Croatian Democratic Union and the Social Democratic Party of Croatia.
Croatia has a civil law legal system in which law arises primarily from written statutes, with judges serving as implementers and not creators of law. Its development was largely influenced by German and Austrian legal systems. Croatian law is divided into two principal areas—private and public law. Before EU accession negotiations were completed, Croatian legislation had been fully harmonised with the Community acquis.
The main national courts are the Constitutional Court, which oversees violations of the Constitution, and the Supreme Court, which is the highest court of appeal. Administrative, Commercial, County, Misdemeanour, and Municipal courts handle cases in their respective domains. Cases falling within judicial jurisdiction are decided in the first instance by a single professional judge, while appeals are deliberated by mixed tribunals of professional judges. Lay magistrates also participate in trials. The State Attorney's Office is the judicial body composed of public prosecutors empowered to instigate prosecution of perpetrators of offences.
Law enforcement agencies are organised under the authority of the Ministry of the Interior, which consists primarily of the national police force. Croatia's security service is the Security and Intelligence Agency (SOA).
Foreign relations
Croatia has established diplomatic relations with 194 countries, supporting 57 embassies, 30 consulates and eight permanent diplomatic missions. In turn, 56 foreign embassies and 67 consulates operate in the country, in addition to offices of international organisations such as the European Bank for Reconstruction and Development (EBRD), International Organization for Migration (IOM), Organization for Security and Co-operation in Europe (OSCE), World Bank, World Health Organization (WHO), International Criminal Tribunal for the former Yugoslavia (ICTY), United Nations Development Programme (UNDP), United Nations High Commissioner for Refugees (UNHCR), and UNICEF.
As of 2019, the Croatian Ministry of Foreign Affairs and European Integration employed 1,381 personnel and expended 765.295 million kunas (€101.17 million). Stated aims of Croatian foreign policy include enhancing relations with neighbouring countries, developing international co-operation and promotion of the Croatian economy and Croatia itself.
Croatia is a member of the European Union. As of 2021, Croatia had unresolved border issues with Bosnia and Herzegovina, Montenegro, Serbia, and Slovenia. Croatia is also a member of NATO. On 1 January 2023, Croatia simultaneously joined both the Schengen Area and the Eurozone, having previously joined ERM II on 10 July 2020.
Croatian diaspora
The Croatian diaspora consists of communities of ethnic Croats and Croatian citizens living outside Croatia. Croatia maintains intensive contact with Croatian communities abroad, for example through administrative and financial support of cultural and sports activities and economic initiatives, and actively uses its foreign relations to strengthen and guarantee the rights of Croatian minorities in various host countries.
Military
The Croatian Armed Forces (CAF) consist of the Air Force, Army, and Navy branches in addition to the Education and Training Command and Support Command. The CAF is headed by the General Staff, which reports to the defence minister, who in turn reports to the president. According to the constitution, the president is the commander-in-chief of the armed forces. In case of immediate threat during wartime, he issues orders directly to the General Staff.
Following the 1991–95 war, defence spending and the size of the CAF began a constant decline. Military spending was estimated at 1.68% of the country's GDP, ranking 67th globally. In 2005 the budget fell below the NATO-required 2% of GDP, down from a record high of 11.1% in 1994. Traditionally relying on conscripts, the CAF went through a period of reforms focused on downsizing, restructuring and professionalisation in the years before accession to NATO in April 2009. According to a presidential decree issued in 2006, the CAF employed around 18,100 active-duty military personnel, 3,000 civilians and 2,000 voluntary conscripts between 18 and 30 years old in peacetime.
Compulsory conscription was abolished in January 2008. Until 2008 military service was obligatory for men at age 18 and conscripts served six-month tours of duty, reduced in 2001 from the earlier scheme of nine months. Conscientious objectors could instead opt for eight months of civilian service.
The Croatian military had 72 members stationed in foreign countries as part of United Nations-led international peacekeeping forces, with 323 troops serving in the NATO-led ISAF force in Afghanistan and another 156 with KFOR in Kosovo.
Croatia has a military-industrial sector that exported around 493 million kunas (€65.176 million) worth of military equipment in 2020. Croatian-made weapons and vehicles used by the CAF include the standard sidearm HS2000, manufactured by HS Produkt, and the M-84D battle tank designed by the Đuro Đaković factory. Uniforms and helmets worn by CAF soldiers are locally produced and marketed to other countries.
Administrative divisions
Croatia was first divided into counties in the Middle Ages. The divisions changed over time to reflect losses of territory to Ottoman conquest and the subsequent liberation of the same territory, as well as changes in the political status of Dalmatia, Dubrovnik, and Istria. The traditional division of the country into counties was abolished in the 1920s, when the Kingdom of Serbs, Croats and Slovenes and the subsequent Kingdom of Yugoslavia introduced oblasts and banovinas respectively.
In 1918, the Transleithanian part of Croatia was divided into eight counties with their seats in Bjelovar, Gospić, Ogulin, Osijek, Požega, Varaždin, Vukovar, and Zagreb. Communist-ruled Croatia, as a constituent part of post-World War II Yugoslavia, abolished these earlier divisions and introduced municipalities, subdividing Croatia into approximately one hundred municipalities. Counties were reintroduced by 1992 legislation, significantly altered in terms of territory relative to the pre-1920s subdivisions.
As of 1992, Croatia is divided into 20 counties and the capital city of Zagreb, the latter having the dual authority and legal status of a county and a city. County borders changed in some instances, last revised in 2006. The counties subdivide into 127 cities and 429 municipalities. Nomenclature of Territorial Units for Statistics (NUTS) division is performed in several tiers. NUTS 1 level considers the entire country in a single unit; three NUTS 2 regions come below that. Those are Northwest Croatia, Central and Eastern (Pannonian) Croatia, and Adriatic Croatia. The latter encompasses the counties along the Adriatic coast. Northwest Croatia includes Koprivnica-Križevci, Krapina-Zagorje, Međimurje, Varaždin, the city of Zagreb, and Zagreb counties and the Central and Eastern (Pannonian) Croatia includes the remaining areas—Bjelovar-Bilogora, Brod-Posavina, Karlovac, Osijek-Baranja, Požega-Slavonia, Sisak-Moslavina, Virovitica-Podravina, and Vukovar-Syrmia counties. Individual counties and the city of Zagreb also represent NUTS 3 level subdivision units in Croatia. The NUTS local administrative unit divisions are two-tiered. LAU 1 divisions match the counties and the city of Zagreb in effect making those the same as NUTS 3 units, while LAU 2 subdivisions correspond to cities and municipalities.
Economy
Croatia's economy qualifies as high-income. International Monetary Fund data projected that Croatian nominal GDP would reach $67.84 billion, or $17,398 per capita, in 2021, while purchasing power parity GDP was projected at $132.88 billion, or $32,942 per capita. According to Eurostat, Croatian GDP per capita in PPS stood at 65% of the EU average in 2019. Real GDP growth in 2022 was 6.2 per cent. The average net salary of a Croatian worker in October 2019 was 6,496 HRK per month (roughly 873 EUR), and the average gross salary was 8,813 HRK per month (roughly 1,185 EUR). The unemployment rate dropped to 7.2% from 9.6% in December 2018, with 106,703 persons registered as unemployed. The unemployment rate between 1996 and 2018 averaged 17.38%, reaching an all-time high of 23.60% in January 2002 and a record low of 8.40% in September 2018. In 2017, economic output was dominated by the service sector, accounting for 70.1% of GDP, followed by the industrial sector with 26.2% and agriculture with 3.7%.
According to 2017 data, 1.9% of the workforce were employed in agriculture, 27.3% by industry and 70.8% in services. Shipbuilding, food processing, pharmaceuticals, information technology, biochemical, and timber industry dominate the industrial sector. In 2018, Croatian exports were valued at 108 billion kunas (€14.61 billion) with 176 billion kunas (€23.82 billion) worth of imports. Croatia's largest trading partner was the rest of the European Union, led by Germany, Italy, and Slovenia.
As a result of the war, economic infrastructure sustained massive damage, particularly in the tourism industry. From 1989 to 1993, GDP fell by 40.5%. The Croatian state still controls significant parts of the economy, with government expenditure accounting for 40% of GDP. Particular concerns are the backlogged judiciary, inefficient public administration, and corruption, especially regarding land ownership. In the 2022 Corruption Perceptions Index, published by Transparency International, the country ranked 57th. At the end of June 2020, the national debt stood at 85.3% of GDP.
Tourism
Tourism dominates the Croatian service sector and accounts for up to 20% of GDP. Tourism income for 2019 was estimated to be €10.5 billion. Its positive effects are felt throughout the economy, increasing retail business, and increasing seasonal employment. The industry is counted as an export business because foreign visitor spending significantly reduces the country's trade imbalance. The tourist industry has rapidly grown, recording a fourfold rise in tourist numbers since independence, attracting more than 11 million visitors each year. Germany, Slovenia, Austria, Italy, Poland and Croatia itself provide the most visitors. Tourist stays averaged 4.7 days in 2019.
Much of the tourist industry is concentrated along the coast. Opatija was the first holiday resort. It first became popular in the middle of the 19th century. By the 1890s, it had become one of the largest European health resorts. Resorts sprang up along the coast and islands, offering services catering to mass tourism and various niche markets. The most significant are nautical tourism, supported by marinas with more than 16 thousand berths, cultural tourism relying on the appeal of medieval coastal cities and cultural events taking place during the summer. Inland areas offer agrotourism, mountain resorts, and spas. Zagreb is a significant destination, rivalling major coastal cities and resorts.
Croatia has unpolluted marine areas with nature reserves and 116 Blue Flag beaches. Croatia was ranked first in Europe for bathing water quality in 2022 by the European Environment Agency.
Croatia ranked as the 23rd-most popular tourist destination in the world according to the World Tourism Organization in 2019. About 15% of these visitors, or over one million per year, participate in naturism, for which Croatia is famous. It was the first European country to develop commercial naturist resorts. In 2023, luggage storage company Bounce gave Croatia the highest solo travel index in the world (7.58), while a joint Pinterest and Zola wedding trends report from 2023 put Croatia among the most popular honeymoon destinations.
Infrastructure
Transport
The motorway network was largely built in the late 1990s and the 2000s. As of December 2020, Croatia had completed more than 1,300 kilometres of motorways, connecting Zagreb to other regions and following various European routes and four Pan-European corridors. The busiest motorways are the A1, connecting Zagreb to Split, and the A3, passing east to west through northwest Croatia and Slavonia.
A widespread network of state roads in Croatia acts as motorway feeder roads while connecting major settlements. The high quality and safety levels of the Croatian motorway network were tested and confirmed by EuroTAP and EuroTest programmes.
Croatia has an extensive rail network that includes electrified lines and sections of double track. The most significant railways in Croatia lie within the Pan-European transport corridors Vb and X, connecting Rijeka to Budapest and Ljubljana to Belgrade, both via Zagreb. Croatian Railways operates all rail services.
The 2.4-kilometre-long Pelješac Bridge, the biggest infrastructure project in Croatia, connects the two halves of Dubrovnik-Neretva County and shortens the route from the west to the Pelješac peninsula and the islands of Korčula and Lastovo by more than 32 km. Its construction started in July 2018, after the Croatian road operator Hrvatske ceste (HC) signed a 2.08 billion kuna deal for the works with a Chinese consortium led by China Road and Bridge Corporation (CRBC). The project was co-financed by the European Union with 357 million euro. Construction was completed in July 2022.
There are international airports in Dubrovnik, Osijek, Pula, Rijeka, Split, Zadar, and Zagreb. The largest and busiest is Franjo Tuđman Airport in Zagreb. Croatia complies with International Civil Aviation Organization aviation safety standards, and the Federal Aviation Administration has upgraded it to a Category 1 rating.
Ports
The busiest cargo seaport is the Port of Rijeka. The busiest passenger ports are Split and Zadar. Many minor ports serve ferries connecting numerous islands and coastal cities with ferry lines to several cities in Italy. The largest river port is Vukovar, located on the Danube, representing the nation's outlet to the Pan-European transport corridor VII.
Energy
A network of crude oil pipelines serves Croatia, connecting the Rijeka oil terminal with refineries in Rijeka and Sisak, as well as several transhipment terminals. The system has a capacity of 20 million tonnes per year. The natural gas transportation system comprises trunk and regional pipelines and more than 300 associated structures, connecting production rigs, the Okoli natural gas storage facility, 27 end-users and 37 distribution systems. Croatia also plays an important role in regional energy security: the floating liquefied natural gas import terminal off the island of Krk, LNG Hrvatska, commenced operations on 1 January 2021, positioning Croatia as a regional energy hub and contributing to the diversification of Europe's energy supply.
Croatian energy production covers 85% of nationwide natural gas demand and 19% of oil demand. In 2008, Croatia's primary energy production was dominated by natural gas (47.7%), hydropower (25.4%), crude oil (18.0%), fuelwood (8.4%), and other renewable energy sources (0.5%). In 2009, net total electrical power production reached 12,725 GWh, while 28.5% of the country's electricity needs were met by imports.
A large share of those imports comes from the Krško Nuclear Power Plant in Slovenia, which is 50% owned by Hrvatska elektroprivreda and provides about 15% of Croatia's electricity.
Demographics
With an estimated population of 4.13 million in 2019, Croatia ranks 127th by population in the world. Its 2018 population density was 72.9 inhabitants per square kilometre, making Croatia one of the more sparsely populated European countries. The overall life expectancy in Croatia at birth was 76.3 years in 2018.
The total fertility rate of 1.41 children per mother is one of the lowest in the world, far below the replacement rate of 2.1 and considerably below the high of 6.18 children per mother recorded in 1885. Croatia's death rate has continuously exceeded its birth rate since 1991; consequently, Croatia has one of the world's oldest populations, with an average age of 43.3 years. The population rose steadily from 2.1 million in 1857 until 1991, when it peaked at 4.7 million, with the exception of the censuses taken in 1921 and 1948, i.e. following the world wars. The natural growth rate is negative, with the demographic transition completed in the 1970s. In recent years, the Croatian government has been under pressure to increase permit quotas for foreign workers, which reached an all-time high of 68,100 in 2019. In accordance with its immigration policy, Croatia is trying to entice emigrants to return. From 2008 to 2018, Croatia's population dropped by 10%.
The population decrease was also a result of the war for independence, which displaced large numbers of people and increased emigration. In 1991, in predominantly occupied areas, more than 400,000 Croats were either removed from their homes by Serb forces or fled the violence. During the war's final days, about 150,000–200,000 Serbs fled before the arrival of Croatian forces during Operation Storm. After the war, the number of displaced persons fell to about 250,000. The Croatian government cared for displaced persons via the social security system and the Office of Displaced Persons and Refugees. Most of the territories abandoned during the war were settled by Croat refugees from Bosnia and Herzegovina, mostly from north-western Bosnia, while some of the displaced returned to their homes.
According to the 2013 United Nations report, 17.6% of Croatia's population were immigrants. According to the 2021 census, the majority of inhabitants are Croats (91.6%), followed by Serbs (3.2%), Bosniaks (0.62%), Roma (0.46%), Albanians (0.36%), Italians (0.36%), Hungarians (0.27%), Czechs (0.20%), Slovenes (0.20%), Slovaks (0.10%), Macedonians (0.09%), Germans (0.09%), Montenegrins (0.08%), and others (1.56%). Approximately 4 million Croats live abroad.
Religion
Croatia has no official religion. Freedom of religion is a constitutional right that protects all religious communities as equal before the law and separate from the state.
According to the 2011 census, 91.36% of Croatians identify as Christian; of these, Catholics make up the largest group, accounting for 86.28% of the population, followed by Eastern Orthodoxy (4.44%), Protestantism (0.34%), and other Christians (0.30%). The largest religion after Christianity is Islam (1.47%), while 4.57% of the population describe themselves as non-religious. In the Eurostat Eurobarometer poll of 2010, 69% of the population responded that "they believe there is a God". In a 2009 Gallup poll, 70% answered yes to the question "Is religion an important part of your daily life?", yet only 24% of the population attends religious services regularly.
Languages
Croatian is the official language of the Republic of Croatia. Minority languages are in official use in local government units where more than a third of the population consists of national minorities or where local enabling legislation applies. Those languages are Czech, Hungarian, Italian, Serbian, and Slovak. The following minority languages are also recognised: Albanian, Bosnian, Bulgarian, German, Hebrew, Macedonian, Montenegrin, Polish, Romanian, Istro-Romanian, Romani, Russian, Rusyn, Slovene, Turkish, and Ukrainian.
According to the 2011 Census, 95.6% of citizens declared Croatian as their native language and 1.2% declared Serbian, while no other language reaches more than 0.5%. Croatian is a South Slavic language written in the Latin alphabet. Three major dialects are spoken on the territory of Croatia, with standard Croatian based on the Shtokavian dialect. The Chakavian and Kajkavian dialects are distinguished from Shtokavian by their lexicon, phonology and syntax.
A 2011 survey revealed that 78% of Croats claim knowledge of at least one foreign language. According to a 2005 EC survey, 49% of Croats speak English as a second language, 34% speak German, 14% Italian, and 10% French, while Russian is spoken by 4% and Spanish by 2%. Several large municipalities also support minority languages. A majority of Slovenes (59%) have some knowledge of Croatian. The country is a part of various language-based international associations, most notably the European Union Language Association.
Education
Literacy in Croatia stands at 99.2 per cent. Primary education in Croatia starts at the age of six or seven and consists of eight grades. In 2007, a law was passed extending free, non-compulsory education up to the age of 18. Compulsory education consists of the eight grades of elementary school.
Secondary education is provided by gymnasiums and vocational schools. As of 2019, there are 2,103 elementary schools and 738 schools providing various forms of secondary education. Primary and secondary education are also available in languages of recognised minorities in Croatia, where classes are held in Czech, German, Hungarian, Italian, and Serbian languages.
There are 137 elementary and secondary level music and art schools, as well as 120 schools for disabled children and youth and 74 schools for adults. Nationwide school-leaving exams (državna matura) were introduced for secondary education students in the school year 2009–2010. The exam comprises three compulsory subjects (Croatian language, mathematics, and a foreign language) and optional subjects, and is a prerequisite for university education.
Croatia has eight public universities and two private universities. The University of Zadar, the first university in Croatia, was founded in 1396 and remained active until 1807, when other institutions of higher education took over until the foundation of the renewed University of Zadar in 2002. The University of Zagreb, founded in 1669, is the oldest continuously operating university in Southeast Europe. There are also 15 polytechnics, of which two are private, and 30 higher education institutions, of which 27 are private. In total, there are 55 institutions of higher education in Croatia, attended by more than 157 thousand students.
There are 205 companies, government or education-system institutions and non-profit organisations in Croatia pursuing scientific research and the development of technology. Combined, they spent more than 3 billion kuna (€400 million) and employed 10,191 full-time research staff in 2008. Among the scientific institutes operating in Croatia, the largest is the Ruđer Bošković Institute in Zagreb. The Croatian Academy of Sciences and Arts in Zagreb has been a learned society promoting language, culture, arts and science since its inception in 1866. Croatia was ranked 44th in the Global Innovation Index in 2023.
The European Investment Bank provided digital infrastructure and equipment to around 150 primary and secondary schools in Croatia. Twenty of these schools received specialised assistance in the form of equipment, software, and services to help them integrate teaching and administrative operations.
Healthcare
Croatia has a universal health care system whose roots can be traced back to the Hungarian-Croatian Parliament Act of 1891, which provided a form of mandatory insurance for all factory workers and craftsmen. The population is covered by a basic health insurance plan provided by statute and by optional insurance. In 2017, annual healthcare-related expenditures reached 22.0 billion kuna (€3.0 billion), of which private health insurance accounted for only 0.6%, the remainder being public spending. In 2017, Croatia spent around 6.6% of its GDP on healthcare.
In 2020, Croatia ranked 41st in the world in life expectancy with 76.0 years for men and 82.0 years for women, and it had a low infant mortality rate of 3.4 per 1,000 live births.
There are hundreds of healthcare institutions in Croatia, including 75 hospitals and 13 clinics with 23,049 beds. The hospitals and clinics care for more than 700 thousand patients per year and employ 6,642 medical doctors, including 4,773 specialists; there is a total of 69,841 health workers. There are 119 emergency units in health centres, responding to more than a million calls per year. The principal cause of death in 2016 was cardiovascular disease, at 39.7% for men and 50.1% for women, followed by tumours, at 32.5% for men and 23.4% for women. In 2016, it was estimated that 37.0% of Croatians were smokers. According to 2016 data, 24.40% of the Croatian adult population is obese.
Language
Standard Croatian is the official language of the Republic of Croatia, and became the 24th official language of the European Union upon its accession in 2013.
Croatian replaced Latin as the official language of the Croatian government in the 19th century. Following the Vienna Literary Agreement of 1850, the language and its Latin script underwent reforms to create a unified "Croatian or Serbian" or "Serbo-Croatian" standard, which under various names became the official language of Yugoslavia. In SFR Yugoslavia, from 1972 to 1989, the language was constitutionally designated the "Croatian literary language" and the "Croatian or Serbian language", a result of resistance to and secession from "Serbo-Croatian" in the form of the Declaration on the Status and Name of the Croatian Literary Language during the Croatian Spring. Since gaining independence in the early 1990s, the Republic of Croatia has constitutionally designated the language as "Croatian" and regulates it through linguistic prescription. The long-standing preference for developing native expressions rather than adopting loanwords has been described as Croatian linguistic purism.
In 2021, Croatia introduced a new model of linguistic categorisation of the Bunjevac dialect (as New-Shtokavian Ikavian dialects of the Shtokavian dialect of the Croatian language) in three sub-branches: Dalmatian (also called Bosnian-Dalmatian), Danubian (also called Bunjevac), and Littoral-Lika. Its speakers largely use the Latin alphabet and live in parts of Bosnia and Herzegovina, various parts of Croatia, southern Hungary (including Budapest), and the autonomous province of Vojvodina in Serbia.
The Institute of Croatian Language and Linguistics added the Bunjevac dialect to the List of Protected Intangible Cultural Heritage of the Republic of Croatia on 8 October 2021.
Culture
Because of its geographical position, Croatia represents a blend of four different cultural spheres. It has been a crossroads of influences from western culture and the east since the schism between the Western Roman Empire and the Byzantine Empire, and also from Central Europe and Mediterranean culture. The Illyrian movement was the most significant period of national cultural history, as the 19th century proved crucial to the emancipation of Croatians and saw unprecedented developments in all fields of art and culture, giving rise to many historical figures.
The Ministry of Culture is tasked with preserving the nation's cultural and natural heritage and overseeing its development. Further activities supporting the development of culture are undertaken at the local government level. UNESCO's World Heritage List includes ten sites in Croatia. The country is also rich in intangible culture and holds 15 entries on UNESCO's list of intangible cultural heritage, ranking fourth in the world. A global cultural contribution from Croatia is the necktie, derived from the cravat originally worn by 17th-century Croatian mercenaries in France.
In 2019, Croatia had 95 professional theatres, 30 professional children's theatres, and 51 amateur theatres visited by more than 2.27 million viewers per year. Professional theatres employ 1,195 artists. There are 42 professional orchestras, ensembles, and choirs, attracting an annual attendance of 297 thousand. There are 75 cinemas with 166 screens and attendance of 5.026 million.
Croatia has 222 museums, visited by more than 2.71 million people in 2016. Furthermore, there are 1,768 libraries, containing 26.8 million volumes, and 19 state archives. The book publishing market is dominated by several major publishers and the industry's centrepiece event—Interliber exhibition held annually at Zagreb Fair.
Arts, literature, and music
Architecture in Croatia reflects the influences of bordering nations. Austrian and Hungarian influence is visible in public spaces and buildings in the north and the central regions, while the architecture found along the coasts of Dalmatia and Istria exhibits Venetian influence. Squares named after cultural heroes, parks, and pedestrian-only zones are features of Croatian towns and cities, especially where large-scale Baroque urban planning took place, for instance in Osijek (Tvrđa), Varaždin, and Karlovac. The subsequent influence of Art Nouveau is reflected in contemporary architecture. Along the coast, the architecture is Mediterranean, with strong Venetian and Renaissance influence in the major urban areas, exemplified in the works of Giorgio da Sebenico and Nicolas of Florence, such as the Cathedral of St. James in Šibenik. The oldest preserved examples of Croatian architecture are the 9th-century churches, the largest and most representative of which is the Church of St. Donatus in Zadar.
Besides the architecture encompassing the oldest artworks, there is a history of artists in Croatia reaching back to the Middle Ages. In that period the stone portal of Trogir Cathedral was made by Radovan, the most important monument of Romanesque sculpture from medieval Croatia. The Renaissance had the greatest impact on the Adriatic coast, since the remainder of the country was embroiled in the Hundred Years' Croatian–Ottoman War. With the waning of the Ottoman Empire, art flourished during the Baroque and Rococo periods. The 19th and 20th centuries brought the affirmation of numerous Croatian artists, helped by several patrons of the arts such as bishop Josip Juraj Strossmayer. Croatian artists of the period who achieved renown include Vlaho Bukovac, Ivan Meštrović, and Ivan Generalić.
The Baška tablet, a stone inscribed in the Glagolitic alphabet, found on the island of Krk and dated to around 1100, is considered to be the oldest surviving prose in Croatian. The beginning of a more vigorous development of Croatian literature is marked by the Renaissance and Marko Marulić. Besides Marulić, Renaissance playwright Marin Držić, Baroque poet Ivan Gundulić, Croatian national revival poet Ivan Mažuranić, novelist, playwright, and poet August Šenoa, children's writer Ivana Brlić-Mažuranić, writer and journalist Marija Jurić Zagorka, poet and writer Antun Gustav Matoš, poet Antun Branko Šimić, expressionist and realist writer Miroslav Krleža, poet Tin Ujević, and novelist and short-story writer Ivo Andrić are often cited as the greatest figures in Croatian literature.
Croatian music varies from classical operas to modern-day rock. Vatroslav Lisinski created the country's first opera, Love and Malice, in 1846. Ivan Zajc composed more than a thousand pieces of music, including masses and oratorios. Pianist Ivo Pogorelić has performed across the world.
Media
In Croatia, the Constitution guarantees the freedom of the press and the freedom of speech. Croatia ranked 64th in the 2019 Press Freedom Index report compiled by Reporters Without Borders which noted that journalists who investigate corruption, organised crime or war crimes face challenges and that the Government was trying to influence the public broadcaster HRT's editorial policies. In its 2019 Freedom in the World report, the Freedom House classified freedoms of press and speech in Croatia as generally free from political interference and manipulation, noting that journalists still face threats and occasional attacks. The state-owned news agency HINA runs a wire service in Croatian and English on politics, economics, society, and culture.
There are thirteen nationwide free-to-air DVB-T television channels, with Croatian Radiotelevision (HRT) operating four, RTL Televizija three, and Nova TV two, while the Croatian Olympic Committee, Kapital Net d.o.o., and Author d.o.o. operate the remaining channels. There are also 21 regional or local DVB-T television channels, and HRT broadcasts a satellite TV channel. In 2020, there were 155 radio stations and 27 TV stations in Croatia. Cable television and IPTV networks are gaining ground; cable television already serves 450 thousand people, around 10% of the country's population.
In 2010, 314 newspapers and 2,678 magazines were published in Croatia. The print media market is dominated by the Croatian-owned Hanza Media and the Austrian-owned Styria Media Group, which publish the flagship dailies Jutarnji list, Večernji list and 24sata. Other influential newspapers are Novi list and Slobodna Dalmacija. In 2020, 24sata was the most widely circulated daily newspaper, followed by Večernji list and Jutarnji list.
Croatia's film industry is small and heavily subsidised by the government, mainly through grants approved by the Ministry of Culture, with films often co-produced by HRT. Croatian cinema produces between five and ten feature films per year. The Pula Film Festival, the national film awards event held annually in Pula, is the most prestigious film event, featuring national and international productions. Animafest Zagreb, founded in 1972, is a prestigious annual festival dedicated to animated film. The first major accomplishment by a Croatian filmmaker came when Dušan Vukotić won the 1961 Academy Award for Best Animated Short Film for Ersatz (Surogat). Croatian film producer Branko Lustig won Academy Awards for Best Picture for Schindler's List and Gladiator.
Cuisine
Croatian traditional cuisine varies from one region to another. Dalmatia and Istria have culinary influences of Italian and other Mediterranean cuisines which prominently feature various seafood, cooked vegetables and pasta, and condiments such as olive oil and garlic. Austrian, Hungarian, Turkish, and Balkan culinary styles influenced continental cuisine. In that area, meats, freshwater fish, and vegetable dishes are predominant.
There are two distinct wine-producing regions in Croatia. The continental region in the northeast of the country, especially Slavonia, produces premium wines, particularly whites. Along the north coast, Istrian and Krk wines are similar to those produced in neighbouring Italy, while further south in Dalmatia, Mediterranean-style red wines are the norm. Annual production of wine exceeds 140 million litres. Croatia was almost exclusively a wine-consuming country up until the late 18th century, when large-scale production and consumption of beer began. The annual consumption of beer in 2020 was 78.7 litres per capita, placing Croatia 15th among the world's countries.
There are 11 restaurants in Croatia with a Michelin star and 89 restaurants bearing one of Michelin's other distinctions.
Sports
There are more than 400,000 active sportspeople in Croatia. Of that number, 277,000 are members of sports associations and nearly 4,000 are members of chess and contract bridge associations. Association football is the most popular sport. The Croatian Football Federation (Hrvatski nogometni savez), with more than 118,000 registered players, is the largest sporting association. The Croatian national football team came third in 1998 and 2022 and second in the 2018 FIFA World Cup. The Prva HNL football league attracts the highest average attendance of any professional sports league in the country; in the 2010–11 season, it attracted 458,746 spectators.
Croatian athletes competing at international events since Croatian independence in 1991 have won 44 Olympic medals, including 15 gold medals, as well as 16 gold medals at world championships, including four in athletics at the World Championships in Athletics. In tennis, Croatia won the Davis Cup in 2005 and 2018. Croatia's most successful male players, Goran Ivanišević and Marin Čilić, have both won Grand Slam titles and reached the top 3 of the ATP rankings, while in 1997 Iva Majoli became the first Croatian female player to win the French Open. Croatia has hosted several major sports competitions, including the 2009 World Men's Handball Championship, the 2007 World Table Tennis Championships, the 2000 World Rowing Championships, the 1987 Summer Universiade, the 1979 Mediterranean Games, and several European Championships.
The governing sports authority is the Croatian Olympic Committee (Hrvatski olimpijski odbor), founded on 10 September 1991 and recognised by the International Olympic Committee on 17 January 1992, in time for Croatian athletes to appear at the 1992 Winter Olympics in Albertville, France, representing the newly independent nation at the Olympic Games for the first time.
See also
Outline of Croatia
Index of Croatia-related articles
External links
Key Development Forecasts for Croatia from International Futures
History of Croatia

At the time of the Roman Empire, the area of modern Croatia comprised two Roman provinces, Pannonia and Dalmatia. After the collapse of the Western Roman Empire in the 5th century, the area was subjugated by the Ostrogoths for 50 years, before being incorporated into the Byzantine Empire.
Croatia, as a polity, first appeared as a duchy in the 7th century, the Duchy of Croatia. With the nearby Principality of Lower Pannonia, it was united and elevated into the Kingdom of Croatia which lasted from 925 until 1102. From the 12th century, the Kingdom of Croatia entered a personal union with the Kingdom of Hungary. It remained a distinct state with its ruler (Ban) and Sabor, but it elected royal dynasties from neighboring powers, primarily Hungary, Naples, and the Habsburg monarchy.
The period from the 15th to the 17th centuries was marked by intense struggles between the Ottoman Empire to the south and the Habsburg Empire to the north.
Following the First World War and the dissolution of Austria-Hungary in 1918, Croatian lands were incorporated into the Kingdom of Yugoslavia. Following the German invasion of Yugoslavia in April 1941, the Independent State of Croatia, a puppet state allied with the Axis powers, was established. It was defeated in May 1945, after the German Instrument of Surrender. The Socialist Republic of Croatia was then formed as a constituent republic of the Socialist Federal Republic of Yugoslavia. In 1991, Croatia's leadership severed ties with Yugoslavia and proclaimed independence amidst the dissolution of Yugoslavia.
Prehistoric period
The area known today as Croatia was inhabited by hominids throughout the prehistoric period. Fossils of Neanderthals dating to the middle Palaeolithic period have been unearthed in northern Croatia, with the most famous and best-presented site in Krapina. Remnants of several Neolithic and Chalcolithic cultures have been found throughout the country. Most of the sites are in the northern Croatian river valleys, and the most significant cultures whose presence was discovered include the Starčevo, Vučedol and Baden cultures. The Iron Age left traces of the early Illyrian Hallstatt culture and the Celtic La Tène culture.
Protohistoric period
Greek author Hecataeus of Miletus mentions that around 500 BC, the Eastern Adriatic region was settled by Histrians, Liburnians, and Illyrians. Greek colonization saw settlers establish communities on the Issa (Vis) and Pharos (Hvar) islands.
Roman expansion
Before the Roman expansion, the eastern Adriatic coast formed the northern part of the Illyrian kingdom from the 4th century BC until the Illyrian Wars of the 220s BC. In 168 BC, the Roman Republic established a protectorate south of the Neretva river. The area north of the Neretva was slowly incorporated into Roman possession until the province of Illyricum was formally established between 32 and 27 BC.
These lands then became part of the Roman province of Illyricum. Between 6 and 9 AD, tribes including the Dalmatae, who gave name to these lands, rose up against the Romans in the Great Illyrian revolt, but the uprising was crushed, and in 10 AD Illyricum was split into two provinces—Pannonia and Dalmatia. The province of Dalmatia spread inland to cover all of the Dinaric Alps and most of the eastern Adriatic coast. Dalmatia was the birthplace of the Roman Emperor Diocletian, who, when he retired as Emperor in 305 AD, built a large palace near Salona, from which the city of Split later developed.
Historians such as Theodore Mommsen and Bernard Bavant argue that all of Dalmatia was fully Romanized and Latin-speaking by the 4th century. Others, such as Aleksandar Stipčević, argue that the process of Romanization was selective and involved mostly the urban centers but not the countryside, where previous Illyrian socio-political structures were adapted to Roman administration and political structure only where necessary. It has also been argued that the Vlachs, or Morlachs, were Latin-speaking, pastoral peoples who lived in the Balkan mountains since pre-Roman times; they are mentioned in the oldest Croatian chronicles.
After the Western Roman Empire collapsed in 476, with the beginning of the Migration Period, Julius Nepos briefly ruled his diminished domain from Diocletian's Palace after his 476 flight from Italy. The region was then ruled by the Ostrogoths until 535 when Justinian I added the territory to the Byzantine Empire. Later, the Byzantines formed the Theme of Dalmatia in the same territory.
Migration period
The Roman period ended with the Avar and Croat invasions in the 6th and 7th centuries and the destruction of almost all Roman towns. Roman survivors retreated to more favorable sites on the coast, islands, and mountains. The city of Ragusa was founded by survivors from Epidaurum. According to the work De Administrando Imperio, written by the 10th-century Byzantine Emperor Constantine VII, the Croats arrived in what is today Croatia from southern Poland and western Ukraine in the early 7th century. However, that claim is disputed, and competing hypotheses date the event to between the late 6th and early 7th centuries (the mainstream view) or the late 8th and early 9th centuries (a fringe view). Recent archaeological data established that the migration and settlement of the Slavs/Croats occurred in the late 6th and early 7th centuries.
Duchy of Croatia (800–925)
From the middle of the seventh century until the unification in 925, there were two duchies on the territory of today's Croatia: the Duchy of Croatia and the Principality of Lower Pannonia. Eventually a dukedom, the Duchy of Croatia, was formed, ruled by Borna, as attested in the chronicles of Einhard starting in the year 818. These records represent the first documented Croatian realms, vassal states of Francia at the time. The most important ruler of Lower Pannonia was Ljudevit Posavski, who ruled Pannonian Croatia from 810 to 823 and fought against the Franks between 819 and 823.
The Frankish overlordship ended during the reign of Mislav two decades later. Duke Mislav was succeeded by Duke Trpimir, the founder of the Trpimirović dynasty, who fought successfully against Byzantium, Venice, and Bulgaria. Duke Trpimir was succeeded by Duke Domagoj, who repeatedly waged wars against the Venetians and the Byzantines; the Venetians called him "the worst Croatian prince" (dux pessimus Croatorum). According to Constantine VII, the Christianization of the Croats began in the 7th century, but the claim is disputed and Christianization is generally associated with the 9th century. In 879, under Branimir, the duke of Croatia, Dalmatian Croatia received papal recognition as a state from Pope John VIII.
Kingdom of Croatia (925–1102)
The first king of Croatia is generally considered to have been Tomislav in the first half of the 10th century, who is mentioned as such in notes from Church Councils of Split and the letter of Pope John X.
Other important Croatian rulers from that period are:
Mihajlo Krešimir II, 949–969, who conquered Bosnia and restored the power of the Croatian kingdom,
Stjepan Držislav, 969–997, who was an ally of Byzantium in the war with the Bulgarian emperor Samuil,
Stjepan, 1030–1058, who restored the Croatian kingdom and founded the diocese in Knin.
Two Croatian queens are also known from that century, among them Helen of Zadar, whose epitaph was found in the Solin area at the end of the 19th century during archeological excavations conducted by Frane Bulić.
The medieval Croatian kingdom reached its peak in the 11th century during the reigns of Petar Krešimir IV (1058–1074) and Demetrius Zvonimir (1075–1089). Petar Krešimir IV used the Great Schism of 1054, which weakened Byzantine rule over the Dalmatian cities, to assert his own control over them. He left the cities a certain amount of self-rule, but also collected tribute and demanded their ships in case of war. Besides the Croatization of old cities such as Zadar and Split, Petar Krešimir IV encouraged the development of new cities such as Biograd, Nin, Karin, Skradin and Šibenik. He also encouraged the foundation of new monasteries and gave donations to the Church. Historians such as Trpimir Macan consider that the medieval Croatian kingdom reached its greatest extent during Krešimir's reign. Modern historians also consider that his rule probably ended when he was captured by the Norman count Amicus of Giovinazzo.
Krešimir IV was succeeded by Demetrius Zvonimir, who married the Hungarian princess Helen and ruled from Knin as his capital. Zvonimir's rule was marked by stability. He was a papal vassal and enjoyed papal protection, as seen when his kingdom was threatened by an invasion of the knight Wezelin, who was deterred after the pope threatened to excommunicate him. Zvonimir had a son named Radovan who died at a young age, so he left no male heir when he died in 1089.
He was succeeded by Stjepan II, who died in 1091, ending the Trpimirović dynasty. Ladislaus I of Hungary then claimed the Croatian crown on the basis of Zvonimir's wife Jelena (Helen), who was the daughter of the Hungarian king Béla I. Opposition to this claim led to a war between the army loyal to Petar Snačić, another pretender to the throne, and the army loyal to the Hungarian king Coloman. Following the defeat of Petar Snačić's army in the Battle of Gvozd Mountain, a personal union of Croatia and Hungary was created in 1102 with Coloman as ruler.
Croatia and Hungary nonetheless remained separate kingdoms, connected only by a common king. One example of this was the coronation process, as new kings of Hungary had to be separately crowned kings of Croatia. Croatia also retained the institution of the ban (viceroy), acting as a royal deputy, as well as a separate tax system, currency, and army.
Personal union with Hungary (1102–1527) and the Republic of Venice
Croatia under the Árpád dynasty
One consequence of entering a personal union with Hungary under the Hungarian king was the introduction of a feudal system. Later kings sought to restore some of their influence by giving certain privileges to the towns. Some time between the Second and Third Crusades, the Knights Templar and the Hospitallers appeared in Croatian lands for the first time. According to the historian Lelja Dobronić, the purpose of their arrival appears to have been to secure transport routes and protect travelers going from Europe towards the Middle East.
In 1202, after the proclamation of the Fourth Crusade, the crusader army proved unable to pay the Venetians the agreed sum for maritime transport to the Holy Land. The Venetians in turn demanded that the crusaders capture the town of Zadar (Zara) and hand it over to them as compensation. Although some of the crusaders refused to participate, the pope himself issued sharp warnings, and the citizens of Zadar even posted crosses on their town walls to show the crusaders that they too were Christians, the crusaders violently captured the city in November 1202 and looted it together with the Venetians. The pope in turn excommunicated the entire crusader army. The Hungarian-Croatian king Emeric provided no real help to Zadar either; he only wrote a letter to Pope Innocent III asking him to make the crusaders return the town to its legitimate ruler.
In 1217, the Hungarian king Andrew II took the sign of the cross and vowed to go on the Fifth Crusade. After assembling his army, the king marched along the so-called via exercitualis ("the military road") from Hungary proper southwards to Koprivnica and on through Križevci, Zagreb, Topusko, Bihać and Knin, eventually reaching the town of Split on the Adriatic coast. After staying in Split for three weeks for logistical reasons and realising that the Croatians would not be joining his crusade, the king and his army sailed for the Holy Land. The historian Krešimir Kužić attributes the Croatians' reluctance to join King Andrew's crusade to bad memories of the destruction and looting of Zadar in 1202. When King Andrew II returned from the crusade, he brought back a number of relics, some of which remain stored in the treasury of Zagreb Cathedral.
Andrew's son King Béla IV was forced to deal with the troubles brought by the first Mongol invasion of Hungary. Following the Hungarian defeat in the Battle of the Sajó River in 1241, the king withdrew to Dalmatia, hoping to take refuge there, with the Mongols in pursuit. The Mongol army followed the king to the Split hinterland, which they ravaged. The king took refuge in the nearby town of Trogir, hoping to make use of the nearby islands, which offered some protection should the Mongols reach him. Meanwhile, the Mongols, thinking the king was hiding in Klis fortress, attempted to climb its steep cliffs while the defenders hurled rocks down on them. Hand-to-hand combat eventually developed inside the fortress, but upon realising that the king was not in Klis, the Mongols abandoned their attempts to take the fort and headed towards Trogir. As the Mongols prepared to attack Trogir, King Béla readied boats in an attempt to flee across the sea.
The decisive Mongol attack on Trogir never happened, as the Mongols withdrew upon receiving news of the death of Ögedei Khan. As the Croatian historian Damir Karbić notes, members of the Šubić noble family earned merit for sheltering Béla during his stay in Dalmatia, so in return the king granted them the County of Bribir in hereditary possession, where their power grew until it reached its peak in the time of Paul I Šubić of Bribir.
This period therefore saw the rise to prominence of the native noble families of the Frankopans and the Šubićs. Numerous future Bans of Croatia originated from these two families. The princes of Bribir from the Šubić family became particularly influential, asserting their control over large parts of Dalmatia, Slavonia, and even Bosnia.
Croatia under the Anjou dynasty
By the early 14th century, lord Paul Šubić had accumulated so much power that he ruled as a de facto independent ruler, minting his own money and holding the hereditary title of Ban of Croatia. Following the death of king Ladislaus IV of Hungary, who had no male heir, a succession crisis emerged, and in 1300 Paul invited Charles Robert of Anjou to come to the Kingdom of Hungary and take over its royal seat. A civil war ensued, in which Charles' party prevailed after winning a decisive victory in the Battle of Rozgony in 1312.
The custom of separately crowning the kings of Croatia gradually fell into abeyance; Charles Robert, crowned in 1301, was the last to be separately crowned King of Croatia, although Croatia retained a separate constitution. Lord Paul Šubić died in 1312, and his son Mladen inherited the title of Ban of Croatia. Mladen's power was diminished by the new king's policy of centralization after he and his forces were defeated by the royal army and its allies in the Battle of Bliska in 1322. The power vacuum caused by the downfall of Mladen Šubić was used by Venice to reassert control over the Dalmatian cities.
The ensuing reign of King Louis the Great (1342–1382) is considered the golden age of medieval Croatian history. Louis launched a campaign against Venice with the aim of retaking the Dalmatian cities and eventually succeeded, forcing Venice to sign the Treaty of Zadar in 1358. Under the same treaty, the Republic of Ragusa gained independence from Venice.
Anti-Court struggles period
After King Louis the Great died in 1382, the Kingdom of Hungary and Croatia descended into a period of destructive dynastic struggles known as the Anti-Court movement. The struggle was waged between two factions, one centered around the late king's daughter Mary, her mother Queen Elizabeth, and Mary's fiancé Sigismund of Luxembourg. The opposing faction was a coalition of Croatian nobles who supported Charles of Durazzo as the new king of Hungary and Croatia; it was led by the powerful John of Palisna and the Horvat brothers, who opposed the idea of being ruled by a woman and, moreover, of being ruled by Sigismund of Luxembourg, whom they considered a foreigner. As an alternative, they arranged for Charles of Durazzo to come to Croatia and crowned him king of Hungary-Croatia in Székesfehérvár in December 1385. Charles' opponents, Queen Elizabeth and Princess Mary, responded by organizing Charles' assassination in Buda in February 1386. Enraged anti-court supporters retaliated by ambushing the two queens near Gorjani in July 1386, killing their escort and taking both queens into captivity at Novigrad Castle near Zadar. Once in Novigrad, Queen Elizabeth was strangled to death, but her daughter Mary was eventually rescued by her fiancé Sigismund.
In 1387, Sigismund of Luxembourg was crowned king of Hungary-Croatia. In the following period, he too became engaged in a power struggle against opposing Croatian and Bosnian nobility in order to assert his rule over the realm. In 1396, Sigismund organized a crusade against the expanding Ottomans, which culminated in the Battle of Nicopolis. When the battle ended, it was unclear whether Sigismund had survived, so Stephen II Lackfi proclaimed Ladislaus of Naples the new king of Hungary-Croatia. When Sigismund nonetheless returned to Croatia, he summoned a diet in Križevci in 1397, where he confronted his adversaries and eliminated them. Sigismund was again forced to fight for control, and by 1403 all of southern Croatia and the Dalmatian cities had defected to Ladislaus of Naples. Sigismund eventually managed to crush the Anti-Court movement by winning the Battle of Dobor in Bosnia in 1408. Having lost hope of prevailing in the struggle against Sigismund, the anti-king Ladislaus sold all his nominal possessions in Dalmatia to the Republic of Venice for 100,000 ducats in 1409. The Venetians asserted their control over most of Dalmatia by 1428, and Venetian rule there continued for nearly four centuries (c. 1420–1797), until the end of the Republic by the Treaty of Campo Formio. Another long-term consequence of the Anti-Court struggles was the arrival of the Ottomans in the neighbouring Kingdom of Bosnia, invited by the powerful Bosnian duke Hrvoje Vukčić Hrvatinić to help him fight against the forces of King Sigismund. The Ottomans gradually strengthened their influence in Bosnia until they completely conquered the kingdom in 1463.
Ottoman expansion
Serious Ottoman attacks on Croatian lands began after the fall of Bosnia to the Ottomans in 1463. At this point, the main Ottoman attacks were not yet directed towards Central Europe with Vienna as the main objective, but towards Renaissance Italy, with Croatia standing in their way. As the Ottomans expanded further into Europe, Croatian lands became a place of permanent warfare. This period is considered one of the direst for the people living in Croatia. The Baroque poet Pavao Ritter Vitezović later described this period of Croatian history as "two centuries of weeping Croatia".
Armies of the Croatian nobility fought numerous battles to counter Ottoman akinji and martolos raids. The Ottoman forces frequently raided the Croatian countryside, plundering towns and villages and capturing the local inhabitants as slaves. These "scorched earth" tactics, also called "the Small War", were usually conducted once a year with the intention of softening up the region's defenses, but did not result in the actual conquest of territory.
After the death of King Matthias Corvinus in 1490, a war of succession ensued, in which the supporters of Vladislaus Jagiellon prevailed over those of Maximilian Habsburg, another contender for the throne of the Kingdom of Hungary-Croatia. Maximilian had gained many supporters among the Croatian nobility, and the favourable peace treaty he concluded with Vladislaus enabled the Croatians to turn increasingly towards the Habsburgs when seeking protection from Ottoman attacks, as their lawful king Vladislaus proved unable to provide any. In the same year, the estates of Croatia also declined to recognize Vladislaus II as ruler until he had taken an oath to respect their liberties and insisted that he strike from the constitution certain phrases which seemed to reduce Croatia to the rank of a mere province. The dispute was resolved in 1492.
Meanwhile, the ongoing Ottoman attacks, combined with famine, disease, and a cold climate, caused vast depopulation and a refugee crisis as people fled to safer areas. The Croatian historian Ivan Jurković points out that due to the combination of these factors, Croatia "lost almost three-fifths of its population" and the compactness of its territory. As a result, the center of the Croatian medieval state gradually shifted northwards into western Slavonia (Zagreb). Frequent Ottoman raids eventually led to the 1493 Battle of Krbava Field, which ended in a Croatian defeat.
Croatia in the Habsburg monarchy (1527–1918)
Remnants of the remnants
Croats fought an increasing number of battles but lost increasing swathes of territory to the Ottoman Empire, until being reduced to what is commonly called in Croatian historiography the "Remnants of the Remnants of the Once Glorious Croatian Kingdom" (Reliquiae reliquiarum olim inclyti regni Croatiae), or simply the "Remnants of the Remnants". A decisive battle between the Hungarian army and the Ottomans occurred at Mohács in 1526, where the Hungarian king Louis II was killed and his army destroyed. As a consequence, in November of the same year, the Hungarian parliament elected János Szapolyai as the new king of Hungary. In December 1526, another Hungarian parliament elected Ferdinand Habsburg as King of Hungary.
The Croatian nobles met in Cetingrad in 1527 and chose Ferdinand I of the House of Habsburg as the new ruler of Croatia, on the condition that he contribute to the defense of Croatia against the Ottomans, and respect its political rights. The Diet of Slavonia, on the other hand, elected Szapolyai. A civil war between the two rival kings ensued, but later both crowns united as the Habsburgs prevailed over Szapolyai. The Ottoman Empire used these instabilities to expand in the 16th century to include most of Slavonia, western Bosnia (then called Turkish Croatia), and Lika. Those territories initially made up part of Rumelia Eyalet, and subsequently parts of Budin Eyalet, Bosnia Eyalet, and Kanije Eyalet.
Later in the same century, Croatia was so weak that its parliament authorized Ferdinand Habsburg to carve out large areas of Croatia and Slavonia adjacent to the Ottoman Empire for the creation of the Military Frontier (Vojna Krajina, German: Militärgrenze), a buffer zone against the Ottoman Empire managed directly by the Imperial War Council in Austria. The buffer area became devastated and depopulated due to constant warfare and was subsequently settled by Serbs, Vlachs, Croats, and Germans. As a result of their compulsory military service to the Habsburg Empire during the conflict with the Ottoman Empire, the population of the Military Frontier was free of serfdom and enjoyed much political autonomy, unlike the population living in the parts managed by the Croatian Ban and Sabor. They were considered free peasant-soldiers who were granted land without the usual feudal obligations, except for military service. This was officially confirmed by an Imperial decree of 1630 called the Statuta Valachorum (Vlach Statutes).
The territory of the Military Frontier was initially divided into the Varaždin Generalcy, the Karlovac Generalcy, and the Žumberak District. The area between the villages of Bović and Brkiševina was called the Banska Krajina (later also Banovina or Banija). Unlike the rest of the Military Frontier, which was under the direct command of the Imperial military authorities, the Banska Krajina was commanded and financed by the Ban of Croatia, so its defense was essentially the responsibility of Croatia and it could not be taken away from Croatia.
Hasan Pasha's Great Offensive on Croatia
In the 1590s, the belligerent Telli Hasan Pasha was appointed governor of the Ottoman Bosnia Eyalet. He launched a great offensive against Croatia, aimed at completely conquering the Croatian "Remnants of the Remnants", mobilizing all available troops from his eyalet. Although his offensive achieved substantial success against the Croatians and their allies, such as victories in the Siege of Bihać (a town the Croatians never managed to retake) and in the Battle of Brest, his campaign was ultimately stopped at the Battle of Sisak in June 1593. Not only did the Ottomans lose this battle, but Hasan Pasha was killed in the fray. News of the Bosnian pasha's defeat near Sisak caused outrage in Constantinople, and the Ottomans officially declared war on the Habsburg Monarchy, triggering the start of the Long Turkish War. In a strategic sense, the Ottoman defeat near Sisak led to the stabilization of the border between Croatia and the Ottoman Empire. The historian Nenad Moačanin holds that this stability of the Croatian-Ottoman border was a general characteristic of the 17th century, as the might of the Ottoman Empire began to decline.
Zrinski-Frankopan conspiracy
During the 17th century, the distinguished Croatian noble Nikola Zrinski became one of the most prominent Croatian generals in the fight against the Ottomans. In 1663/1664 he led a successful incursion into Ottoman-controlled territory, which ended in the destruction of the vital Osijek bridge, the connection between the Pannonian plain and the Balkan territories. As a reward for his victory against the Ottomans, Zrinski was commended by the French king Louis XIV, thereby establishing contact with the French court. The Croatian nobility also constructed the castle of Novi Zrin, intended to protect Croatia and Hungary from further Ottoman advances. At the same time, Emperor Leopold of Habsburg sought to impose absolute rule on the entire Habsburg territory, which meant a loss of authority for the Croatian parliament and Ban and caused dissatisfaction with Habsburg rule among Croats.
In July 1664, a large Ottoman army besieged and destroyed Novi Zrin. As this army marched on the Austrian lands, its campaign ended at the Battle of St. Gotthard, where it was destroyed by the Habsburg imperial army. Given this victory, the Croatians expected a decisive Habsburg counter-offensive to push the Ottomans back and relieve pressure on Croatian lands, but Leopold instead concluded the unfavorable Peace of Vasvár with the Ottomans because it solved the problems he had with the French on the Rhine at the time. In Croatia, his decision caused outrage among the leading nobles and sparked a conspiracy to replace the Habsburgs with different rulers. After Nikola Zrinski died under unusual circumstances while hunting, his relatives Fran Krsto Frankopan and Petar Zrinski supported the conspiracy.
The conspirators established contact with the French, Venetians, Poles, and eventually even the Ottomans, only to be discovered by Habsburg spies at the Ottoman court who served as the sultan's translators. The conspirators were invited to reconcile with the emperor, to which they agreed. However, when they came to Austria, they were charged with high treason and sentenced to death. They were executed in Wiener Neustadt in April 1671. Their families, whose history was intertwined with centuries of Croatian history, were subsequently eradicated by imperial authorities, and all of their possessions were confiscated.
Great Turkish War: A revived Croatia
Despite the decline of Ottoman might in the 17th century, the Ottoman high command decided to attack the Habsburg capital of Vienna in 1683, as the Vasvár peace treaty was about to expire. Their attack ended in disaster, however, and the Ottomans were routed near Vienna by the joint Christian armies defending the city. Soon thereafter, the Holy League was formed and the Great Turkish War was launched. In the Croatian theater of operations, several commanders distinguished themselves, including the friar Luka Ibrišimović, whose rebels defeated the Ottomans at Požega, and Marko Mesić, who led the anti-Ottoman uprising in Lika. The hajduk leader Stojan Janković distinguished himself by leading troops in Dalmatia. The Croatian Ban Nikola (Miklós) Erdődy led his troops in the Siege of Virovitica, which was liberated from the Ottomans in 1684. Osijek was liberated by 1687, Kostajnica by 1688, and Slavonski Brod by 1691. An attempt to retake Bihać was also made in 1697 but was eventually called off due to a lack of cannons. In the same year, General Eugene of Savoy led a 6,500-strong army from Osijek into Bosnia, where he raided the seat of the Bosnia Eyalet, Sarajevo, burning it to the ground. After this raid, large groups of Christian refugees from Bosnia settled in what was then an almost empty Slavonia. After the decisive Ottoman defeat in the Battle of Zenta in 1697 by the forces of Eugene of Savoy, the Peace of Karlowitz was signed in 1699, confirming the liberation of all of Slavonia from the Ottomans. For Croatia, nonetheless, large parts of its late medieval territories between the rivers Una and Vrbas were lost, as they remained part of the Ottoman Bosnia Eyalet. In the following years, the use of the German language spread in the new military borderland and proliferated over the next two centuries as German-speaking colonists settled there.
Enlightened despotism
By the 18th century, the Ottoman Empire had been driven out of Hungary, and Austria brought the empire under central control. Since Emperor Charles VI had no male heirs, he wanted to leave the imperial throne to his daughter Maria Theresa of Austria, which eventually led to the War of the Austrian Succession of 1741–1748. The Croatian Parliament decided to accept Maria Theresa as a legitimate ruler by drafting the Pragmatic Sanction of 1712, asking in return that whoever inherited the throne recognize and respect Croatian autonomy from Hungary. The king reluctantly granted this. The rule of Maria Theresa brought limited modernization in education and health care. The Croatian Royal Council (Consilium Regni Croatiae), which served as the de facto Croatian government, was founded in Varaždin in 1767, but it was abolished in 1779 and its authority passed to Hungary. The foundation of the Croatian Royal Council in Varaždin made that town the administrative capital of Croatia; however, a large fire in 1776 caused significant damage to the city, so the major Croatian administrative institutions moved to Zagreb.
Maria Theresa's heir, Joseph II of Austria, also ruled in an enlightened absolutist manner, but his reforms were marked by attempts at centralization and Germanization. In this period, roads were built connecting Karlovac with Rijeka, and the Jozefina road connecting Karlovac with Senj. With the Treaty of Sistova, which ended the Austro-Turkish War (1788–1791), the Ottoman-held areas of Donji Lapac and Cetingrad, along with the villages of Drežnik Grad and Jasenovac, were ceded to the Habsburg monarchy and incorporated into the Croatian Military Frontier.
19th century in Croatia
Napoleonic Wars
As Napoleon's armies started to dominate Europe, Croatian lands came into contact with the French as well. When Napoleon abolished the Republic of Venice in 1797, its former possessions in Dalmatia came under Habsburg rule. After Napoleon defeated the Austrians at the Battle of Wagram in 1809, French-controlled territory expanded to the Sava river. The French founded the "Illyrian Provinces", centered in Ljubljana, and appointed Marshal Auguste de Marmont as their governor-general. The French presence brought the liberal ideas of the French Revolution to the Croats. The French founded Masonic lodges, built infrastructure, and printed the first newspaper in the local language in Dalmatia; called Kraglski Dalmatin/Il Regio Dalmata, it was printed in both Italian and Croatian. Croatian soldiers accompanied Napoleon in his conquests as far as Russia. In 1808, Napoleon abolished the Republic of Ragusa. Ottomans from Bosnia raided French Croatia and occupied the area of Cetingrad in 1809. Auguste de Marmont reacted by occupying Bihać on 5 May 1810; after the Ottomans promised to stop raiding French territories and to withdraw from Cetingrad, he withdrew from Bihać.
With the fall of Napoleon, the French-controlled Croatian lands came back under Austrian rule.
Croatian national revival and the Illyrian Movement
Under the influence of German romanticism, French political thought, and pan-Slavism, Croatian romantic nationalism emerged in the mid-19th century to counteract the Germanization and Magyarization of Croatia. Ljudevit Gaj emerged as the leader of the Croatian national movement. One of the important issues to be resolved was the question of language, as regional Croatian dialects had to be standardized. Since the Shtokavian dialect, widespread among Croats, was also shared with the Serbs, the movement likewise had a South Slavic character. At the time, "Croatian" referred only to the population in the southwestern parts of what is today Croatia, while "Illyrian" was used throughout the South Slavic world; the Illyrian name was used in an attempt to attract the wider masses. The Illyrian activists chose the Shtokavian dialect over Kajkavian as the standardized version of the Croatian language. The Illyrian movement was not accepted by the Serbs or the Slovenes, and it remained strictly a Croatian national movement. In 1832, the Croatian count Janko Drašković wrote a manifesto of the Croatian national revival called Disertacija (Dissertation). The manifesto called for the unification of Croatia with Slavonia, Dalmatia, Rijeka, the Military Frontier, Bosnia, and the Slovene lands into a single unit inside the Hungarian part of the Austrian Empire. This unit would have Croatian as its official language and would be governed by a Ban. The movement spread throughout Dalmatia and Istria and among Bosnian Franciscan monks. It resulted in the emergence of the modern Croatian nation and eventually in the formation of the first Croatian political parties. The use of the Illyrian name was banned in 1843, and the proponents of Illyrianism thereafter called themselves Croatian.
On 2 May 1843, Ivan Kukuljević Sakcinski delivered the first speech in the Croatian language in the Croatian Sabor, requesting that Croatian be made the official language in public institutions. This was a significant step at the time, because Latin was still in use in public institutions in Croatia. In 1847, the Sabor proclaimed Croatian the official language in Croatia.
According to the Croatian historian Nenad Moačanin, the appearance of Romanticism also affected the portion of the Vlachs settled in depopulated Croatian areas, who declared themselves Serbs.
Croats in revolutions of 1848
In the Revolutions of 1848, the Triune Kingdom of Croatia, Slavonia, and Dalmatia, driven by fear of Magyar nationalism, supported the Habsburg court against Hungarian revolutionary forces.
During a session of the Croatian Sabor held on 25 March 1848, Colonel Josip Jelačić was elected Ban of Croatia, and a petition called the "Demands of the People" (Zahtjevanja naroda) was drafted to be handed over to the Austrian Emperor. These liberal demands asked for independence, the unification of Croatian lands, a Croatian government responsible to the Croatian parliament and independent from Hungary, financial independence from Hungary, the introduction of the Croatian language in offices and schools, freedom of the press, religious freedom, the abolition of serfdom, the abolition of noble privileges, the foundation of a people's army, and equality before the law.
As the Hungarian government denied the existence of the Croatian name and nationhood and treated Croatian institutions like provincial authorities, Jelačić severed ties between Croatia and Hungary. In May 1848, the Ban's Council was formed, holding all the executive powers of the Croatian government. The Croatian parliament abolished feudalism and serfdom and demanded that the Monarchy become a constitutional federal state of equal nations with independent national governments and one federal parliament in the capital, Vienna. The Croatian parliament also demanded the unification of the Military Frontier and Dalmatia with Croatia proper, and asked for an undefined alliance with Istria, the Slovene lands, and the parts of southern Hungary inhabited by Croats and Serbs. Jelačić was also appointed governor of Rijeka and Dalmatia as well as "Imperial Commander of the Military Frontier", thus having most of the Croatian lands under his rule. The breakdown of negotiations between the Croats and the Hungarians eventually led to war. Jelačić declared war on Hungary on 7 September 1848, and on 11 September 1848 the Croatian army crossed the Drava river and annexed Međimurje. Upon crossing the Drava, Jelačić ordered his army to replace Croatian national flags with Habsburg imperial flags.
Despite the contributions of its Ban Josip Jelačić in quelling the Hungarian war of independence, Croatia was not treated any more favorably by Vienna afterwards than the Hungarians were, and it lost its domestic autonomy.
Croatia in Dual Monarchy
The dual monarchy of Austria-Hungary was created in 1867 through the Austro-Hungarian Compromise. Croatian autonomy was restored in 1868 with the Croatian–Hungarian Settlement, which was comparatively favorable for the Croatians, but still problematic because of issues such as the unresolved status of Rijeka. In July 1871, a decision was made to incorporate the Military Frontier into Croatia, and its territory was demilitarized in 1873, with the Croatian Ban Ladislav Pejačević taking over its authority. Pejačević's successor Károly Khuen-Héderváry caused further problems by violating the Croatian–Hungarian Settlement through his hardline Magyarization policies in the period from 1883 to 1903. Héderváry's Magyarization of Croatia led to massive riots in 1903, when Croatian protesters burnt Hungarian flags and clashed with the gendarmes and the military, resulting in the deaths of several protesters. As a consequence of these riots, Héderváry left his position as Ban of Croatia, but was appointed prime minister of Hungary.
A year earlier, in 1902, Srbobran, the newspaper of the Zagreb Serbs, published an article titled "Do istrage naše ili vaše" ("To Our Extermination or Yours"). The article was filled with Greater Serbian ideology; its text denied the existence of the Croatian nation and the Croatian language and announced a Serbian victory over the "servile Croats", who would, the article proclaimed, be exterminated.
The article sparked major anti-Serb riots in Zagreb, in which barricades were raised and Serb-owned properties were attacked. Serbs of Zagreb eventually distanced themselves from the opinions published in the article.
World War I brought an end to the Dual Monarchy. Croatia suffered a great loss of life in the war. Late in the war there were proposals to transform the dualist monarchy into a federalist one with a separate Croatian/South Slavic section; however, these plans were never carried out, due to Woodrow Wilson's announcement of a policy of self-determination for the peoples of Austria-Hungary.
Shortly before the end of the war in 1918, the Croatian Parliament severed relations with Austria-Hungary after receiving the news that the Czechoslovak lands had also separated from Austria-Hungary. The Triune Kingdom of Croatia, Slavonia, and Dalmatia became part of the newly created provisional State of Slovenes, Croats and Serbs. This internationally unrecognized state was composed of all the South Slavic territories of the old Austro-Hungarian Monarchy, with a transitional government located in Zagreb. Its biggest problem, however, was the advancing Italian army, which sought to capture the Croatian Adriatic territories promised to Italy by the Treaty of London in 1915. A solution was sought through unification with the Kingdom of Serbia, which had an army capable of confronting the Italians as well as international legitimacy among the Entente powers, which were about to draw new European borders at the Paris Peace Conference.
Croats inside the first Yugoslavia (1918–1941)
A new state was created in late 1918. Syrmia left Croatia-Slavonia and joined Serbia together with Vojvodina, shortly followed by a referendum to join Bosnia and Herzegovina to Serbia. The People's Council of Slovenes, Croats and Serbs (Narodno vijeće), guided by what was by that time a half-century-long tradition of pan-Slavism and without the sanction of the Croatian Sabor, merged with the Kingdom of Serbia into the Kingdom of the Serbs, Croats, and Slovenes.
An Italian army eventually took Istria, started to annex the Adriatic islands one by one, and even landed in Zadar. A partial resolution to the so-called Adriatic question came in 1920 with the Treaty of Rapallo.
The Kingdom underwent a crucial change in 1921, to the dismay of Croatia's largest political party, the Croatian Peasant Party (Hrvatska seljačka stranka). The new constitution abolished the historical political entities, including Croatia and Slavonia, centralizing authority in the capital, Belgrade. The Croatian Peasant Party boycotted the government of the Serbian People's Radical Party throughout the period, except for a brief interlude between 1925 and 1927, when Italian expansionism, together with Italy's allies Albania, Hungary, Romania, and Bulgaria, threatened Yugoslavia as a whole. Two differing concepts of how the new common state should be governed became the main source of conflict between the Croatian elites led by the Croatian Peasant Party and the Serbian elites. Leading Croatian politicians sought a federalized state in which the Croats would have a degree of autonomy (similar to what they had had in Austria-Hungary), while Serb-centered parties advocated unitarist policies, centralization, and assimilation. The new country's military was also a predominantly Serbian institution; by 1938 only about 10% of all army officers were Croats. The new school system was Serb-centered, with Croatian teachers being retired, purged, or transferred, and Serbs were appointed as high state officials. The replacement of the old Austro-Hungarian krone was conducted at an unfair rate of four krones to one Serbian dinar.
In the early 1920s, the Yugoslav government of Serbian prime minister Nikola Pašić used police pressure on voters and ethnic minorities, confiscation of opposition pamphlets, and election-rigging to keep the opposition, mainly the Croatian Peasant Party and its allies, in the minority in the Yugoslav parliament. Pašić believed that Yugoslavia should be as centralized as possible, creating a Greater Serbian national concept of concentrated power in the hands of Belgrade in place of distinct regional governments and identities.
Murders of 1928 and royal dictatorship
During a Parliament session in 1928, Puniša Račić, a deputy of the Serbian Radical People's Party, shot at Croatian deputies, killing Pavle Radić and Đuro Basariček and wounding Ivan Pernar and Ivan Granđa. Stjepan Radić, the leading Croatian politician of the time, was wounded and later succumbed to his wounds. These murders outraged the Croatian population and ignited violent demonstrations, strikes, and armed conflicts throughout the Croatian parts of the country. The Greater Serbian-influenced Royal Yugoslav Court even considered the "amputation" of the Croatian parts of the country, leaving Yugoslavia only within Greater Serbian borders, but the Croatian Peasant Party leadership rejected this idea. Although Račić was subsequently tried for the murders, he served his sentence in a luxurious villa in Požarevac, where he had several servants at his disposal and was allowed to leave and return at any time.
In response to the shooting at the National Assembly, King Alexander abolished the parliamentary system and proclaimed a royal dictatorship. He imposed a new constitution aimed at removing all existing national identities and imposing "integral Yugoslavism". He also renamed the country from the Kingdom of Serbs, Croats and Slovenes to the Kingdom of Yugoslavia. The territory of Croatia was largely divided among the Sava Banovina and the Littoral Banovina. Political parties were banned and the royal dictatorship took on an increasingly harsh character. Vladko Maček, who had succeeded Radić as leader of the Croatian Peasant Party, the largest political party in Croatia, was imprisoned. Ante Pavelić was exiled from Yugoslavia and created the ultranationalist Ustaše Movement, with the ultimate goal of destroying Yugoslavia and making Croatia an independent country. According to the British historian Misha Glenny, the murder in March 1929 of Toni Schlegel, editor of the pro-Yugoslavian newspaper Novosti, brought a "furious response" from the regime. In Lika and west Herzegovina in particular, described as "hotbeds of Croatian separatism", Glenny wrote that the majority-Serb police acted "with no restraining authority whatsoever". In the words of a prominent Croatian writer, Schlegel's death became the pretext for terror in all forms. Politics was soon "indistinguishable from gangsterism". In 1931, the royal regime organized the assassination of Croatian scientist and intellectual Milan Šufflay on the streets of Zagreb. The assassination was condemned by globally renowned intellectuals such as Albert Einstein and Heinrich Mann. In 1932, the Ustaše Movement unsuccessfully planned the Velebit uprising in Lika. Despite the oppressive climate, few rallied to the Ustaša cause and the movement was never able to gain serious support among the Croatian population.
Banovina of Croatia
In 1934, King Alexander was assassinated during a state visit to Marseille by a coalition of the Ustaše and the Bulgarian Internal Macedonian Revolutionary Organization (IMRO), ending the royal dictatorship. The government of the Serbian Radical Milan Stojadinović, which took power in 1935, distanced Yugoslavia from its former allies France and the United Kingdom and moved the country closer to Fascist Italy and Nazi Germany. In 1937, Yugoslav gendarmes led by the Radical Party member Jovo Koprivica killed dozens of young members of the Croatian Peasant Party in Senj for singing Croatian patriotic songs. With the rise of the Nazis in Germany and the looming possibility of another European war, Serbian political elites decided that it was time to repair relations with the Croats, the second largest ethnic group in the country, so that in the event of a new war the country would be united and without ethnic divisions. Negotiations resulted in the Cvetković–Maček Agreement and the creation of the Banovina of Croatia, an autonomous Croatian province inside Yugoslavia. The Banovina of Croatia was created in 1939 out of the Sava and Littoral Banovinas, as well as parts of the Zeta, Vrbas, Drina, and Danube Banovinas. It had a reconstructed Croatian Parliament, which would choose a Croatian Ban and Viceban. This Croatia included a part of Bosnia, most of Herzegovina, and Dubrovnik and its surroundings.
World War II and the Independent State of Croatia (1941–1945)
The Axis occupation of Yugoslavia in 1941 allowed the Croatian radical right Ustaše to come into power, forming the "Independent State of Croatia" (Nezavisna Država Hrvatska, NDH), led by Ante Pavelić, who assumed the role of Poglavnik. Following the pattern of other fascist regimes in Europe, the Ustaše enacted racial laws and formed eight concentration camps targeting minority Serbs, Romas, and Jewish populations, as well as Croatian and Bosnian Muslim opponents of the regime. The biggest concentration camp was Jasenovac in Croatia. The NDH had a program, formulated by Mile Budak, to purge Croatia of Serbs, by "killing one third, expelling the other third and assimilating the remaining third". The main targets for persecution were the Serbs, of whom approximately 330,000 were killed.
Various Serbian nationalist Chetnik groups also committed atrocities against Croats across many areas of Lika and parts of northern Dalmatia. During World War II in Yugoslavia, the Chetniks killed an estimated 18,000-32,000 Croats.
The anti-fascist communist-led Partisan movement, based on a pan-Yugoslav ideology, emerged in early 1941 under the command of Croatian-born Josip Broz Tito, and spread quickly into many parts of Yugoslavia. The 1st Sisak Partisan Detachment, often hailed as the first armed anti-fascist resistance unit in occupied Europe, was formed in Croatia, in the Brezovica Forest near the town of Sisak. As the movement began to gain popularity, the Partisans gained strength from Croats, Bosniaks, Serbs, Slovenes, and Macedonians who believed in a unified, but federal, Yugoslav state.
By 1943, the Partisan resistance movement had gained the upper hand, and in 1945, with help from the Soviet Red Army (which passed only through small parts of the country, such as Vojvodina), it expelled the Axis forces and their local supporters. The State Anti-Fascist Council for the National Liberation of Croatia (ZAVNOH) had functioned since 1942 and formed an interim civil government by 1943. The NDH's ministers of war and internal security, Mladen Lorković and Ante Vokić, tried to switch to the Allied side. Pavelić initially supported them, but when he realized he would have to give up his position, he imprisoned them in Lepoglava prison, where they were executed.
Following the defeat of the Independent State of Croatia at the end of the war, a large number of Ustaše, civilians supporting them (ranging from sympathizers to young conscripts and anti-communists), Chetniks, and other anti-communists attempted to flee towards Austria, hoping to surrender to British forces and be given refuge. In what became known as the Bleiburg repatriations, they were instead interned by British forces and returned to the Partisans, where they were subjected to mass executions.
Socialist Yugoslavia (1945–1991)
Tito's leadership of the LCY (1945–1980)
Croatia was one of six constituent socialist republics of the Socialist Federative Republic of Yugoslavia. Under the new communist system, privately owned factories and estates were nationalized, and the economy was based on a type of planned market socialism. The country underwent a rebuilding process, recovered from World War II, went through industrialization, and started developing tourism.
The country's socialist system also provided free apartments through large companies, which paid for the living spaces with workers' self-management investments. From 1963, the citizens of Yugoslavia were allowed to travel to almost any country because of the country's neutral politics. No visas were required to travel to eastern or western countries, capitalist or communist nations. Such free travel was unheard of at the time in the Eastern Bloc countries, and in some western countries as well (e.g., Spain or Portugal, both dictatorships at the time). This proved helpful for Croatia's inhabitants, who found working in foreign countries more financially rewarding; a popular plan was to return to Croatia (then part of Yugoslavia) upon retirement and buy more expensive property there.
In Yugoslavia, the people of Croatia were guaranteed free healthcare, free dental care, and secure pensions. The older generation found this very comforting as pensions would sometimes exceed their former paychecks. Free trade and travel within the country also helped Croatian industries that imported and exported throughout all the former republics.
Students and military personnel were encouraged to visit other republics to learn more about the country, and all levels of education, including secondary and higher education, were free. In reality, the housing was often inferior, with poor heating and plumbing; medical care often lacked even antibiotics; schools served as propaganda machines; and travel was a necessity to provide the country with hard currency. The regime's propagandists, who wanted people to believe that the "neutral policies" had equalized Serbs and Croats, severely restricted free speech and did not protect citizens from ethnic attacks.
Membership in the League of Communists of Yugoslavia was as much a prerequisite for admission to colleges and government jobs as in the Soviet Union under Joseph Stalin or Nikita Khrushchev. Private sector businesses did not grow as the taxes on private enterprise were often prohibitive. Inexperienced management sometimes ruled policy and controlled decisions by brute force. Strikes were forbidden, and owners/managers were not permitted to make changes or decisions which would impact their productivity or profit.
The economy developed into a type of socialism called samoupravljanje (self-management), in which workers controlled socially-owned enterprises. This kind of market socialism created significantly better economic conditions than in the Eastern Bloc countries. Croatia went through intensive industrialization in the 1960s and 1970s with industrial output increasing several-fold and with Zagreb surpassing Belgrade in industry. Factories and other organizations were often named after Partisans who were declared national heroes. This practice also spread to street names, as well as the names of parks and buildings.
Before World War II, Croatia's industry was not developed, with the vast majority of the people employed in agriculture. By 1991, the country was completely transformed into a modern industrialized state. At the same time, the Croatian Adriatic coast had become a popular tourist destination, and the coastal republics (but mostly SR Croatia) profited greatly from this, as tourist numbers reached levels still unsurpassed in modern Croatia. The government brought unprecedented economic and industrial growth, high levels of social security, and a very low crime rate. The country completely recovered from WWII and achieved a very high GDP and economic growth rate, significantly higher than those of the present-day republic.
The constitution of 1963 balanced power in the country between the Croats and the Serbs and alleviated the imbalance coming from the fact that the Croats were again in a minority position. Trends after 1965 (like the fall of OZNA and UDBA chief Aleksandar Ranković from power in 1966), however, led to the Croatian Spring of 1970–71, when students in Zagreb organized demonstrations to achieve greater civil liberties and greater Croatian autonomy. The regime stifled public protest and incarcerated the leaders, but this led to the ratification of a new constitution in 1974, giving more rights to the individual republics.
Radical Ustaše cells of Croatian émigrés based in Australia and Western Europe planned and attempted to carry out acts of sabotage within Yugoslavia, including an incursion from Austria of 19 armed men in June 1971, who unsuccessfully aimed to incite a popular Croatian uprising against what they called the "Serbo-communist" regime in Belgrade.
Until the breakup of Yugoslavia (1980–1991)
In 1980, after Tito's death, economic, political, and religious difficulties started to mount and the federal government began to crumble. The crisis in Kosovo and, in 1986, the emergence of Slobodan Milošević in Serbia provoked a very negative reaction in Croatia and Slovenia; politicians from both republics feared that his motives would threaten their republics' autonomy. With the climate of change throughout Eastern Europe during the 1980s, the communist hegemony was challenged (at the same time, the Milošević government began to gradually concentrate Yugoslav power in Serbia, and calls for free multi-party elections were becoming louder).
In June 1989, the Croatian Democratic Union (HDZ) was founded by Croatian nationalist dissidents led by Franjo Tuđman, a former fighter in Tito's Partisan movement and a JNA General. At this time, Yugoslavia was still a one-party state and open manifestations of Croatian nationalism were considered dangerous, so a new party was founded in an almost conspiratorial manner. It was only on 13 December 1989 that the governing League of Communists of Croatia agreed to legalize opposition political parties and hold free elections in the spring of 1990.
On 23 January 1990, at its 14th Congress, the Communist League of Yugoslavia voted to remove its monopoly on political power. The same day, it effectively ceased to exist as a national party when the League of Communists of Slovenia walked out after SR Serbia's President Slobodan Milošević blocked all their reformist proposals, which caused the League of Communists of Croatia to further distance themselves from the idea of a joint state.
Republic of Croatia (1991–present)
Introduction of multi-party political system
On 22 April and 7 May 1990, the first free multi-party elections were held in Croatia. Franjo Tuđman's Croatian Democratic Union (HDZ) won 42% of the vote, against the 26% won by Ivica Račan's reformed communist Party of Democratic Change (SDP). Croatia's first-past-the-post election system enabled Tuđman to form the government relatively independently, as the win translated into 205 of 351 mandates. The HDZ intended to secure independence for Croatia, contrary to the wishes of some ethnic Serbs in the republic and of federal politicians in Belgrade. The excessively polarized climate soon escalated into complete estrangement between the two nations and spiraled into sectarian violence.
On 25 July 1990, a Serbian Assembly was established in Srb, north of Knin, as the political representation of the Serbian people in Croatia. The Serbian Assembly declared "sovereignty and autonomy of the Serb people in Croatia". Their position was that if Croatia could secede from Yugoslavia, then the Serbs could secede from Croatia. Milan Babić, a dentist from the southern town of Knin, was elected president. The rebel Croatian Serbs established some paramilitary militias under the leadership of Milan Martić, the police chief in Knin.
On 17 August 1990, the Serbs of Croatia began what became known as the Log Revolution, where barricades of logs were placed across roads throughout the South as an expression of their secession from Croatia. This effectively cut Croatia in two, separating the coastal region of Dalmatia from the rest of the country. The Croatian government responded to the road blockades by sending special police teams in helicopters to the scene, but they were intercepted by SFR Yugoslav Air Force fighter jets and forced to turn back to Zagreb.
The Croatian constitution was passed in December 1990, categorizing Serbs as a minority group along with other ethnic groups. On 21 December 1990, Babić's administration announced the creation of a Serbian Autonomous Oblast of Krajina (or SAO Krajina). Other Serb-dominated communities in eastern Croatia announced that they would also join SAO Krajina and ceased paying taxes to the Zagreb government.
On Easter Sunday, 31 March 1991, the first fatal clashes occurred when police from the Croatian Ministry of the Interior (MUP) entered the Plitvice Lakes National Park to expel rebel Serb forces. Serb paramilitaries ambushed a bus carrying Croatian police into the national park on the road north of Korenica, sparking a day-long gun battle between the two sides. During the fighting, one Croat and one Serb policeman were killed. Twenty other people were injured and twenty-nine Krajina Serb paramilitaries and policemen were taken prisoner by Croatian forces. Among the prisoners was Goran Hadžić, who would later become the President of the Republic of Serbian Krajina.
On 2 May 1991, the Croatian parliament voted to hold an independence referendum. On 19 May 1991, with a turnout of almost 80%, 93.24% voted for independence. Krajina boycotted the referendum. They had held their referendum a week earlier on 12 May 1991 in the territories they controlled and voted to remain in Yugoslavia. The Croatian government did not recognize their referendum as valid.
On 25 June 1991, the Croatian Parliament declared independence from Yugoslavia. Slovenia declared independence from Yugoslavia on the same day.
War of Independence (1991–1995)
During the Croatian War of Independence, the civilian population fled the areas of armed conflict en masse, with hundreds of thousands of Croats moving away from the Bosnian and Serbian border areas. In many places, masses of civilians were forced out by the Yugoslav National Army (JNA), which consisted mostly of conscripts from Serbia and Montenegro, and irregulars from Serbia, participating in what became known as ethnic cleansing.
The border city of Vukovar underwent a three-month siege during the Battle of Vukovar, which left most of the city destroyed and forced the majority of the population to flee. The city fell to Serbian forces on 18 November 1991, followed by the Vukovar massacre.
United Nations-sponsored ceasefires followed, and the warring parties mostly became entrenched. The Yugoslav People's Army retreated from Croatia into Bosnia and Herzegovina, where a new cycle of tensions was escalating: the Bosnian War was about to start. During 1992 and 1993, Croatia also handled an estimated 700,000 refugees from Bosnia, mainly Bosnian Muslims.
Armed conflict in Croatia remained intermittent and mostly small-scale until 1995. In early August, Croatia embarked on Operation Storm, an attack that quickly reconquered most of the territories from the Republic of Serbian Krajina authorities, leading to a mass exodus of the Serbian population. Estimates of the number of Serbs who fled before, during and after the operation range from 90,000 to 200,000.
As a result of this operation, a few months later the Bosnian War ended with the negotiation of the Dayton Agreement. A peaceful integration of the remaining Serbian-controlled territories in eastern Slavonia was completed in 1998 under UN supervision. The majority of the Serbs who fled from former Krajina did not return due to fears of ethnic violence, discrimination, and property repossession problems; and the Croatian government has yet to achieve the conditions for full reintegration. According to the United Nations High Commissioner for Refugees, around 125,000 ethnic Serbs who fled the 1991–1995 conflict are registered as having returned to Croatia, of whom around 55,000 remain permanently.
Transition period
Croatia became a member of the Council of Europe in 1996. Between 1995 and 1997 Franjo Tuđman became increasingly authoritarian and refused to formally acknowledge the local election results in the City of Zagreb, leading to the Zagreb crisis. In 1996 his government attempted to shut down Radio 101, a popular radio station that was critical of HDZ and frequently mocked HDZ and Tuđman himself. When Radio 101's broadcasting rights were revoked in 1996, some 120,000 Croatian citizens protested against the decision in Ban Jelačić Square. Tuđman ordered the protest to be suppressed by riot police, but the then Minister of the Interior, Ivan Jarnjak, disobeyed the order and was subsequently dismissed from his position. While 1996 and 1997 were a period of post-war recovery and improving economic conditions, in 1998 and 1999 Croatia experienced an economic depression that left thousands unemployed.
The remainder of former Krajina, adjacent to the FR Yugoslavia, negotiated a peaceful reintegration process with the Croatian government. The so-called Erdut Agreement made the area a temporary protectorate of the UN Transitional Administration for Eastern Slavonia, Baranja and Western Sirmium. The area was formally re-integrated into Croatia by 1998.
Franjo Tuđman's government started to lose popularity as it was criticized for its involvement in suspicious privatization deals in the early 1990s, as well as for international isolation. The country experienced a mild recession in 1998 and 1999.
Tuđman died in 1999 and in the early 2000 parliamentary elections, the nationalist Croatian Democratic Union (HDZ) government was replaced by a center-left coalition under the Social Democratic Party of Croatia, with Ivica Račan as prime minister. At the same time, presidential elections were held which were won by a moderate, Stjepan Mesić. The new Račan government amended the constitution, changing the political system from a presidential system to a parliamentary system, transferring most executive presidential powers from the president to the institutions of the parliament and the prime minister.
The new government also started several large building projects, including state-sponsored housing, more rebuilding efforts to enable refugee return, and the building of the A1 highway. The country achieved notable economic growth during these years, while the unemployment rate continued to rise until 2001 when it finally started falling. Croatia became a World Trade Organization (WTO) member in 2000 and started the Accession of Croatia to the European Union in 2003.
In late 2003, new parliamentary elections were held and a reformed HDZ party won under the leadership of Ivo Sanader, who became prime minister. European accession was delayed by controversies over the extradition of army generals to the International Criminal Tribunal for the former Yugoslavia (ICTY), including the fugitive Ante Gotovina.
Sanader was reelected in the closely contested 2007 parliamentary election. Other complications continued to stall the EU negotiating process, most notably Slovenia's blockade of Croatia's EU accession in 2008–2009. In June 2009, Sanader abruptly resigned from his post and named Jadranka Kosor in his place. Kosor introduced austerity measures to counter the economic crisis and launched an anti-corruption campaign aimed at public officials. In late 2009, Kosor signed an agreement with Borut Pahor, the premier of Slovenia, that allowed the EU accession to proceed.
In the 2009–2010 Croatian presidential election, Ivo Josipović, the candidate of the SDP, won a landslide victory.
Sanader attempted to return to the HDZ in 2010 but was expelled from the party, and the anti-corruption office USKOK soon had him arrested on several corruption charges.
In November 2012, a court in Croatia sentenced former Prime Minister Ivo Sanader, in office from 2003 to 2009, to 10 years in prison for taking bribes. Sanader argued that the case against him was politically motivated.
In 2011, the accession agreement was concluded, clearing the way for Croatia to join the European Union.
The 2011 Croatian parliamentary election was held on 4 December 2011 and was won by the Kukuriku coalition. After the election, a center-left government was formed, led by the new prime minister Zoran Milanović.
Croatia in European Union
Following the ratification of the Treaty of Accession 2011 and the successful 2012 Croatian European Union membership referendum, Croatia joined the EU on 1 July 2013.
In the 2014–15 Croatian presidential election, Kolinda Grabar-Kitarović became the first female President of Croatia.
The 2015 Croatian parliamentary election resulted in the victory of the Patriotic Coalition which formed a new government with the Bridge of Independent Lists. However, a vote of no confidence brought down the Cabinet of Tihomir Orešković. After the 2016 Croatian parliamentary election, the Cabinet of Andrej Plenković was formed.
In January 2020, the former prime minister Zoran Milanović of the Social Democrats (SDP) won the presidential election, defeating the center-right incumbent Kolinda Grabar-Kitarović of the ruling Croatian Democratic Union (HDZ). In March 2020, the Croatian capital Zagreb was struck by a magnitude 5.3 earthquake which caused significant damage to the city. In July 2020, the ruling center-right party HDZ won the parliamentary election. On 12 October 2020, the right-wing extremist Danijel Bezuk attempted an attack on the building of the Croatian government, wounding a police officer before killing himself. In December 2020, Banovina, one of the less developed regions of Croatia, was shaken by a magnitude 6.4 earthquake which killed several people and devastated the town of Petrinja. Over two and a half years of the global COVID-19 pandemic, 16,103 Croatian citizens died from the disease. In March 2022, a Soviet-made Tu-141 drone crashed in Zagreb, most likely as a consequence of the 2022 Russian invasion of Ukraine. On 26 July 2022, Croatian authorities opened the Pelješac Bridge, connecting the southernmost part of Croatia with the rest of the country. On 1 January 2023, Croatia became a member of both the Eurozone and the Schengen Area.
See also
Bans of Croatia
Croatian art
Croatian History Museum
Croatian Military Frontier
Croatian nobility
Culture of Croatia
History of Dalmatia
History of Hungary
History of Istria
Hundred Years' Croatian–Ottoman War
Kingdom of Dalmatia
Kingdom of Slavonia
Kings of Croatia
List of noble families of Croatia
List of rulers of Croatia
Military history of Croatia
Timeline of Croatian history
Turkish Croatia
Twelve noble tribes of Croatia
References
Bibliography
Patterson, Patrick Hyder. "The futile crescent? Judging the legacies of Ottoman rule in Croatian history". Austrian History Yearbook, vol. 40, 2009, p. 125+. online.
External links
Croatian Institute of History
Museum Documentation Center
The Croatian History Museum
Journal of Croatian Studies
Short History of Croatia
Overview of History, Culture, and Science
History of Croatia: Primary Documents
Overview of History, Culture and Science of Croatia
WWW-VL History:Croatia
Dr. Michael McAdams: Croatia – Myth and Reality
Historical Maps of Croatia
Croatia under Tomislav – from Nada Klaić's book
The History Files: Croatia
A brief history of Croatia
The Early History of Croatia
Croatia since Independence 1990–2018
Geography of Croatia
The geography of Croatia is defined by its location—it is described as a part of Central Europe and Southeast Europe, a part of the Balkans and Southern Europe. Croatia's territory covers , making it the 127th largest country in the world. Bordered by Slovenia in the northwest, Hungary in the northeast, Bosnia and Herzegovina and Serbia in the east, Montenegro in the southeast and the Adriatic Sea in the south, it lies mostly between latitudes 42° and 47° N and longitudes 13° and 20° E. Croatia's territorial waters encompass in a wide zone, and its internal waters located within the baseline cover an additional .
The Pannonian Basin and the Dinaric Alps, along with the Adriatic Basin, represent major geomorphological parts of Croatia. Lowlands make up the bulk of Croatia, with elevations of less than above sea level recorded in 53.42% of the country. Most of the lowlands are found in the northern regions, especially in Slavonia, itself a part of the Pannonian Basin plain. The plains are interspersed with horst and graben structures, believed to have broken the Pliocene Pannonian Sea's surface as islands. The greatest concentration of ground at relatively high elevations is found in the Lika and Gorski Kotar areas in the Dinaric Alps, but high areas are found in all regions of Croatia to some extent. The Dinaric Alps contain the highest mountain in Croatia— Dinara—as well as all other mountains in Croatia higher than . Croatia's Adriatic Sea mainland coast is long, while its 1,246 islands and islets encompass a further of coastline—the most indented coastline in the Mediterranean. Karst topography makes up about half of Croatia and is especially prominent in the Dinaric Alps, as well as throughout the coastal areas and the islands.
62% of Croatia's territory is encompassed by the Black Sea drainage basin. The area includes the largest rivers flowing in the country: the Danube, Sava, Drava, Mur and Kupa. The remainder belongs to the Adriatic Sea drainage basin, where the largest river by far is the Neretva. Most of Croatia has a moderately warm and rainy continental climate as defined by the Köppen climate classification. The mean monthly temperature ranges between and . Croatia has a number of ecoregions because of its climate and geomorphology, and the country is consequently among the most biodiverse in Europe. There are four types of biogeographical regions in Croatia: Mediterranean along the coast and in its immediate hinterland; Alpine in the elevated Lika and Gorski Kotar; Pannonian along the Drava and Danube; and Continental in the remaining areas. There are 444 protected natural areas in Croatia, encompassing 8.5% of the country; there are about 37,000 known species in Croatia, and the total number of species is estimated to be between 50,000 and 100,000.
The permanent population of Croatia by the 2011 census reached 4.29 million. The population density was 75.8 inhabitants per square kilometre, and the overall life expectancy in Croatia at birth was 75.7 years. The country is inhabited mostly by Croats (89.6%), while minorities include Serbs (4.5%), and 21 other ethnicities (less than 1% each) recognised by the constitution. Since the counties were re-established in 1992, Croatia is divided into 20 counties and the capital city of Zagreb. The counties subdivide into 127 cities and 429 municipalities. The average urbanisation rate in Croatia stands at 56%, with a growing urban population and shrinking rural population. The largest city and the nation's capital is Zagreb, with an urban population of 797,952 in the city itself and a metropolitan area population of 978,161. The populations of Split and Rijeka exceed 100,000, and five more cities in Croatia have populations over 50,000.
Area and borders
Croatia's territory covers , making it the 127th largest country in the world. The physical geography of Croatia is defined by its location—it is described as a part of Southeast Europe. Croatia borders Slovenia (for 667.8 km) in the northwest, Hungary (for 355.5 km) in the north, Serbia (for 317.6 km) in the east, Bosnia–Herzegovina (for 1,009.1 km) in the east and southeast, Montenegro (for 22.6 km) in the southeast, and the Adriatic Sea in the west, south and southwest. It lies mostly between latitudes 42° and 47° N and longitudes 13° and 20° E. Part of the extreme south of Croatia is separated from the rest of the mainland by a short coastal strip around Neum belonging to Bosnia–Herzegovina. The country's shape is described as a 'horseshoe' (), and it arose as a result of medieval geopolitics.
Croatia's border with Hungary was inherited from Yugoslavia. Much of the border with Hungary follows the Drava River or its former river bed; that part of the border dates from the Middle Ages. The border in Međimurje and Baranya was defined as a border between the Kingdom of Hungary and the Kingdom of Serbs, Croats, and Slovenes, later renamed the Kingdom of Yugoslavia, pursuant to the Treaty of Trianon of 1920. The present outline of the border with Bosnia–Herzegovina and border with Montenegro is largely the result of the Ottoman conquest and subsequent recapture of territories in the Great Turkish War of 1667–1698 formally ending with the Treaty of Karlowitz, as well as the Fifth and Seventh Ottoman–Venetian Wars. This border had minor modifications in 1947 when all borders of the former Yugoslav constituent republics were defined by demarcation commissions implementing the AVNOJ decisions of 1943 and 1945 regarding the federal organisation of Yugoslavia. The commissions also defined Baranya and Međimurje as Croatian territories, and moreover set up the present-day border between Serbia and Croatia in Syrmia and along the Danube River between Ilok and the Drava river's mouth and further north to the Hungarian border; the Ilok/Drava section matched the border between the Kingdom of Croatia-Slavonia and Bács-Bodrog County that existed until 1918 (the end of World War I). Most of the border with Slovenia was also defined by the commissions, matching the northwestern border of the Kingdom of Croatia-Slavonia, and establishing a new section of Croatian border north of the Istrian peninsula according to the ethnic composition of the territory previously belonging to the Kingdom of Italy.
Pursuant to the 1947 Treaty of Peace with Italy the islands of Cres, Lastovo and Palagruža and the cities of Zadar and Rijeka and most of Istria went to communist Yugoslavia and Croatia, while carving out the Free Territory of Trieste (FTT) as a city-state. The FTT was partitioned in 1954 as Trieste itself and the area to the north of it were placed under Italian control, and the rest under Yugoslav control. The arrangement was made permanent by the Treaty of Osimo in 1975. The former FTT's Yugoslav part was partitioned between Croatia and Slovenia, largely conforming to the area population's ethnic composition.
In the late 19th century, Austria-Hungary established a geodetic network, for which the elevation benchmark was determined by the Adriatic Sea's average level at the Sartorio pier in Trieste. This benchmark was subsequently retained by Austria, adopted by Yugoslavia, and kept by the states that emerged after its dissolution, including Croatia.
Extreme points
The geographical extreme points of Croatia are Žabnik in Međimurje County as the northernmost point, Rađevac near Ilok in Vukovar-Syrmia County as the easternmost point, Cape Lako near Bašanija in Istria County as the westernmost point and the islet of Galijula in Palagruža archipelago in Split-Dalmatia County as the southernmost point. On the mainland, Cape Oštra of the Prevlaka peninsula in Dubrovnik-Neretva County is the southernmost point.
Maritime claims
Italy and Yugoslavia defined their delineation of the continental shelf in the Adriatic Sea in 1968, with an additional agreement on the boundary in the Gulf of Trieste signed in 1975 in accordance with the Treaty of Osimo. All the successor states of former Yugoslavia accepted the agreements. Prior to Yugoslavia's break-up, Albania, Italy and Yugoslavia initially proclaimed territorial waters, subsequently reduced to the international-standard ; all sides adopted baseline systems. Croatia also declared its Ecological and Fisheries Protection Zone (ZERP)—a part of its Exclusive Economic Zone—as extending to the continental shelf boundary. Croatia's territorial waters encompass ; its internal waters located within the baseline cover an additional .
Border disputes
Maritime border disputes
Croatia and Slovenia started negotiations to define maritime borders in the Gulf of Piran in 1992 but failed to agree, resulting in a dispute. Both countries also declared their economic zones, which partially overlap. Croatia's application to become an EU member state was initially suspended pending resolution of its border disputes with Slovenia. These were eventually settled with an agreement to accept the decision of an international arbitration commission set up via the UN, enabling Croatia to progress towards EU membership. The dispute has caused no major practical problems in areas other than the EU membership negotiations progress, even before the arbitration agreement.
The maritime boundary between Bosnia–Herzegovina and Croatia was formally settled in 1999, but a few issues are still contested—the Klek peninsula and two islets in the border area. The Croatia–Montenegro maritime boundary is disputed in the Bay of Kotor, at the Prevlaka peninsula. The situation was exacerbated by the peninsula's occupation by the Yugoslav People's Army and later by the Serbian-Montenegrin army, which in turn was replaced by a United Nations observer mission that lasted until 2002. Croatia took over the area with an agreement that allowed Montenegrin presence in Croatian waters in the bay, and the dispute has become far less contentious since the independence of Montenegro in 2006.
Land border disputes
The land border disputes pertain to comparatively small strips of land. The Croatia–Slovenia border disputes are: along the Dragonja River's lower course where Slovenia claims three hamlets on the river's left bank; the Sveta Gera peak of Žumberak where exact territorial claims were never made and appear to be limited to a military barracks on the peak itself; and along the Mura River where Slovenia wants the border to be along the current river bed instead of along a former one and claims a (largely if not completely uninhabited) piece of land near Hotiza. These claims are likewise in the process of being settled by binding arbitration.
There are also land border disputes between Croatia and Serbia. The two countries presently each control one bank of the present-day river, but Croatia claims that the border line should follow the cadastral borders between the former municipalities of SR Croatia and SR Serbia along the Danube, as defined by a Yugoslav commission in 1947 (effectively following a former river bed); the border claimed by Croatia also includes the Vukovar and Šarengrad islands in the Danube as Croatian territory. There is also a border dispute with Bosnia–Herzegovina: specifically, Croatia claims the Unčica channel on the right bank of the Una as the border at Hrvatska Kostajnica, while Bosnia and Herzegovina claims the course of the Una River as the border there.
Physical geography
Geology
The geology of Croatia has some Precambrian rocks mostly covered by younger sedimentary rocks and deformed or superimposed by tectonic activity.
The country is split into two main onshore provinces, a smaller part of the Pannonian Basin and the larger Dinarides, which differ considerably in their geology.
The carbonate platform karst landscape of Croatia helped to create the weathering conditions to form bauxite, gypsum, clay, amphibolite, granite, spilite, gabbro, diabase and limestone.
Topography
Most of Croatia is lowlands, with elevations of less than above sea level recorded in 53.42% of the country. Most of the lowlands are found in the country's northern regions, especially in Slavonia, representing a part of the Pannonian Basin. Areas with elevations of above sea level encompass 25.61% of Croatia's territory, and the areas between above sea level cover 17.11% of the country. A further 3.71% of the land is above sea level, and only 0.15% of Croatia's territory is elevated greater than above sea level. The greatest concentration of ground at relatively high elevations is found in the Lika and Gorski Kotar areas in the Dinaric Alps, but such areas are found in all regions of Croatia to some extent. The Pannonian Basin and the Dinaric Alps, along with the Adriatic Basin, represent major geomorphological parts of Croatia.
Adriatic Basin
Croatia's Adriatic Sea mainland coast is long, while its 1,246 islands and islets have a further of coastline. The distance between the extreme points of Croatia's coastline is . The number of islands includes all islands, islets, and rocks of all sizes, including ones emerging only at low tide. The largest islands in the Adriatic are Cres and Krk, each covering ; the tallest is Brač, reaching above sea level. The islands include 47 permanently inhabited ones, the most populous among them being Krk and Korčula.
The shore is the most indented coastline in the Mediterranean. The majority of the coast is characterised by a karst topography, developed from the Adriatic Carbonate Platform. Karstification there largely began after the final raising of the Dinarides in the Oligocene and Miocene epochs, when carbonate rock was exposed to atmospheric effects such as rain; this extended to below the present sea level, exposed during the Last Glacial Maximum's sea level drop. It is estimated that some karst formations are related to earlier drops of sea level, most notably the Messinian salinity crisis. The eastern coast's largest part consists of carbonate rocks, while flysch rock is significantly represented in the Gulf of Trieste coast, on the Kvarner Gulf coast opposite Krk, and in Dalmatia north of Split. There are comparably small alluvial areas of the Adriatic coast in Croatia—most notably the Neretva river delta. Western Istria is gradually subsiding, having sunk about in the past 2,000 years.
In the Middle Adriatic Basin, there is evidence of Permian volcanism in the area of Komiža on the island of Vis, in addition to the volcanic islands of Jabuka and Brusnik. Earthquakes are frequent in the area around the Adriatic Sea, although most are too faint to be felt; an earthquake doing significant damage happens every few decades, with major earthquakes every few centuries.
Dinaric Alps
The Dinaric Alps are linked to a Late Jurassic to recent times fold and thrust belt, itself part of the Alpine orogeny, extending southeast from the southern Alps. The Dinaric Alps in Croatia encompass the entire Gorski Kotar and Lika regions, as well as considerable parts of Dalmatia, with their northeastern edge running from Žumberak to the Banovina region, along the Sava River, and their westernmost landforms being Ćićarija and Učka mountains in Istria. The Dinaric Alps contain the highest mountain in Croatia— Dinara—as well as all other mountains in Croatia higher than : Biokovo, Velebit, Plješivica, Velika Kapela, Risnjak, Svilaja and Snježnik.
Karst topography makes up about half of Croatia and is especially prominent in the Dinaric Alps. There are numerous caves in Croatia, 49 of which are deeper than , 14 deeper than and 3 deeper than . The longest cave in Croatia, Kita Gaćešina, is at the same time the longest cave in the Dinaric Alps at .
Pannonian Basin
The Pannonian Basin took shape through Miocene thinning and subsidence of crust structures formed during the Late Paleozoic Variscan orogeny. The Paleozoic and Mesozoic structures are visible in Papuk and other Slavonian mountains. The processes also led to the formation of a stratovolcanic chain in the basin 12–17 Mya; intensified subsidence was observed until 5 Mya as well as flood basalts at about 7.5 Mya. The contemporary tectonic uplift of the Carpathian Mountains severed water flow to the Black Sea and the Pannonian Sea formed in the basin. Sediments were transported to the basin from the uplifting Carpathian and Dinaric mountains, with particularly deep fluvial sediments being deposited in the Pleistocene epoch during the Transdanubian Mountains' formation. Ultimately, up to of sediment was deposited in the basin, and the sea eventually drained through the Iron Gate gorge.
The results are large plains in eastern Slavonia's Baranya and Syrmia regions, as well as in river valleys, especially along the Sava, Drava and Kupa. The plains are interspersed by horst and graben structures, believed to have broken the Pannonian Sea's surface as islands. The tallest among such landforms are Ivanšćica and Medvednica north of Zagreb—both are also at least partially in Hrvatsko Zagorje—as well as Psunj and Papuk that are the tallest among the Slavonian mountains surrounding Požega. Psunj, Papuk and adjacent Krndija consist mostly of Paleozoic rocks from 300 to 350 Mya. Požeška gora, adjacent to Psunj, consists of much more recent Neogene rocks, but there are also Upper Cretaceous sediments and igneous rocks forming the main ridge of the hill; these represent the largest igneous landform in Croatia. A smaller piece of igneous terrain is also present on Papuk, near Voćin. The two, as well as the Moslavačka gora mountains, are possibly remnants of a volcanic arc from the same tectonic plate collision that caused the Dinaric Alps.
Hydrography
The largest part of Croatia—62% of its territory—is encompassed by the Black Sea drainage basin. The area includes the largest rivers flowing in the country: the Danube, Sava, Drava, Mura and Kupa. The rest belongs to the Adriatic Sea drainage basin, where the largest river by far is the Neretva. The longest rivers in Croatia are the Sava, Drava, Kupa and a section of the Danube. The longest rivers emptying into the Adriatic Sea are the Cetina and only a section of the Neretva.
The largest lakes in Croatia are Lake Vrana located in the northern Dalmatia, Lake Dubrava near Varaždin, Peruća Lake (reservoir) on the Cetina River, Lake Prokljan near Skradin and Lake Varaždin reservoir through which the Drava River flows near Varaždin. Croatia's most famous lakes are the Plitvice lakes, a system of 16 lakes with waterfalls connecting them over dolomite and limestone cascades. The lakes are renowned for their distinctive colours, ranging from turquoise to mint green, grey or blue. Croatia has a remarkable wealth in terms of wetlands. Four of those are included in the Ramsar list of internationally important wetlands: Lonjsko Polje along the Sava and Lonja rivers near Sisak, Kopački Rit at the confluence of the Drava and Danube, the Neretva Delta and Crna Mlaka near Jastrebarsko.
Average annual precipitation and evaporation rates are and , respectively. Taking into consideration the overall water balance, the total Croatian water resources amount to per year per capita, including per year per capita from sources inside Croatia.
Climate
Most of Croatia has a moderately warm and rainy oceanic climate (Cfb) as defined by the Köppen climate classification. Mean monthly temperatures range between (in January) and (in July). The coldest parts of the country are Lika and Gorski Kotar where a snowy forested climate is found at elevations above . The warmest areas of Croatia are at the Adriatic coast and especially in its immediate hinterland, which are characterised by a Mediterranean climate since temperatures are moderated by the sea. Consequently, temperature peaks are more pronounced in the continental areas: the lowest temperature of was recorded on 4 February 1929 in Gospić, and the highest temperature of was recorded on 5 August 1981 in Ploče.
The mean annual precipitation is depending on the geographic region and prevailing climate type. The least precipitation is recorded in the outer islands (Vis, Lastovo, Biševo, and Svetac) and in the eastern parts of Slavonia; however, in the latter case it is mostly during the growing season. The most precipitation is observed on the Dinara mountain range and in Gorski Kotar, where some of the highest annual precipitation totals in Europe occur.
The prevailing winds in the interior are light to moderate northeast or southwest; in the coastal area, the prevailing winds are determined by local area features. Higher wind velocities are more often recorded in cooler months along the coast, generally as cool northeasterly buras or, less frequently, as warm southerly jugos. The sunniest parts of the country are the outer islands, Hvar and Korčula, where more than 2,700 hours of sunshine are recorded per year, followed by the southern Adriatic Sea area in general, northern Adriatic coast, and Slavonia, all with more than 2,000 hours of sunshine per year.
Climate change
Biodiversity
Croatia can be subdivided between a number of ecoregions because of its climate and geomorphology, and the country is consequently one of the richest in Europe in terms of biodiversity. There are four types of biogeographical regions in Croatia: Mediterranean along the coast and in its immediate hinterland, Alpine in most of Lika and Gorski Kotar, Pannonian along the Drava and Danube, and continental in the remaining areas. Among the most significant are karst habitats; these include submerged karst, such as Zrmanja and Krka canyons and tufa barriers, as well as underground habitats. The karst geology has produced approximately 7,000 caves and pits, many of which are inhabited by troglobitic (exclusively cave-dwelling) animals such as the olm, a cave salamander and the only European troglobitic vertebrate. Forests are also significant in the country, as they cover representing 46.8% of Croatia's land surface. The other habitat types include wetlands, grasslands, bogs, fens, scrub habitats, coastal and marine habitats. In terms of phytogeography, Croatia is part of the Boreal Kingdom; specifically, it is part of the Illyrian and Central European provinces of the Circumboreal Region and the Adriatic province of the Mediterranean Region. The World Wide Fund for Nature divides land in Croatia into three ecoregions—Pannonian mixed forests, Dinaric Mountains mixed forests and Illyrian deciduous forests. Biomes in Croatia include temperate broadleaf/mixed forest and Mediterranean forests, woodlands and scrub; all are in the Palearctic realm.
Croatia has 38,226 known taxa, 2.8% of which are endemic; the actual number (including undiscovered species) is estimated to be between 50,000 and 100,000. The estimate is supported by nearly 400 new taxa of invertebrates discovered in Croatia in 2000–2005 alone. There are more than a thousand endemic species, especially in the Velebit and Biokovo mountains, Adriatic islands and karst rivers. Legislation protects 1,131 species. Indigenous cultivars of plants and breeds of domesticated animals are also numerous; they include five breeds of horses, five breeds of cattle, eight breeds of sheep, two breeds of pigs and a poultry breed. Nine of these indigenous breeds are endangered or critically endangered.
There are 444 Croatian protected areas, encompassing 8.5% of the country. These include 8 national parks, 2 strict reserves and 11 nature parks, accounting for 78% of the total protected area. The most famous protected area and the oldest national park in Croatia is the Plitvice Lakes National Park, a UNESCO World Heritage Site. Velebit Nature Park is a part of the UNESCO Man and the Biosphere Programme. The strict and special reserves, as well as the national and nature parks, are managed and protected by the central government, while other protected areas are managed by counties. In 2005, the National Ecological Network was set up as the first step in preparation for EU membership and joining the Natura 2000 network.
Habitat destruction represents a threat to biodiversity in Croatia, as developed and agricultural land is expanded into previous natural habitats, while habitat fragmentation occurs as roads are created or expanded. A further threat to biodiversity is the introduction of invasive species, with Caulerpa racemosa and C. taxifolia identified as especially problematic ones. The invasive algae are monitored and regularly removed to protect the benthic habitat. Agricultural monocultures have also been identified as a threat to biodiversity.
Ecology
The ecological footprint of Croatia's population and industry varies significantly between the country's regions since 50% of the population resides in 26.8% of the nation's territory, with a particularly high impact made by the city of Zagreb and Zagreb County areas—their combined area comprises 6.6% of Croatia's territory while encompassing 25% of the population. The ecological footprint is most notably from the increased development of settlements and the sea coast leading to habitat fragmentation. Between 1998 and 2008, the greatest changes of land use pertained to artificially developed areas, but the scale of development is negligible compared to EU member states.
The Croatian Environment Agency (CEA), a public institution established by the Government of Croatia to collect and analyse information on the environment, has identified further ecological problems as well as various degrees of progress in terms of curbing their environmental impact. These problems include inadequate legal landfills as well as the presence of illegal landfills; between 2005 and 2008, 62 authorised and 423 illegal landfills were rehabilitated. In the same period, the number of issued waste management licences doubled, while the annual municipal solid waste volume increased by 23%, reaching per capita. The processes of soil acidification and organic matter degradation are present throughout Croatia, with increasing soil salinity levels in the Neretva river plain and spreading areas of alkali soil in Slavonia.
Croatian air pollution levels reflect the drop in industrial production recorded in 1991 at the onset of the Croatian War of Independence—pre-war emission levels were only reached in 1997. The use of desulfurised fuels has led to a 25% reduction of sulphur dioxide emissions between 1997 and 2004, and a further 7.2% drop by 2007. The rise in NOx emissions halted in 2007 and reversed in 2008. The use of unleaded petrol reduced emissions of lead into the atmosphere by 91.5% between 1997 and 2004. Air quality measurements indicate that the air in rural areas is essentially clean, and in urban centres it generally complies with legal requirements. The most significant sources of greenhouse gas (GHG) emissions in Croatia are energy production (72%), industry (13%) and agriculture (11%). The average annual increase of GHG emissions is 3%, remaining within the Kyoto Protocol limits. Between 1990 and 2007, the use of ozone depleting substances was reduced by 92%; their use is expected to be abolished by 2015.
Even though Croatia has sufficient water resources at its disposal, these are not uniformly distributed and public water supply network losses remain high—estimated at 44%. Between 2004 and 2008, the number of stations monitoring surface water pollution increased by 20%; the CEA reported 476 cases of water pollution in this period. At the same time organic waste pollution levels decreased slightly, which is attributed to the completion of new sewage treatment plants; their number increased 20%, reaching a total of 101. Nearly all of Croatia's groundwater aquifers are top quality, unlike the available surface water; the latter's quality varies in terms of biochemical oxygen demand and bacteriological water analysis results. As of 2008, 80% of the Croatian population are served by the public water supply system, but only 44% of the population have access to the public sewerage network, with septic systems in use. Adriatic Sea water quality monitoring between 2004 and 2008 indicated very good, oligotrophic conditions along most of the coast, while areas of increased eutrophication were identified in the Bay of Bakar, the Bay of Kaštela, the Port of Šibenik and near Ploče; other areas of localised pollution were identified near the larger coastal cities. In the period between 2004 and 2008, the CEA identified 283 cases of marine pollution (including 128 from vessels), which was a drop of 15% relative to the period encompassed by the previous report, 1997 to August 2005.
Land use
As of 2006, 46.8% of Croatia was occupied by of forest and shrub, while a further or 40.4% of the land was used for diverse agricultural uses including , or 7.8% of the total, for permanent crops. Bush and grass cover was present on or 8.4% of the territory, inland waters took up or 1.0% and marshes covered or 0.4% of the country. Artificial surfaces (primarily consisting of urban areas, roads, non-agricultural vegetation, sports areas and other recreational facilities) took up or 3.1% of the country's area. The greatest impetus for land use changes is the expansion of settlements and road construction.
Because of the Croatian War of Independence, there are numerous leftover minefields in Croatia, largely tracing former front lines. As of 2006, suspected minefields covered . As of 2012, 62% of the remaining minefields are situated in forests, 26% of them are found in agricultural land, and 12% are found in other land; it is expected that mine clearance will be complete by 2019.
Regions
Croatia is traditionally divided into numerous, often overlapping geographic regions, whose borders are not always clearly defined. The largest and most readily recognisable ones throughout the country are Central Croatia (also described as the Zagreb macro-region), Eastern Croatia (largely corresponding with Slavonia), and Mountainous Croatia (Lika and Gorski Kotar; to the west of Central Croatia). These three comprise the inland or continental part of Croatia. Coastal Croatia consists of a further two regions: Dalmatia or the southern littoral, between the general area of the city of Zadar and the southernmost tip of the country; and the northern littoral located north of Dalmatia, encompassing the Croatian Littoral and Istria. The geographical regions generally do not conform to county boundaries or other administrative divisions, and all of them encompass further, more specific, geographic regions.
Human geography
Demographics
The demographic features of the Croatian population are known through censuses, normally conducted in ten-year intervals and analysed by various statistical bureaus since the 1850s. The Croatian Bureau of Statistics has performed this task since the 1990s. The latest census in Croatia was performed in April 2011. The permanent population of Croatia at the 2011 census had reached 4.29 million. The population density was 75.8 inhabitants per square kilometre, and the overall life expectancy in Croatia at birth is 75.7 years. The population rose steadily (with the exception of censuses taken following the two world wars) from 2.1 million in 1857 until 1991, when it peaked at 4.7 million. Since 1991, Croatia's death rate has continuously exceeded its birth rate; the natural growth rate of the population is thus currently negative. Croatia is currently in the demographic transition's fourth or fifth stage. In terms of age structure, the population is dominated by the 15‑ to 64‑year‑old segment. The median age of the population is 41.4, and the gender ratio of the total population is 0.93 males per 1 female.
Croatia is inhabited mostly by Croats (89.6%), while minorities include Serbs (4.5%) and 21 other ethnicities (less than 1% each) recognised by the Constitution of Croatia. The demographic history of Croatia is marked by significant migrations, including: the Croats' arrival in the area; the growth of the Hungarian and German speaking population after the personal union of Croatia and Hungary; joining of the Habsburg Empire; migrations set off by the Ottoman conquests; and the growth of the Italian-speaking population in Istria and Dalmatia during the Venetian rule there. After Austria-Hungary's collapse, the Hungarian population declined, while the German-speaking population was forced out or fled during the last part of and after World War II, and a similar fate was suffered by the Italian population. The late 19th century and the 20th century were marked by large scale economic migrations abroad. The 1940s and the 1950s in Yugoslavia were marked by internal migrations in Yugoslavia, as well as by urbanisation. The most recent significant migrations came as a result of the Croatian War of Independence when hundreds of thousands were displaced.
The Croatian language is Croatia's official language, but the languages of constitutionally-recognised minorities are officially used in some local government units. Croatian is the native language identified by 96% of the population. A 2009 survey revealed that 78% of Croatians claim knowledge of at least one foreign language—most often English. The largest religions of Croatia are Roman Catholicism (86.3%), Orthodox Christianity (4.4%) and Islam (1.5%). Literacy in Croatia stands at 98.1%. The proportion of the population aged 15 and over attaining academic degrees has grown rapidly since 2001, doubling and reaching 16.7% by 2008. An estimated 4.5% of GDP is spent on education. Primary and secondary education are available in Croatian and in the languages of recognised minorities. Croatia has a universal health care system and in 2010, the nation spent 6.9% of its GDP on healthcare. The net monthly income in September 2011 averaged 5,397 kuna (729 euro). The most significant sources of employment in 2008 were wholesale and retail trade, the manufacturing industry and construction. In October 2011, the unemployment rate was 17.4%. Croatia's median equivalent household income tops the average Purchasing Power Standard of the ten countries which joined the EU in 2004, while trailing the EU average. The 2011 census recorded a total of 1.5 million private households; most owned their own housing.
Political geography
Croatia was first subdivided into counties in the Middle Ages. The divisions changed over time to reflect losses of territory to Ottoman conquest and subsequent liberation of the same territory, in addition to changes in the political status of Dalmatia, Dubrovnik and Istria. The traditional division of the country into counties was abolished in the 1920s, when the Kingdom of Serbs, Croats and Slovenes and the subsequent Kingdom of Yugoslavia introduced oblasts and banovinas respectively. Communist-ruled Croatia, as a constituent part of post-WWII Yugoslavia, abolished earlier divisions and introduced (mostly rural) municipalities, subdividing Croatia into approximately one hundred municipalities. Counties were reintroduced in 1992 by legislation, significantly altered in terms of territory relative to the pre-1920s subdivisions—for instance, in 1918 the Transleithanian part of Croatia was divided into eight counties with their seats in Bjelovar, Gospić, Ogulin, Požega, Vukovar, Varaždin, Osijek and Zagreb, while the 1992 legislation established 14 counties in the same territory. Međimurje County was established in the eponymous region acquired through the 1920 Treaty of Trianon. (The 1990 Croatian Constitution provided for a Chamber of the Counties as part of the government, and for counties themselves without specifying their names or number. However, the counties were not actually re-established until 1992, and the first Chamber of the Counties was elected in 1993.)
Since the counties were re-established in 1992, Croatia has been divided into 20 counties and the capital city of Zagreb, the latter having the authority and legal status of a county and a city at the same time (Zagreb County outside the city is administratively separate as of 1997). The county borders have changed in some instances since (for reasons such as historical ties and requests by cities), with the latest revision taking place in 2006. The counties subdivide into 127 cities and 429 municipalities.
The EU Nomenclature of Territorial Units for Statistics (NUTS) division of Croatia is performed in several tiers. NUTS 1 level places the entire country in a single unit, while there are three NUTS 2 regions; these are Central and Eastern (Pannonian) Croatia, Northwest Croatia and Adriatic Croatia. The last encompasses all counties along the Adriatic coast. Northwest Croatia includes the city of Zagreb and Krapina-Zagorje, Varaždin, Koprivnica-Križevci, Međimurje and Zagreb counties, and the Central and Eastern (Pannonian) Croatia includes the remaining areas—Bjelovar-Bilogora, Virovitica-Podravina, Požega-Slavonia, Brod-Posavina, Osijek-Baranja, Vukovar-Syrmia, Karlovac and Sisak-Moslavina counties. Individual counties and the city of Zagreb represent NUTS 3 level subdivision units in Croatia. The NUTS Local administrative unit divisions are two-tiered. The LAU 1 divisions match the counties and the city of Zagreb—in effect making these the same as NUTS 3 units—while the LAU 2 subdivisions correspond to the cities and municipalities of Croatia.
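One way to see the tiering described above is to encode it as a small lookup structure. The sketch below is only an illustrative representation, not an official dataset: the NUTS 2 region names and the Northwest and Pannonian county lists are taken from the text above, while the Adriatic list is filled in from general knowledge of the coastal counties and should be treated as an assumption.

```python
# Illustrative representation of the NUTS tiers described above (not an
# official dataset): NUTS 1 = the whole country, NUTS 2 = three regions,
# NUTS 3 = individual counties and the city of Zagreb.

NUTS = {
    "Croatia": {                                   # NUTS 1: the entire country
        "Northwest Croatia": [                     # NUTS 2 regions below
            "City of Zagreb", "Krapina-Zagorje", "Varaždin",
            "Koprivnica-Križevci", "Međimurje", "Zagreb County",
        ],
        "Central and Eastern (Pannonian) Croatia": [
            "Bjelovar-Bilogora", "Virovitica-Podravina", "Požega-Slavonia",
            "Brod-Posavina", "Osijek-Baranja", "Vukovar-Syrmia",
            "Karlovac", "Sisak-Moslavina",
        ],
        "Adriatic Croatia": [                      # coastal counties (assumed list)
            "Istria", "Primorje-Gorski Kotar", "Lika-Senj", "Zadar",
            "Šibenik-Knin", "Split-Dalmatia", "Dubrovnik-Neretva",
        ],
    }
}

def nuts2_region(county):
    """Return the NUTS 2 region containing the given NUTS 3 unit."""
    for region, counties in NUTS["Croatia"].items():
        if county in counties:
            return region
    raise KeyError(f"Unknown county: {county}")

print(nuts2_region("Zadar"))   # -> Adriatic Croatia
```

The LAU tiers described above would not need a separate structure in this toy model, since LAU 1 units coincide with the NUTS 3 counties and LAU 2 units are simply the cities and municipalities within them.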
Urbanisation
The average urbanisation rate in Croatia stands at 56%, with a growing urban population and shrinking rural population. The largest city and the nation's capital is Zagreb, with an urban population of 686,568 in the city itself. Zagreb's metropolitan area encompasses 341 additional settlements and, by the year 2001, the population of the area had reached 978,161; approximately 60% of Zagreb County's residents live in Zagreb's metropolitan area, as does about 41% of Croatia's urban population. The cities of Split and Rijeka are the largest settlements on the Croatian Adriatic coast, with each city's population being over 100,000. There are four other Croatian cities exceeding 50,000 people: Osijek, Zadar, Pula and Slavonski Brod; the Zagreb district of Sesvete, which has the status of a standalone settlement but not a city, also has such a large population. A further eleven cities are populated by more than 20,000.
See also
Geography of Europe
References
Works cited
External links |
Demographics of Croatia
The demographic characteristics of the population of Croatia are known through censuses, normally conducted in ten-year intervals and analysed by various statistical bureaus since the 1850s. The Croatian Bureau of Statistics has performed this task since the 1990s. The latest census in Croatia was performed in the autumn of 2021. According to the final results published on 22 September 2022, the permanent population of Croatia at the 2021 census (with 31 August as the reference date) had reached 3.87 million. The population density is 68.7 inhabitants per square kilometre, and the overall life expectancy in Croatia at birth was 78.2 years in 2018. The population rose steadily (with the exception of censuses taken following the two world wars) from 2.1 million in 1857 until 1991, when it peaked at 4.7 million. Since 1991, Croatia's death rate has continuously exceeded its birth rate; the natural growth rate of the population is negative. Croatia is in the fourth (or fifth) stage of the demographic transition. In terms of age structure, the population is dominated by the 15 to 64 year-old segment. The median age of the population is 43.4, and the gender ratio of the total population is 0.93 males per 1 female.
Croatia is inhabited mostly by Croats (91.63%), while minorities include Serbs (3.2%) and 21 other ethnicities (less than 1% each). The demographic history of Croatia is marked by significant migrations, including the arrival of the Croats in the area, the growth of the Hungarian and German-speaking population after the union of Croatia and Hungary and the joining of the Habsburg Empire, migrations set off by the Ottoman conquests, and the growth of the Italian-speaking population in Istria and Dalmatia during Venetian rule there. After the collapse of Austria-Hungary, the Hungarian population declined, while the German-speaking population was forced or compelled to leave after World War II, and a similar fate was suffered by the Italian population. The late 19th century and the 20th century were marked by large-scale economic migrations abroad. The 1940s and the 1950s in Yugoslavia were marked by internal migrations, as well as by urbanisation. More recently, significant migrations came as a result of the Croatian War of Independence, when hundreds of thousands were displaced, while the 2010s brought a new wave of emigration which strengthened after Croatia's accession to the EU in 2013.
Croatian is the official language, but minority languages are officially used in some local government units. Croatian is declared as the native language by 95.60% of the population. A 2009 survey revealed that 78% of Croatians claim knowledge of at least one foreign language—most often English. The main religions of Croatia are Roman Catholicism (86.28%), Eastern Orthodoxy (4.44%) and Islam (1.47%). Literacy in Croatia stands at 98.1%. The proportion of the population aged 15 and over attaining academic degrees grew rapidly since 2001, doubling and reaching 16.7% by 2008. An estimated 4.5% of GDP is spent on education. Primary and secondary education are available in Croatian and in the languages of recognised minorities. Croatia has a universal health care system and in 2010, the nation spent 6.9% of its GDP on healthcare. The net monthly income in September 2011 averaged 5,397 kuna (729 euro). The most significant sources of employment in 2008 were the manufacturing industry, wholesale and retail trade, and construction. In January 2020, the unemployment rate was 8.4%. Croatia's median equivalent household income tops the average Purchasing Power Standard of the ten countries which joined the EU in 2004, while trailing the EU average. The 2011 census recorded a total of 1.5 million private households, which predominantly owned their own housing. The average urbanisation rate in Croatia stands at 56%, with a growing urban population and a shrinking rural population.
Population
With a population of 3,871,833 in 2021, Croatia ranks 128th in the world by population. Its population density is 68.7 inhabitants per square kilometre. The overall life expectancy in Croatia at birth is 78 years.
The total fertility rate of 1.50 children per mother is one of the lowest in the world. Since 1991, Croatia's death rate has nearly continuously exceeded its birth rate. The Croatian Bureau of Statistics forecast that the population may even shrink to 3.1 million by 2051, depending on the actual birth rate and the level of net migration. The population of Croatia rose steadily from 2.1 million in 1857 until 1991, when it peaked at 4.7 million, with the exception of the censuses taken in 1921 and 1948, i.e. following the two world wars. The natural growth rate of the population is negative. Croatia started advancing from the first stage of the demographic transition in the late 18th and early 19th centuries, depending on the region. Croatia is now in the fourth or fifth stage of the demographic transition.
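To illustrate how such long-range figures arise, the sketch below runs a deliberately simplified balancing-equation projection that holds crude birth, death and net migration rates constant. The starting population and the birth and death rates are taken from figures quoted in this article, while the net migration rate is an assumed value; this is an illustration only, not the cohort-component methodology actually used by the Croatian Bureau of Statistics.

```python
# Minimal, illustrative population projection (not the official methodology).
# Assumes constant crude birth, death and net migration rates per 1,000 people.

def project_population(start_pop, years, birth_rate, death_rate, net_migration_rate):
    """Project population forward year by year with constant per-mille rates."""
    pop = start_pop
    for _ in range(years):
        births = pop * birth_rate / 1000
        deaths = pop * death_rate / 1000
        migration = pop * net_migration_rate / 1000
        pop += births - deaths + migration
    return pop

# Starting point: roughly the 3.87 million recorded at the 2021 census.
projected_2051 = project_population(
    start_pop=3_870_000,
    years=30,                 # 2021 -> 2051
    birth_rate=9.3,           # births per 1,000 inhabitants (2014 figure above)
    death_rate=12.0,          # deaths per 1,000 inhabitants (2014 figure above)
    net_migration_rate=-1.0,  # net emigration per 1,000 inhabitants (assumed)
)
print(f"Illustrative 2051 population: {projected_2051:,.0f}")
```

Varying the assumed rates, or replacing the constant-rate loop with age-specific fertility and mortality, is what separates this toy model from the projections actually published.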
An explanation for the population decrease in the 1990s is the Croatian War of Independence. During the war, large sections of the population were displaced and emigration increased. In 1991, in predominantly Serb areas, more than 400,000 Croats and other non-Serbs were either removed from their homes by the Croatian Serb forces or fled the violence. In 1995, during the final days of the war, more than 120,000 and perhaps as many as 200,000 Serbs fled the country before the arrival of Croatian forces during Operation Storm. Within a decade following the end of the war, only 117,000 Serb refugees returned out of the 300,000 displaced during the entire war. According to 2001 Croatian census there were 201,631 Serbs in Croatia, compared to the census from 1991 when the number was 581,663. Most of Croatia's remaining Serbs never lived in areas occupied in the Croatian War of Independence. Serbs have been only partially re-settled in the regions they previously inhabited, while some of the settlements previously inhabited by Serbs were settled by Croat refugees from Bosnia and Herzegovina, mostly from Republika Srpska.
In 2014, there were 39,566 live births in Croatia, comprising 20,374 male and 19,192 female children. Virtually all of these births took place in medical facilities; only 19 occurred elsewhere. Out of the total number, 32,677 children were born in wedlock or within 300 days after the end of the marriage, and the average age of mothers at the birth of their first child was 28.4 years. The general fertility rate, i.e. the number of births per 1,000 women aged 15–49, was 42.9, with the age-specific rate peaking at 101.0 per mille for women aged 25–29. In 2009, 52,414 persons died in Croatia, 48.5% of whom died in medical facilities and 90.0% of whom were receiving medical treatment at the time. Cardiovascular disease and cancer were the primary causes of death in the country, with 26,235 and 13,280 deaths respectively. In the same year, there were 2,986 violent deaths, including 2,121 due to accidents. The latter figure includes 616 deaths in traffic accidents. In 2014, the birth rate was 9.3 per mille, exceeded by the mortality rate of 12.0 per mille. The infant mortality rate was 5.0 per mille in 2014. In terms of age structure, the population of Croatia is dominated by the 15–64 year-old segment (68.1%), while the size of the population younger than 15 and older than 64 is relatively small (15.1% and 16.9% respectively). The median age of the population is 41.4. The sex ratio of the population is 1.06 males per 1 female at birth and up to 14 years of age, and 0.99 males per 1 female between the ages of 15 and 64. At ages over 64, however, the ratio is 0.64 males per 1 female. The ratio for the total population is 0.93 males per 1 female.
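As a quick check on the arithmetic behind these rates, the short sketch below recomputes the crude birth rate and the general fertility rate per mille from absolute counts. The birth count is the one quoted above, while the mid-year population and the number of women aged 15–49 are rough figures introduced only for illustration, the latter chosen so the result lands near the 42.9 cited in the text.

```python
# Recomputing crude vital rates (events per 1,000 of a reference population).

def per_mille(events, population):
    """Crude rate: events per 1,000 people of the reference population."""
    return events / population * 1000

live_births_2014 = 39_566        # figure quoted in the article
mid_year_population = 4_240_000  # assumed mid-2014 population (illustrative)
women_15_49 = 922_000            # assumed count of women aged 15-49 (illustrative)

# Crude birth rate uses the whole population as the denominator.
print(f"Crude birth rate 2014: {per_mille(live_births_2014, mid_year_population):.1f} per mille")

# General fertility rate uses women aged 15-49 as the denominator instead.
print(f"General fertility rate: {per_mille(live_births_2014, women_15_49):.1f} per 1,000 women")
```

The same helper applies to the crude death rate or the infant mortality rate, provided the matching reference population (total mid-year population, or live births in the infant mortality case) is used as the denominator.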
In contrast to the shrinking native population, since the late 1990s there has been a positive net migration into Croatia, reaching a level of more than 7,000 net immigrants in 2006. In accordance with its immigration policy, Croatia is also trying to entice emigrants to return. Croatian citizenship is acquired in a multitude of ways, based on origin, place of birth, naturalization and international treaties. In recent years, the Croatian government has been pressured each year to add 40% to work permit quotas for foreign workers.
There were 8,468 immigrants to Croatia in 2009, more than half of them (57.5%) coming from Bosnia and Herzegovina, a sharp decline from the previous year's 14,541. In the same year, there were 9,940 emigrants from the country, 44.8% of them leaving to Serbia. The number of emigrants represents a substantial increase compared to the figure of 7,488 recorded in 2008. In 2009, the net migration to and from abroad peaked in the Sisak-Moslavina County (−1,093 persons) and the city of Zagreb (+830 persons).
In 2009, a total of 22,382 marriages were performed in Croatia as well as 5,076 divorces. The 2001 census recorded 1.47 million households in the country.
Census data
The first modern population census in the country was conducted in 1857, and 15 more have been performed since then. Since 1961, censuses have been conducted at regular ten-year intervals, with the latest one in 2011. The first institution set up in the country specifically for the purposes of maintaining population statistics was the State Statistical Office, founded in 1875. Since its founding, the office changed its name and structure several times and alternated between being subordinated to other institutions and being independent, until the most recent changes in 1992, when the institution became the Croatian Bureau of Statistics. The 2011 census was performed on 1–28 April 2011, recording the situation as of 31 March 2011. The first census results, containing the number of the population by settlement, were published on 29 June 2011, and the final comprehensive set of data was published in December 2012. The 2011 census and the processing of the data it gathered were expected to cost 171.9 million kuna (23.3 million euro). The 2011 census used a new methodology: the permanent population was defined as the enumerated population who had lived in the census area for at least 12 months prior to the census, or planned to live in the same area for at least 12 months after the census. This method was also retroactively applied to the 2001 census data.
Total Fertility Rate from 1880 to 1899
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World in Data and Gapminder Foundation.
Total Fertility Rate from 1915 to 1940
Vital statistics
Births and deaths before WWI
Births and deaths after WWII
Source: Croatian Bureau of Statistics
Current vital statistics
Structure of the population
Marriages and divorces
Ethnic groups
Croatia is inhabited mostly by Croats (91.63%), while minority groups include Serbs (3.2%), Bosniaks, Hungarians, Italians, Albanians, Slovenes, Germans, Czechs, Roma and others (less than 1% each). The Constitution of the Republic of Croatia explicitly identifies 22 minorities. Those are Serbs, Czechs, Slovaks, Italians, Istro-Romanians ("Vlachs"), Hungarians, Jews, Germans, Austrians, Ukrainians, Romanians, Ruthenians, Macedonians, Bosniaks, Slovenes, Montenegrins, Russians, Bulgarians, Poles, Roma, Turks and Albanians.
1900–1931
1948–2021
Significant migrations
The demographic history of Croatia is characterised by significant migrations, starting with the arrival of the Croats in the area. According to the work De Administrando Imperio written by the 10th-century Byzantine Emperor Constantine VII, the Croats arrived in the area of modern-day Croatia in the early 7th century. However, that claim is disputed, and competing hypotheses date the event between the 6th and the 9th centuries. Following the establishment of a personal union of Croatia and Hungary in 1102, and the joining of the Habsburg Empire in 1527, the Hungarian and German-speaking population of Croatia began gradually increasing in number. The processes of Magyarization and Germanization varied in intensity but persisted into the 20th century. The Ottoman conquests initiated a westward migration of parts of the Croatian population; the Burgenland Croats are direct descendants of some of those settlers. To replace the fleeing Croats, the Habsburgs called on the Orthodox populations of Bosnia and Serbia to provide military service in the Croatian Military Frontier. Serb migration into this region peaked during the Great Serb Migrations of 1690 and 1737–39. Similarly, Venetian Republic rule in Istria and Dalmatia, following the Fifth and Seventh Ottoman–Venetian Wars, ushered in a gradual growth of the Italian-speaking population in those areas. Following the collapse of Austria-Hungary in 1918, the Hungarian population declined, especially in the areas north of the Drava river, where they represented the majority before World War I.
The period between 1890 and World War I was marked by large economic emigration from Croatia to the United States, and particularly to the areas of Pittsburgh, Pennsylvania, Cleveland, Ohio, and Chicago, Illinois. Besides the United States, the main destination of the migrants was South America, especially Argentina, Chile, Bolivia and Peru. It is estimated that 500,000 people left Croatia during this period. After World War I, the main focus of emigration shifted to Canada, where about 15,000 people settled before the onset of World War II. During World War II and in the period immediately following the war, there were further significant demographic changes as the German-speaking population, the Volksdeutsche, were either forced or otherwise compelled to leave—reducing their number from the prewar German population of Yugoslavia of 500,000, living in parts of present-day Croatia and Serbia, to the figure of 62,000 recorded in the 1953 census. A similar fate was suffered by the Italian population in Yugoslavia populating parts of present-day Croatia and Slovenia, as 350,000 left for Italy. The 1940s and the 1950s in Yugoslavia were marked by colonisation of settlements where the displaced Germans used to live by people from the mountainous parts of Bosnia and Herzegovina, Serbia and Montenegro, and migrations to larger cities spurred on by the development of industry. In the 1960s and 1970s, another wave of economic migrants left Croatia. They largely moved to Canada, Australia, New Zealand and Western Europe. During this period, 65,000 people left for Canada, and by the mid-1970s there were 150,000 Croats who moved to Australia. Particularly large European emigrant communities of Croats exist in Germany, Austria and Switzerland, which largely stem from the 1960s and 1970s migrations.
A series of significant migrations came as a result of the 1991–1995 Croatian War of Independence. In 1991, more than 400,000 Croats and other non-Serbs were displaced by the Croatian Serb forces or fled the violence in areas with significant Serb populations. During the final days of the war, in 1995, between 120,000 and 200,000 Serbs fled the country following Operation Storm. Ten years after the war, only a small portion of the 400,000 Serbs displaced during the entire war had returned. Most of the Serbs in Croatia who remained never lived in areas occupied during the Croatian War of Independence. Serbs have been only partially re-settled in the regions they previously inhabited; some of these areas were later settled by Croat refugees from Bosnia and Herzegovina.
Significant emigration has continued since Croatia's accession to the European Union, growing persistently since 2013; those leaving are largely younger and more educated.
Demographic losses in the 20th century wars and pandemics
In addition to demographic losses through significant migrations, the population of Croatia suffered significant losses due to wars and epidemics. In the 20th century alone, there were several such events. The first was World War I, when the loss of the population of Croatia amounted to an estimated 190,000 persons, or about 5.5% of the total population recorded by the 1910 census. The 1918 flu pandemic started to take its toll in Croatia in July 1918, with peaks of the disease occurring in October and November. Available data is scarce, but it is estimated that the pandemic caused at least 15,000–20,000 deaths. Around 295,000 people were killed on the territory of present-day Croatia during World War II, according to the demographer Bogoljub Kočović. The demise of the armed forces of the Independent State of Croatia and of the civilians accompanying the troops at the end of World War II was followed by the Yugoslav death march of Nazi collaborators. A substantial number of people were executed, but the exact number is disputed. The claims range from 12,000–15,000 to as many as 80,000 killed in May 1945. Finally, approximately 20,000 were killed or went missing during the 1991–1995 Croatian War of Independence. The figure pertains only to those persons who would have been recorded by the 1991 census as living in Croatia.
Other demographic statistics
Demographic statistics according to the World Population Review.
One birth every 14 minutes
One death every 10 minutes
Net loss of one person every 22 minutes
One net migrant every 72 minutes
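The "one event every N minutes" figures above can be reproduced approximately from crude rates and the total population; a minimal sketch of that conversion, using the 2018 estimates quoted later in this article (World Population Review works from its own underlying figures, so the results only approximate the stated intervals):

```python
# Sketch: turn crude annual demographic rates into "one event every N minutes".
# Population and rates are the 2018 estimates quoted later in this article;
# World Population Review likely uses different figures, so results are approximate.

MINUTES_PER_YEAR = 365 * 24 * 60

def minutes_per_event(rate_per_1000: float, population: int) -> float:
    """Average minutes between events for a crude annual rate per 1,000 people."""
    events_per_year = abs(rate_per_1000) / 1000 * population
    return MINUTES_PER_YEAR / events_per_year

population = 4_270_480            # July 2018 estimate
birth_rate = 8.8                  # births per 1,000 population
death_rate = 12.4                 # deaths per 1,000 population
net_migration_rate = -1.4         # net migrants per 1,000 population
net_change_rate = birth_rate - death_rate + net_migration_rate  # overall change

print(f"one birth every ~{minutes_per_event(birth_rate, population):.0f} minutes")
print(f"one death every ~{minutes_per_event(death_rate, population):.0f} minutes")
print(f"net loss of one person every ~{minutes_per_event(net_change_rate, population):.0f} minutes")
print(f"one net migrant every ~{minutes_per_event(net_migration_rate, population):.0f} minutes")
```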
The following demographic statistics are from the CIA World Factbook.
Population
4,270,480 (July 2018 est.)
Age structure
0-14 years: 14.21% (male 312,805 /female 293,931)
15-24 years: 11.09% (male 242,605 /female 230,853)
25-54 years: 40.15% (male 858,025 /female 856,455)
55-64 years: 14.65% (male 304,054 /female 321,543)
65 years and over: 19.91% (male 342,025 /female 508,184) (2018 est.)
Median age
total: 43.3 years. Country comparison to the world: 20th
male: 41.4 years
female: 45.3 years (2018 est.)
Birth rate
8.8 births/1,000 population (2018 est.) Country comparison to the world: 208th
Death rate
12.4 deaths/1,000 population (2018 est.) Country comparison to the world: 16th
Total fertility rate
1.41 children born/woman (2018 est.) Country comparison to the world: 212th
Net migration rate
-1.4 migrant(s)/1,000 population (2018 est.) Country comparison to the world: 150th
Population growth rate
-0.51% (2018 est.) Country comparison to the world: 221st
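As a rough consistency check (a sketch only; the published estimate also reflects rounding and the agency's own component figures), the growth rate follows from the crude rates above:

```latex
r \;\approx\; \frac{\mathrm{CBR} - \mathrm{CDR} + \mathrm{NMR}}{10}
  \;=\; \frac{8.8 - 12.4 - 1.4}{10}\,\% \;=\; -0.5\,\%
```

which is close to the stated -0.51%.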
Mother's mean age at first birth
28 years (2014 est.)
Life expectancy at birth
total population: 76.3 years (2018 est.) Country comparison to the world: 87th
male: 73.2 years (2018 est.)
female: 79.6 years (2018 est.)
Ethnic groups
Croat 90.4%, Serb 4.4%, other 4.4% (including Bosniak, Hungarian, Slovene, Czech, and Romani), unspecified 0.8% (2011 est.)
Languages
Croatian (official) 95.6%, Serbian 1.2%, other 3% (including Hungarian, Czech, Slovak, and Albanian), unspecified 0.2% (2011 est.)
Religions
Roman Catholic 86.3%, Orthodox 4.4%, Muslim 1.5%, other 1.5%, unspecified 2.5%, not religious or atheist 3.8% (2011 est.)
Nationality
noun: Croat(s), Croatian(s)
adjective: Croatian
note: the French designation of "Croate" to Croatian mercenaries in the 17th century eventually became "Cravate" and later came to be applied to the soldiers' scarves – the cravat; Croatia celebrates Cravat Day every 18 October
Dependency ratios
total dependency ratio: 50.9 (2015 est.)
youth dependency ratio: 22.4 (2015 est.)
elderly dependency ratio: 28.5 (2015 est.)
potential support ratio: 3.5 (2015 est.)
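Assuming the standard definitions (youth = population aged 0–14 and elderly = population aged 65 and over, each per 100 persons of working age 15–64), the dependency figures above fit together as follows:

```latex
\text{total} = \text{youth} + \text{elderly} = 22.4 + 28.5 = 50.9,
\qquad
\text{potential support ratio} = \frac{100}{\text{elderly}} = \frac{100}{28.5} \approx 3.5
```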
Urbanization
urban population: 56.9% of total population (2018)
rate of urbanization: -0.08% annual rate of change (2015–20 est.)
Literacy
definition: age 15 and over can read and write (2015 est.)
total population: 99.3%
male: 99.7%
female: 98.9% (2015 est.)
School life expectancy (primary to tertiary education)
total: 15 years
male: 14 years
female: 16 years (2016)
Unemployment, youth ages 15–24
total: 31.3% (2016 est.) Country comparison to the world: 26th
male: 31.2% (2016 est.)
female: 31.3% (2016 est.)
Languages
Croatian is the official language of Croatia, and one of 24 official languages of the European Union since 2013. Minority languages are in official use in local government units where more than a third of the population consists of national minorities or where local legislation mandates their use. These languages are Czech, Hungarian, Italian, Ruthenian, Serbian and Slovak. Besides these, the following languages are also recognised: Albanian, Bosnian, Bulgarian, German, Hebrew, Macedonian, Montenegrin, Polish, Romanian, Romani, Russian, Rusyn, Slovenian, Turkish and Ukrainian. According to the 2021 Census, 95.25% of citizens of Croatia declared Croatian as their native language, 1.16% declared Serbian as their native language, while no other language is represented in Croatia by more than 0.5% of native speakers among the population of Croatia.
In the region of Dalmatia, each city historically spoke a variant of the Dalmatian language. It developed from Latin like all Romance languages, but became heavily influenced by Venetian and Croatian. The language fell out of use in the region by the 16th century and went extinct when the last speaker died in 1898.
Croatian replaced Latin as the official language of the Croatian government in 1847. The Croatian lect is generally viewed as one of the four standard varieties of the Shtokavian dialect of Serbo-Croatian, a South Slavic language. Croatian is written using the Latin alphabet and there are three major dialects spoken on the territory of Croatia, with the Shtokavian idiom used as the literary standard. The Chakavian and Kajkavian dialects are distinguished by their lexicon, phonology, and syntax.
From 1961 to 1991, the official language was formally designated as Serbo-Croatian or Croato-Serbian. Even during socialist rule, Croats often referred to their language as Croato-Serbian (instead of Serbo-Croatian) or as Croatian. The Croatian and Serbian variants of the language were not officially recognised as separate at the time, but were referred to as the "West" and "East" versions and preferred different alphabets: Gaj's Latin alphabet and Karadžić's Cyrillic alphabet. Croats are protective of their language from foreign influences, as it was subject to constant change and threats imposed by previous rulers (i.e., Austrian German, Hungarian, Italian and Turkish words were altered to "Slavic"-looking or -sounding ones).
A 2009 survey revealed that 78% of Croats claim knowledge of at least one foreign language. According to a survey ordered by the European commission in 2005, 49% of Croats speak English as their second language, 34% speak German, and 14% speak Italian. French and Russian are spoken by 4% each, and 2% of Croats speak Spanish. A substantial proportion of Slovenes (59%) have a certain level of knowledge of Croatian.
Religions
The main religions of Croatia are Roman Catholicism 78.97%, no religion 6.39%, other Christianity 4.84%, undeclared 3.86%, Eastern Orthodoxy 3.32%, Islam 1.32%, Protestantism 0.26%, others 1.87%. In the Eurostat Eurobarometer Poll of 2005, 67% of the population of Croatia responded that "they believe there is a God" and 7% said they do not believe "there is any sort of spirit, God, or life force", while 25% expressed a belief in "some sort of spirit or life force". In a 2009 Gallup poll, 70% answered affirmatively when asked "Is religion an important part of your daily life?" Significantly, a 2008 Gallup survey of the Balkans indicated church and religious organisations as the most trusted institutions in the country. The survey revealed that 62% of the respondents assigned "a lot" or "some" trust to those institutions, ranking them ahead of all types of governmental, international or non-governmental institutions.
Public schools allow religious education, in cooperation with religious communities that have agreements with the government, but attendance is not mandatory. The classes are organized widely in public elementary and secondary schools. In 2009, 92% of elementary school pupils and 87% of secondary school students attended the religious education classes. Public holidays in Croatia also include the religious festivals of Epiphany, Easter Monday, Feast of Corpus Christi, Assumption Day, All Saints' Day, Christmas, and St. Stephen's or Boxing Day. The religious festival public holidays are based on the Catholic liturgical year, but citizens of the Republic of Croatia who celebrate different religious holidays have the right not to work on those dates. This includes Christians who celebrate Christmas on 7 January per the Julian calendar, Muslims on the days of Eid al-Fitr and Eid al-Adha, and Jews on the days of Rosh Hashanah and Yom Kippur. Marriages performed by the religious communities having agreements with the state are officially recognized, eliminating the need to register the marriages in a registrar office.
The legal position of religious communities is defined by special legislation, specifically regarding government funding, tax benefits, and religious education in schools. Other matters are left to each religious community to negotiate separately with the government. Registration of the communities is not mandatory, but registered communities become legal persons and enjoy tax and other benefits. The law stipulates that to be eligible for registration, a religious group must have at least 500 believers and be registered as a civil association for 5 years. Religious groups based abroad must submit written permission for registration from their country of origin.
Education
Literacy in Croatia is 98.1 percent. The 2001 census reported that 15.7% of the population over the age of 14 had an incomplete elementary education, and 21.9% had only an elementary school education. 42.8% of the population over the age of 14 had a vocational education and 4.9% had completed gymnasium. 4.2% of the same population had received an undergraduate degree, 7.5% an academic degree, and 0.5% a postgraduate or doctoral degree. Croatia has recorded substantial growth in the share of the population attaining academic degrees; by 2008, this segment was estimated to encompass 16.7% of Croatians aged 15 and over. A worldwide study about the quality of living in different countries published by Newsweek in August 2010 ranked the Croatian education system 22nd, a position shared with Austria. In 2004, an estimated 4.5% of GDP was spent on education, while schooling expectancy averaged 14 years. Primary education in Croatia starts at the age of six or seven and consists of eight grades. In 2007, a law was passed extending free, noncompulsory education to the age of 18. Compulsory education consists of eight grades of elementary school. Secondary education is provided by gymnasiums and vocational schools. As of 2010, there are 2,131 elementary schools and 713 schools providing various forms of secondary education. Primary and secondary education are also available in languages of recognised minorities in Croatia, with classes held in Czech, Hungarian, Italian, Serbian and German.
There are 84 elementary level and 47 secondary level music and art schools, as well as 92 schools for disabled children and youth and 74 schools for adults. Nationwide leaving exams were introduced for secondary education students in the 2009–2010 school year. The exam comprises three compulsory subjects (Croatian language, mathematics, and a foreign language) and optional subjects, and is a prerequisite for a university education.
Croatia has eight public universities, the University of Zagreb, University of Split, University of Rijeka, University of Osijek, University of Zadar, University of Dubrovnik, University of Pula and Dubrovnik International University.
The University of Zadar, the first university in Croatia, was founded in 1396 and remained active until 1807, when other institutions of higher education took over. It was reopened in 2002. The University of Zagreb, founded in 1669, is the oldest continuously operating university in Southeast Europe. There are also 11 polytechnics and 23 higher education institutions, of which 19 are private. In total, there are 132 institutions of higher education in Croatia, attended by more than 145 thousand students.
There are 205 companies, government or education system institutions and non-profit organizations in Croatia pursuing scientific research and the development of technology. Combined, they spent more than 3 billion kuna (400 million euro) and employed 10,191 full-time research staff in 2008. Among the scientific institutes operating in Croatia, the largest is the Ruđer Bošković Institute in Zagreb. The Croatian Academy of Sciences and Arts in Zagreb is a learned society promoting language, culture, arts and science since its inception in 1866. Scientists from Croatia include inventors and Nobel Prize winners.
Health
Croatia has a universal health care system, the roots of which can be traced back to the Hungarian-Croatian Parliament Act of 1891, providing a form of mandatory insurance for all factory workers and craftsmen. The population is covered by a basic health insurance plan provided by statute and optional insurance. In 2014, the annual compulsory healthcare related expenditures reached 21.8 billion kuna (2.9 billion euro). Healthcare expenditures comprise only 0.6% of private health insurance and public spending. In 2010, Croatia spent 6.9% of its GDP on healthcare, representing a decline from approximately 8% estimated in 2008, when 84% of healthcare spending came from public sources. According to the World Health Organization (WHO), Croatia ranks around the 50th in the world in terms of life expectancy.
There are hundreds of healthcare institutions in Croatia, including 79 hospitals and clinics with 23,967 beds. The hospitals and clinics care for more than 700 thousand patients per year and employ 5,205 medical doctors, including 3,929 specialists. There are 6,379 private practice offices, and a total of 41,271 health workers in the country. There are 63 emergency medical service units, responding to more than a million calls. The principal cause of death in 2008 was cardiovascular disease at 43.5% for men and 57.2% for women, followed by tumours, at 29.4% for men and 21.4% for women. Other significant causes of death are injuries, poisonings and other external causes (7.7% men/3.9% women), digestive system diseases (5.7% men/3.6% women), respiratory system diseases (5.1% men/3.5% women) and endocrine, nutritional and metabolic diseases (2.1% men/3.0% women). There is no other cause of disease affecting more than 3% of the population. In 2014 only 22 Croatians had been infected with HIV/AIDS and 4 had died from the disease. In 2008 it was estimated by the WHO that 27.4% of Croatians over age of 15 were smokers. According to 2003 WHO data, 22% of the Croatian adult population is obese.
Economic indicators
Personal income, jobs and unemployment
Net monthly income in September 2011 averaged 5,397 kuna (approx. 729 euro), dropping 2.1% relative to the previous month. In the same month, gross monthly income averaged 7,740 kuna (approx. 1,046 euro); it includes the net salary along with income tax, retirement pension insurance, healthcare insurance, occupational safety and health insurance and employment promotion tax. The average net monthly income grew compared to 5,311 kuna (approx. 717 euro) in 2009 or 3,326 kuna (approx. 449 euro) in 2000. The highest net salaries were paid in the financial services sector, averaging 10,041 kuna (approx. 1,356 euro) in April 2011, while the lowest, paid in the same month, were in the manufacturing and leather processing industries, averaging 2,811 kuna (approx. 380 euro). Since January 2016, the minimum wage in Croatia has been 3,120 kuna before tax (approx. 400 euro).
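The euro figures quoted alongside the kuna amounts imply a conversion rate of roughly 7.4 kuna per euro (an observation from the quoted pairs rather than a rate stated by the source; the rate later fixed at euro adoption was 7.53450 kuna per euro). For example:

```latex
5{,}397~\text{kn} \div 7.4~\text{kn/EUR} \approx 729~\text{EUR},
\qquad
7{,}740~\text{kn} \div 7.4~\text{kn/EUR} \approx 1{,}046~\text{EUR}
```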
The number of employed persons grew steadily between 2000 and 2008, when it peaked, followed by a 4% decline in 2009. That year, there were 1.499 million employed persons, 45% of them women. The total number of employed persons includes 252,000 employed in crafts and freelance professions and 35,000 employed in agriculture. The most significant sources of employment in 2008 were the manufacturing industry and wholesale and retail trade (including motor vehicle repair services), employing 278,640 and 243,640 persons respectively. Another significant employment sector was the construction industry, comprising 143,336 jobs that year. In the same year, more than 100,000 were employed in public administration, defence and compulsory social insurance, as well as in education. Since 2009, negative trends have persisted in Croatia, with industrial employment declining by a further 3.5%. The combined number of unemployed and retired persons exceeded the number of employed in August 2010, when employment fell to 1.474 million. In 2009, the labour force consisted of 1.765 million persons out of a working-age population (aged 15 and over) of 3.7 million. In October 2011, the unemployment rate stood at 17.4%. 7.2% of employed persons hold a second job.
In comparison with the member states of the European Union (EU), Croatia's median equivalent household income in terms of the Purchasing Power Standard (PPS) stands at 470, topping the average PPS of the ten countries which joined the EU in 2004 (EU10), as well as Romania and Bulgaria, while significantly lagging behind the EU average. Within Croatia, the highest PPS is recorded in Istria County (769), the City of Zagreb (640) and Primorje-Gorski Kotar County (576). The lowest PPS is observed in Bjelovar-Bilogora County and Virovitica-Podravina County (267).
Urbanisation and housing
The 2011 census recorded a total of 1,534,148 private households in Croatia as well as 1,487 other residential communities such as retirement homes, convents etc. At the same time, there were 1,923,522 permanent housing units—houses and apartments. The 2001 census recorded 1.66 million permanent housing units, including 196 thousand intermittently occupied and 42 thousand abandoned ones. The average size of a permanently used housing unit is . The intermittently used housing units include 182 thousand vacation houses and 8 thousand houses used during agricultural works. The same census also recorded 25 thousand housing units used for business purposes only. As of 2007, 71% of households owned their own housing and had no mortgage or other housing-related loans to repay, while a further 9% were repaying loans for their housing. The households vary by type and include single households (13%), couples (15%), single parent households (4%), couples with children (27%) and extended family households (20%). There are approximately 500 homeless persons in Croatia, largely living in Zagreb.
The average urbanisation rate in Croatia stands at 56%, with the highest rates recorded in the City of Zagreb, where it reaches 94.5%, and in the Zagreb metropolitan area (the City of Zagreb and Zagreb County), where it stands at 76.4%. Urbanisation proceeded very rapidly in the second half of the 20th century: the 1953 census recorded 57% of the population as active in agriculture, while the 1991 census noted only 9.1%, reflecting the growth of the urban population and the decline of the rural one.
See also
Croats
Croatian diaspora
Croatian Bureau of Statistics
Demographics of the Kingdom of Yugoslavia
Demographics of the Socialist Federal Republic of Yugoslavia
Notes
References
Sources
External links
Human Rights Watch Report "Broken Promises: Impediments to Refugee Return to Croatia"
United Nations Statistics Division Millennium Indicators for Croatia
Population of Croatia 1931–2001
Society of Croatia
Demographics of Yugoslavia
Politics of Croatia
The politics of Croatia are defined by a parliamentary, representative democratic republic framework, where the Prime Minister of Croatia is the head of government in a multi-party system. Executive power is exercised by the Government and the President of Croatia. Legislative power is vested in the Croatian Parliament (Sabor). The Judiciary is independent of the executive and the legislature. The parliament adopted the current Constitution of Croatia on 22 December 1990 and decided to declare independence from Yugoslavia on 25 June 1991. The Constitutional Decision on the Sovereignty and Independence of the Republic of Croatia came into effect on 8 October 1991. The constitution has since been amended several times. The first modern parties in the country developed in the middle of the 19th century, and their agenda and appeal changed, reflecting major social changes, such as the breakup of Austria-Hungary, the Kingdom of Serbs, Croats and Slovenes, dictatorship and social upheavals in the kingdom, World War II, the establishment of Communist rule and the breakup of the SFR Yugoslavia.
The President of the Republic is the head of state and the commander in chief of the Croatian Armed Forces and is directly elected to serve a five-year term. The government, the main executive power of Croatia, is headed by the prime minister, who has four deputy prime ministers who also serve as government ministers. Twenty ministers are in charge of particular activities. The executive branch is responsible for proposing legislation and a budget, executing the laws, and guiding the foreign and internal policies. The parliament is a unicameral legislative body. The number of Sabor representatives (MPs) ranges from 100 to 160; they are elected by popular vote to serve four-year terms. The powers of the legislature include enactment and amendment of the constitution and laws; adoption of the government budget, declarations of war and peace, defining national boundaries, calling referendums and elections, appointments and relief of officers, supervising the Government of Croatia and other holders of public powers responsible to the Sabor, and granting of amnesties. The Croatian constitution and legislation provide for regular presidential and parliamentary elections, and the election of county prefects (county presidents) and assemblies, and city and municipal mayors and councils.
Croatia has a three-tiered, independent judicial system governed by the Constitution of Croatia and national legislation enacted by the Sabor. The Supreme Court is the highest court of appeal in Croatia, while municipal and county courts are courts of general jurisdiction. Specialised courts in Croatia are: commercial courts and the Superior Commercial Court, misdemeanour courts and the Superior Misdemeanour Court, administrative courts and the Superior Administrative Court. The Croatian Constitutional Court deals primarily with constitutional law. Its main authority is to rule on whether laws that are challenged are in fact unconstitutional, i.e., whether they conflict with constitutionally established rights and freedoms. The State Attorney's Office represents the state in legal proceedings.
Legal framework
Croatia is a unitary democratic parliamentary republic. Following the collapse of the ruling Communist League, Croatia adopted a new constitution in 1990 – which replaced the 1974 constitution adopted by the Socialist Republic of Croatia – and organised its first multi-party elections. While the 1990 constitution remains in force, it has been amended four times since its adoption—in 1997, 2000, 2001 and 2010. Croatia declared independence from Yugoslavia on 8 October 1991, which led to the breakup of Yugoslavia. Croatia's status as a country was internationally recognised by the United Nations in 1992. Under its 1990 constitution, Croatia operated a semi-presidential system until 2000 when it switched to a parliamentary system. Government powers in Croatia are divided into legislative, executive and judiciary powers. The legal system of Croatia is civil law and, along with the institutional framework, is strongly influenced by the legal heritage of Austria-Hungary. By the time EU accession negotiations were completed on 30 June 2010, Croatian legislation was fully harmonised with the Community acquis. Croatia became a member state of the European Union on 1 July 2013.
Executive
The President of the Republic is the head of state. The president is directly elected and serves a five-year term. The president is the commander in chief of the armed forces, has the procedural duty of appointing the prime minister with the consent of the Sabor (Parliament) through a majority vote (majority of all MPs), and has some influence on foreign policy. The most recent presidential election was held in January 2020 and was won by Zoran Milanović, who took the oath of office in February 2020. The constitution limits holders of the presidential office to a maximum of two terms and prevents the president from being a member of any political party. Consequently, the president-elect withdraws from party membership before inauguration.
The government, the main executive power of Croatia, is headed by the prime minister, who has four deputies who also serve as government ministers. There are 16 other ministers who are appointed by the prime minister with the consent of the Sabor (majority of all MPs); these are in charge of particular sectors of activity. As of 19 October 2016, the Deputy Prime Ministers are Martina Dalić, Davor Ivo Stier, Ivan Kovačić, and Damir Krstičević. Government ministers come from the Croatian Democratic Union (HDZ) and the Bridge of Independent Lists (MOST), along with five independent ministers. The executive branch is responsible for proposing legislation and a budget, executing the laws, and guiding the country's foreign and domestic policies. The government's official residence is at Banski dvori. As of 19 October 2016, the prime minister is Andrej Plenković.
President: Zoran Milanović (Social Democratic Party of Croatia), in office since February 2020
Prime Minister: Andrej Plenković (Croatian Democratic Union), in office since 19 October 2016
Legislature
The Parliament of Croatia is a unicameral legislative body. A second chamber, the Chamber of Counties, was set up in 1993 pursuant to the 1990 Constitution. The Chamber of Counties was originally composed of three deputies from each of the twenty counties and the city of Zagreb. However, as it had no practical power over the Chamber of Representatives, it was abolished in 2001 and its powers were transferred to the county governments. The number of Sabor representatives can vary from 100 to 160; they are all elected by popular vote and serve four-year terms. 140 members are elected in multi-seat constituencies, up to six members are chosen by proportional representation to represent Croatians living abroad and five members represent ethnic and national communities or minorities. The two largest political parties in Croatia are the Croatian Democratic Union (HDZ) and the Social Democratic Party of Croatia (SDP). The last parliamentary election was held on 11 September 2016 in Croatia and on 10 and 11 September 2016 abroad.
The Sabor meets in public sessions in two periods; the first from 15 January to 30 June, and the second from 15 September to 15 December. Extra sessions can be called by the President of the Republic, by the president of the parliament or by the government. The powers of the legislature include enactment and amendment of the constitution, enactment of laws, adoption of the state budget, declarations of war and peace, alteration of the country's boundaries, calling and conducting referendums and elections, appointments and relief of office, supervising the work of the Government of Croatia and other holders of public powers responsible to the Sabor, and granting amnesty. Decisions are made based on a majority vote if more than half of the Chamber is present, except in cases of constitutional issues.
Elections
The Croatian constitution and legislation provides for regular elections for the office of the President of the Republic, parliamentary, county prefects, county assemblies, city and municipal mayors and city and municipal councils. The President of the Republic is elected to a five-year term by a direct vote of all citizens of Croatia. A majority vote is required to win. A runoff election round is held in cases where no candidate secures the majority in the first round of voting. The presidential elections are regulated by the constitution and dedicated legislation; the latter defines technical details, appeals and similar issues.
140 members of parliament are elected to a four-year term in ten multi-seat constituencies, which are defined on the basis of the existing county borders, with amendments to achieve a uniform number of eligible voters in each constituency to within 5%. Citizens of Croatia living abroad are counted in an eleventh constituency; however, its number of seats was not fixed for the last parliamentary election. It was instead calculated based on numbers of votes cast in the ten constituencies in Croatia and the votes cast in the eleventh constituency. In the 2007 parliamentary election the eleventh constituency elected five MPs. Constitutional changes first applied in the 2011 parliamentary election have abolished this scheme and permanently assigned three MPs to the eleventh constituency. Additionally, eight members of parliament are elected by voters belonging to twenty-two recognised minorities in Croatia: the Serb minority elects three MPs, Hungarians and Italians elect one MP each, Czech and Slovak minorities elect one MP jointly, while all other minorities elect two more MPs to the parliament. The Standard D'Hondt formula is applied to the vote, with a 5% election threshold. The last parliamentary election, held in 2016, elected 151 MPs.
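As a sketch of the allocation method named above (illustrative only, with hypothetical party names and vote counts rather than real election data), the D'Hondt method repeatedly awards the next seat to the party with the largest quotient votes / (seats already won + 1), after excluding parties below the 5% threshold:

```python
# Minimal D'Hondt seat-allocation sketch with a 5% threshold.
# Party names and vote counts below are hypothetical, for illustration only.

def dhondt(votes: dict[str, int], seats: int, threshold: float = 0.05) -> dict[str, int]:
    total = sum(votes.values())
    # Parties below the threshold are excluded before seats are distributed.
    eligible = {party: v for party, v in votes.items() if v / total >= threshold}
    allocation = {party: 0 for party in eligible}
    for _ in range(seats):
        # The next seat goes to the party with the highest current quotient.
        winner = max(eligible, key=lambda p: eligible[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

if __name__ == "__main__":
    sample_votes = {"A": 120_000, "B": 80_000, "C": 30_000, "D": 9_000}
    # 14 seats, as in each of Croatia's ten domestic constituencies (140 / 10).
    print(dhondt(sample_votes, seats=14))
```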
The county prefects and city and municipal mayors are elected to four-year terms by majority of votes cast within applicable local government units. A runoff election is held if no candidate achieves a majority in the first round of voting. Members of county, city, and municipal councils are elected to four-year terms through proportional representation; the entire local government unit forms a single constituency. The number of council members is defined by the councils themselves based on applicable legislation. Electoral committees are then tasked with determining whether the national minorities are represented in the council as required by the constitution. If the minorities are not represented, further members, who belong to the minorities and who have not been elected through the proportional representation system, are selected from electoral candidate lists and added to the council.
Latest presidential election
Latest parliamentary election
Judiciary
Croatia has a three-tiered, independent judicial system governed by the constitution and national legislation enacted by the Sabor. The Supreme Court is the highest court of appeal in Croatia; its hearings are open and judgments are made publicly, except in cases where the privacy of the accused is to be protected. Judges are appointed by the National Judicial Council and judicial office is permanent until seventy years of age. The president of the Supreme Court is elected for a four-year term by the Croatian Parliament at the proposal of the President of the Republic. As of 2017, the president of the Supreme Court is Đuro Sessa. The Supreme Court has civil and criminal departments. The lower two levels of the three-tiered judiciary consist of county courts and municipal courts. There are fifteen county courts and sixty-seven municipal courts in the country.
There are other specialised courts in Croatia: commercial courts and the Superior Commercial Court, misdemeanour courts that try trivial offences such as traffic violations, the Superior Misdemeanour Court, the Administrative Court and the Croatian Constitutional Court. The Constitutional Court rules on matters regarding compliance of legislation with the constitution, repeals unconstitutional legislation, reports any breaches of provisions of the constitution to the government and the parliament, declares the speaker of the parliament acting president upon petition from the government in the event the country's president becomes incapacitated, issues consent for commencement of criminal procedures against or arrest of the president, and hears appeals against decisions of the National Judicial Council. The court consists of thirteen judges elected by members of the parliament for an eight-year term. The president of the Constitutional Court is elected by the court judges for a four-year term. As of June 2012, the president of the Constitutional Court is Jasna Omejec. The National Judicial Council consists of eleven members, specifically seven judges, two university professors of law and two parliament members, nominated and elected by the Parliament for four-year terms, and may serve no more than two terms. It appoints all judges and court presidents, except in the case of the Supreme Court. As of January 2015, the president of the National Judicial Council is Ranko Marijan, who is also a Supreme Court judge.
The State Attorney's Office represents the state in legal procedures. As of April 2018, Dražen Jelenić is the General State Attorney, and there are twenty-three deputies in the central office and lower-ranking State Attorneys at fifteen county and thirty-three municipal State Attorney's Offices. The General State Attorney is appointed by the parliament. A special State Attorney's Office dedicated to combatting corruption and organised crime, USKOK, was set up in late 2001.
Local government
Croatia was first subdivided into counties in the Middle Ages. The divisions changed over time to reflect losses of territory to Ottoman conquest and the subsequent recapture of the same territory, and changes to the political status of Dalmatia, Dubrovnik and Istria. The traditional division of the country into counties was abolished in the 1920s, when the Kingdom of Serbs, Croats and Slovenes and the subsequent Kingdom of Yugoslavia introduced oblasts and banovinas respectively. After 1945 under Communist rule, Croatia, as a constituent part of Yugoslavia, abolished these earlier divisions and introduced municipalities, subdividing Croatia into approximately one hundred municipalities. Counties, significantly altered in terms of territory relative to the pre-1920s subdivisions, were reintroduced in 1992 legislation. In 1918, the Transleithanian part of Croatia was divided into eight counties with their seats in Bjelovar, Gospić, Ogulin, Požega, Vukovar, Varaždin, Osijek and Zagreb; the 1992 legislation established fifteen counties in the same territory. Since the counties were re-established in 1992, Croatia has been divided into twenty counties and the capital city of Zagreb, the latter having the authority and legal status of a county and a city at the same time. In some instances, the boundaries of the counties have been changed, with the latest revision taking place in 2006. The counties subdivide into 128 cities and 428 municipalities.
The county prefects, city and municipal mayors are elected to four-year terms by a majority of votes cast within applicable local government units. If no candidate achieves a majority in the first round, a runoff election is held. Members of county, city and municipal councils are elected to four-year terms, through proportional representation with the entire local government unit as a single constituency.
The number of members of the councils is defined by the councils themselves, based on applicable legislation. Electoral committees are then tasked with determining whether the national ethnic minorities are represented on the council as required by the constitution. Further members who belong to the minorities may be added to the council if no candidate of that minority has been elected through the proportional representation system. Election silence, during which campaigning is forbidden (as in all other types of elections in Croatia), is enforced from the day before the election until 19:00 on election day, when the polling stations close and exit polls may be announced. Eight nationwide local elections have been held in Croatia since 1990, the most recent being the 2017 local elections to elect county prefects and councils, and city and municipal councils and mayors. In 2017, HDZ-led coalitions won a majority or plurality in fifteen county councils and thirteen county prefect elections. SDP-led coalitions won a majority or plurality in five county councils, including the City of Zagreb council, and the remaining county council election was won by an IDS-SDP coalition. The SDP won two county prefect elections and the City of Zagreb mayoral election, while the HSS and the HNS each won a single county prefect election.
History
Within Austria-Hungary
Events of 1848 in Europe and the Austrian Empire brought dramatic changes to Croatian society and politics, provoking the Croatian national revival that strongly influenced and significantly shaped political and social events in Croatia. At the time, the Sabor and Ban Josip Jelačić advocated the severance of ties with the Kingdom of Hungary, emphasising links to other South Slavic lands within the empire. Several prominent Croatian political figures emerged, such as Ante Starčević, Eugen Kvaternik, Franjo Rački and Josip Juraj Strossmayer. A period of neo-absolutism was followed by the Austro-Hungarian Compromise of 1867 and the Croatian–Hungarian Settlement, which granted limited independence to Croatia. This was compounded by Croatian claims of uninterrupted statehood since the early Middle Ages as a basis for a modern state. Two political parties that evolved in the 1860s and contributed significantly to the sentiment were the Party of Rights, led by Starčević and Kvaternik, and the People's Party, led by Janko Drašković, Ivan Kukuljević Sakcinski, Josip Juraj Strossmayer and Ivan Mažuranić. They were opposed by the National Constitutional Party, which was in power for most of the period between the 1860s and 1918, and advocated closer ties between Croatia and Hungary.
Other significant parties formed in the era were the Serb People's Independent Party, which later formed the Croat-Serb Coalition with the Party of Rights and other Croat and Serb parties. The Coalition ruled Croatia between 1903 and 1918. The leaders of the Coalition were Frano Supilo and Svetozar Pribićević. The Croatian Peasant Party (HSS), established in 1904 and led by Stjepan Radić, advocated Croatian autonomy but achieved only moderate gains by 1918. In Dalmatia, the two major parties were the People's Party – a branch of the People's Party active in Croatia-Slavonia – and the Autonomist Party, advocating maintaining autonomy of Dalmatia, opposite to the People's Party demands for unification of Croatia-Slavonia and Dalmatia. The Autonomist Party, most notably led by Antonio Bajamonti, was also linked to Italian irredentism. By 1900, the Party of Rights had made considerable gains in Dalmatia. The Autonomists won the first three elections, but all elections since 1870 were won by the People's Party. In the period 1861–1918 there were seventeen elections in the Kingdom of Croatia-Slavonia and ten in the Kingdom of Dalmatia.
First and Second Yugoslavia
After the establishment of the Kingdom of Serbs, Croats and Slovenes, the HSS established itself as the most popular Croatian political party and was very popular despite efforts to ban it. The 1921 constitution defined the kingdom as a unitary state and abolished the historical administrative divisions, which effectively ended Croatian autonomy; the constitution was opposed by HSS. The political situation deteriorated further as Stjepan Radić of the HSS was assassinated in the Yugoslav Parliament in 1928, leading to the dictatorship of King Alexander in January 1929. The HSS, now led by Vladko Maček, continued to advocate the federalisation of Yugoslavia, resulting in the Cvetković–Maček Agreement of August 1939 and the autonomous Banovina of Croatia. The Yugoslav government retained control of defence, internal security, foreign affairs, trade, and transport while other matters were left to the Croatian Sabor and a crown-appointed Ban. This arrangement was soon made obsolete with the beginning of World War II, when the Independent State of Croatia, which banned all political opposition, was established. Since then, the HSS continues to operate abroad.
In the 1945 election, the Communists were unopposed because the other parties abstained. Once in power, the Communists introduced a single-party political system, in which the Communist Party of Yugoslavia was the ruling party and the Communist Party of Croatia was its branch. In 1971, the Croatian national movement, which sought greater civil rights and the decentralisation of the Yugoslav economy, culminated in the Croatian Spring, which was suppressed by the Yugoslav leadership. In January 1990, the Communist Party fragmented along national lines; the Croatian faction demanded a looser federation.
Modern Croatia
In 1989, the government of the Socialist Republic of Croatia decided to tolerate political parties in response to growing demands to allow political activities outside the Communist party. The first political party founded in Croatia since the beginning of the Communist rule was the Croatian Social Liberal Party (HSLS), established on 20 May 1989, followed by the Croatian Democratic Union on 17 June 1989. In December 1989, Ivica Račan became the head of the reformed Communist party. At the same time, the party cancelled political trials, released political prisoners and endorsed a multi-party political system. The Civil Organisations Act was formally amended to allow political parties on 11 January 1990, legalising the parties that were already founded.
By the time of the first round of the first multi-party elections, held on 22 April 1990, there were 33 registered parties. The most relevant parties and coalitions were the League of Communists of Croatia – Party of Democratic Changes (the renamed Communist party), the Croatian Democratic Union (HDZ), and the Coalition of People's Accord (KNS), which included the HSLS led by Dražen Budiša, and the HSS, which resumed operating in Croatia in December 1989. The runoff election was held on 6 May 1990. The HDZ, led by Franjo Tuđman, won ahead of the reformed Communists and the KNS. The KNS, led by Savka Dabčević-Kučar and Miko Tripalo – who had led the Croatian Spring – soon splintered into individual parties. The HDZ maintained a parliamentary majority until the 2000 parliamentary election, when it was defeated by the Social Democratic Party of Croatia (SDP), led by Račan. Franjo Gregurić, of the HDZ, was appointed prime minister to head a national unity government in July 1991 as the Croatian War of Independence escalated in intensity. His appointment lasted until August 1992. During his term, Croatia's declaration of independence from Yugoslavia took effect on 8 October 1991. The HDZ returned to power in the 2003 parliamentary election, while the SDP remained the largest opposition party.
Franjo Tuđman won the presidential elections in 1992 and 1997. During his terms, the Constitution of Croatia, adopted in 1990, provided for a semi-presidential system. After Tuđman's death in 1999, the constitution was amended and much of the presidential powers were transferred to the parliament and the government. Stjepan Mesić won two consecutive terms in 2000 and 2005 on a Croatian People's Party (HNS) ticket. Ivo Josipović, an SDP candidate, won the presidential elections in December 2009 and January 2010. Kolinda Grabar-Kitarović defeated Josipović in the January 2015 election run-off, becoming the first female president of Croatia.
In January 2020, former prime minister Zoran Milanović of the Social Democrats (SDP) won the presidential election, defeating the center-right incumbent Kolinda Grabar-Kitarović of the ruling Croatian Democratic Union (HDZ) in the second round.
In July 2020, the ruling right-wing HDZ won the parliamentary election, and the HDZ-led coalition of Prime Minister Andrej Plenković, in power since 2016, continued to govern.
See also
List of political parties in Croatia
Foreign relations of Croatia
Left-wing politics in Croatia
Far-right politics in Croatia
References
Economy of Croatia
The economy of Croatia is a high-income, service-based social market economy with the tertiary sector accounting for 70% of total gross domestic product (GDP).
Croatia has a fully integrated and globalized economy. Croatia's road to globalization started as soon as the country gained independence, with tourism as one of the country's core industries dependent on the global market. Croatia joined the World Trade Organization in 2000 and NATO in 2009, has been a member of the European Union since 1 July 2013, and joined the Eurozone and the Schengen Area on 1 January 2023. Croatia is also negotiating membership of the OECD, which it hopes to join by 2025. Further integration into EU structures will continue in the coming years, including participation in ESA and CERN, as well as EEA membership within the next 24 months.
With its entry into the Eurozone, Croatia is now classified as a developed country or an advanced economy, a designation given by the IMF to highly developed industrial nations, with GDP (nominal) per capita above $20,000, which includes all members of the Eurozone.
Croatia was hit hard by the 2008 global financial crisis, which caused a significant downturn in economic growth and slowed progress on economic reform, resulting in six years of recession and a cumulative decline in GDP of 12.5%. Croatia formally emerged from the recession in the fourth quarter of 2014 and recorded continuous GDP growth until 2020. The Croatian economy reached pre-crisis levels in 2019, but due to the coronavirus pandemic GDP decreased by 8.4% in 2020. Growth rebounded in 2021, and Croatia recorded its largest year-over-year GDP growth since 1991.
Croatia's post-pandemic recovery was supported by strong private consumption, better-than-expected performance in the tourism industry, and a boom in merchandise exports. Croatian exports grew rapidly in 2021 and 2022, by nearly 25% and 26% respectively, reaching 143.7 billion kuna in 2021 and a projected 182 billion kuna in 2022. The economy also saw continued rapid growth on the back of strong tourism receipts and export figures, as well as a rapidly expanding ICT sector whose revenue now rivals that of tourism. The ICT sector alone generates €7 billion of service exports and is expected to expand further in 2023 and 2024 at an average of 15% per year.
In 2022, the Croatian economy was expected to grow by between 5.9% and 7.8% in real terms and to reach between $72 billion and $73.6 billion according to preliminary estimates by the Croatian Government, surpassing early estimates of 491 billion kuna, or $68.5 billion. Croatian GDP per capita at purchasing power parity was projected to exceed $40,000 for the first time in 2022; however, given that the economy went through six years of deep recession, catching up will take several more years of high growth. The economic outlook for 2023 is mixed and depends largely on how the big Eurozone economies perform. Croatia's largest trading partners (Italy, Germany, Austria, Slovenia and France) are expected to slow down but avoid recession according to the latest economic projections, so the Croatian economy could see better-than-expected results in 2023. Early projections of between 1% and 2.6% economic growth in 2023, with inflation at 7%, represent a significant slowdown for the country; however, Croatia is experiencing a major internal and inward investment cycle unparalleled in recent history. With EU recovery funds of €8.7 billion, large EU investments in the recently earthquake-affected areas of Croatia, major EU-supported and funded investments by local businesses in the renewable energy sector, major investments in transport infrastructure, and a rapidly expanding ICT sector, the Croatian economy could see continued rapid growth in 2023.
Tourism is one of the main pillars of the Croatian economy, comprising 19.6% of Croatia's GDP. Croatia is working to become an energy powerhouse with its floating liquefied natural gas (LNG) regasification terminal on the island of Krk and investments in green energy, particularly wind, solar and geothermal energy. A 17 MW geothermal power plant opened in Ciglena in late 2019, the largest binary-technology power plant in continental Europe, and work on a second one started in the summer of 2021. The government intends to spend about $1.4 billion on grid modernisation, with a goal of increasing renewable energy source connections by at least 800 MW by 2026 and 2,500 MW by 2030, and predicts that renewable energy sources as a share of total energy consumption will grow to 36.4% in 2030 and to 65.6% in 2050.
In 2021, Croatia joined the list of countries with their own automobile industry when production of Rimac Automobili's Nevera began. The company also took over Bugatti Automobiles in November of the same year and started building its new headquarters in Zagreb, the "Rimac Campus", which will serve as the company's international research and development (R&D) and production base for all future Rimac products, as well as the home of R&D for future Bugatti models. The company also plans to build battery systems for other manufacturers in the automotive industry.
Under the new joint venture, however, future Bugatti models will continue to be built at Bugatti's Molsheim plant in France.
On Friday, 12 November 2021, Fitch raised Croatia's credit rating by one level, from 'BBB-' to 'BBB', the country's highest credit rating in history, with a positive outlook, noting progress in preparations for euro area membership and a strong recovery of the Croatian economy from the pandemic crisis.
In late March 2022, the Croatian Bureau of Statistics announced that Croatia's industrial output had risen by 4% in February, growing for the 15th month in a row. Croatia continued to post strong growth during 2022, fuelled by tourism revenue and increased exports. According to a preliminary estimate, Croatia's GDP in Q2 grew by 7.7% from the same period of 2021. The International Monetary Fund (IMF) projected in early September 2022 that Croatia's economy would expand by 5.9% in 2022, whilst the EBRD expected Croatian GDP growth to reach 6.5% by the end of 2022. Pfizer announced a new production plant in Savski Marof, while the Croatian IT industry grew by 3.3%, confirming a trend that began with the coronavirus pandemic: Croatia's digital economy increased by 16 percent on average annually from 2019 to 2021, and by 2030 its value could reach 15 percent of GDP, with the ICT sector the main driver of that growth.
Croatia joined both the Eurozone and the Schengen Area in January 2023, which strengthens the country's integration into the European economy and eases cross-border trade with European countries and other trading partners. The minimum wage is expected to rise to a net 700 euro in 2023, further increasing consumer spending and helping households cope with the high inflation rate.
History
Pre-1990
When Croatia was still part of the Dual Monarchy, its economy was largely agricultural. However, modern industrial companies were also located in the vicinity of the larger cities. The Kingdom of Croatia had a high ratio of population working in agriculture. Many industrial branches developed in that time, like forestry and wood industry (stave fabrication, the production of potash, lumber mills, shipbuilding). The most profitable one was stave fabrication, the boom of which started in the 1820s with the clearing of the oak forests around Karlovac and Sisak and again in the 1850s with the marshy oak masses along the Sava and Drava rivers. Shipbuilding in Croatia played a huge role in the 1850s Austrian Empire, especially the long-range sailing boats. Sisak and Vukovar were the centres of river-shipbuilding. Slavonia was also mostly an agricultural land and it was known for its silk production. Agriculture and the breeding of cattle were the most profitable occupations of the inhabitants. It produced corn of all kinds, hemp, flax, tobacco, and great quantities of liquorice.
The first steps towards industrialization began in the 1830s, and in the following decades the construction of large industrial enterprises took place. During the second half of the 19th century and the early 20th century there was an upsurge of industry in Croatia, strengthened by the construction of railways and the development of electric power production. However, industrial output remained lower than agricultural output. Regional differences were high. Industrialization was faster in inner Croatia than in other regions, while Dalmatia remained one of the poorest provinces of Austria-Hungary. The slow rate of modernization and rural overpopulation caused extensive emigration, particularly from Dalmatia. According to estimates, roughly 400,000 Croats emigrated from Austria-Hungary between 1880 and 1914. In 1910, 8.5% of the population of Croatia-Slavonia lived in urban settlements.
In 1918 Croatia became part of the Kingdom of Yugoslavia, which was in the interwar period one of the least developed countries in Europe. Most of its industry was based in Slovenia and Croatia, but further industrial development was modest and centered on textile mills, sawmills, brick yards and food-processing plants. The economy was still traditionally based on agriculture and raising of livestock, with peasants accounting for more than half of Croatia's population.
In 1941 the Independent State of Croatia (NDH), a World War II puppet state of Germany and Italy, was established in parts of Axis-occupied Yugoslavia. The economic system of NDH was based on the concept of "Croatian socialism". The main characteristic of the new system was the concept of a planned economy with high levels of state involvement in economic life. The fulfillment of basic economic interests was primarily ensured with measures of repression. All large companies were placed under state control and the property of the regime's national enemies was nationalized. Its currency was the NDH kuna. The Croatian State Bank was the central bank, responsible for issuing currency. As the war progressed the government kept printing more money and its amount in circulation was rapidly increasing, resulting in high inflation rates.
After World War II, the new Communist Party of Yugoslavia resorted to a command economy on the Soviet model of rapid industrial development. In accordance with the socialist plan, mainly companies in the pharmaceutical industry, the food industry and the consumer goods industry were founded in Croatia. Metal and heavy industry was mainly promoted in Bosnia and Serbia. By 1948 almost all domestic and foreign-owned capital had been nationalized. The industrialization plan relied on high taxation, fixed prices, war reparations, Soviet credits, and export of food and raw materials. Forced collectivization of agriculture was initiated in 1949. At that time 94% of agricultural land was privately owned, and by 1950 96% was under the control of the social sector. A rapid improvement of food production and the standard of living was expected, but due to bad results the program was abandoned three years later.
Throughout the 1950s Croatia experienced rapid urbanization. Decentralization came in 1965 and spurred the growth of several sectors, including the prosperous tourist industry. SR Croatia was, after SR Slovenia, the second most developed republic in Yugoslavia, with a roughly 55% higher GDP per capita than the Yugoslav average, generating 31.5% of Yugoslav GDP, or $30.1 billion, in 1990. Croatia and Slovenia accounted for nearly half of the total Yugoslav GDP, and this was reflected in the overall standard of living. In the mid-1960s, Yugoslavia lifted emigration restrictions and the number of emigrants increased rapidly. In 1971, 224,722 workers from Croatia were employed abroad, mostly in West Germany. Foreign remittances contributed $2 billion annually to the economy by 1990. Profits gained through Croatia's industry were used to develop poor regions in other parts of former Yugoslavia, so Croatia contributed much more to the federal Yugoslav economy than it gained in return. This, coupled with austerity programs and hyperinflation in the 1980s, led to discontent in both Croatia and Slovenia that eventually fuelled political movements calling for independence.
Transition and war years
In the late 1980s and early 1990s, with the collapse of socialism and the beginning of economic transition, Croatia faced considerable economic problems stemming from:
the legacy of longtime communist mismanagement of the economy;
damage during the internecine fighting to bridges, factories, power lines, buildings, and houses;
the large refugee and displaced population, both Croatian and Bosnian;
the disruption of economic ties; and
mishandled privatization.
At the time Croatia gained independence, its economy (like the whole Yugoslav economy) was in the middle of a recession. Privatization under the new government had barely begun when war broke out in 1991. As a result of the Croatian War of Independence, infrastructure sustained massive damage in 1991–92, especially the revenue-rich tourism industry. Privatization in Croatia and the transformation from a planned economy to a market economy were thus slow and unsteady, largely as a result of public mistrust that arose when many state-owned companies were sold to the politically well-connected at below-market prices. With the end of the war, Croatia's economy recovered moderately, but corruption, cronyism, and a general lack of transparency stymied economic reforms and foreign investment. The privatization of large government-owned companies was practically halted during the war and in the years immediately following the conclusion of peace. As of 2000, roughly 70% of Croatia's major companies were still state-owned, including water, electricity, oil, transportation, telecommunications, and tourism.
The early 1990s were characterized by high inflation rates. In 1991 the Croatian dinar was introduced as a transitional currency, but inflation continued to accelerate. The anti-inflationary stabilization steps in 1993 decreased retail price inflation from a monthly rate of 38.7% to 1.4%, and by the end of the year, Croatia experienced deflation. In 1994 Croatia introduced the kuna as its currency.
As a result of the macro-stabilization programs, the negative growth of GDP during the early 1990s stopped and reversed into a positive trend. Post-war reconstruction activity provided another impetus to growth. Consumer spending and private sector investments, both of which were postponed during the war, contributed to the growth in 1995–1997. Croatia began its independence with a relatively low external debt because the debt of Yugoslavia was not shared among its former republics at the beginning. In March 1995 Croatia agreed with the Paris Club of creditor governments and took 28.5% of Yugoslavia's previously non-allocated debt over 14 years. In July 1996 an agreement was reached with the London Club of commercial creditors, when Croatia took 29.5% of Yugoslavia's debt to commercial banks. In 1997 around 60 percent of Croatia's external debt was inherited from former Yugoslavia.
At the beginning of 1998 a value-added tax was introduced. The central government budget was in surplus that year, most of which was used to repay foreign debt. Government debt to GDP fell from 27.3% to 26.2% by the end of 1998. However, the consumer boom was disrupted in mid-1998 as a result of a banking crisis in which 14 banks went bankrupt. Unemployment increased and GDP growth slowed to 1.9%. The recession that began at the end of 1998 continued through most of 1999, and after a period of expansion GDP contracted by 0.9% in 1999. In 1999 the government tightened its fiscal policy and revised the budget with a 7% cut in spending.
In 1999 the private sector share in GDP reached 60%, which was significantly lower than in other former socialist countries. After several years of successful macroeconomic stabilization policies, low inflation and a stable currency, economists warned that the lack of fiscal changes and the expanding role of the state in the economy caused the decline in the late 1990s and were preventing sustainable economic growth.
Economy since 2000
The new government led by the president of SDP, Ivica Račan, carried out a number of structural reforms after it won the parliamentary elections on 3 January 2000. The country emerged from the recession in the 4th quarter of 1999 and growth picked up in 2000. Due to overall increase in stability, the economic rating of the country improved and interest rates dropped. Economic growth in the 2000s was stimulated by a credit boom led by newly privatized banks, capital investment, especially in road construction, a rebound in tourism and credit-driven consumer spending. Inflation remained tame and the currency, the kuna, stable.
In 2000 Croatia generated 5.899 billion kuna in total income from the shipbuilding sector, which employed 13,592 people. Total exports in 2001 amounted to $4,659,286,000, of which 54.7% went to EU countries. Croatia's total imports were $9,043,699,000, 56% of which originated from the EU.
Unemployment reached its peak in late 2002 but has since been steadily declining. In 2003, the nation's economy officially recovered to the level of GDP it had in 1990. In late 2003 a new government led by HDZ took office. Unemployment continued falling, powered by growing industrial production and rising GDP rather than only seasonal changes from tourism. Unemployment reached an all-time low in 2008, when the annual average rate was 8.6%, GDP per capita peaked at $16,158, and public debt as a percentage of GDP decreased to 29%. Most economic indicators remained positive in this period except for external debt, as Croatian firms increasingly took loans from foreign sources. Between 2003 and 2007, Croatia's private-sector share of GDP increased from 60% to 70%.
The Croatian National Bank had to take steps to curb further growth of indebtedness of local banks with foreign banks. The dollar debt figure is quite adversely affected by the EUR/USD ratio—over a third of the increase in debt since 2002 is due to currency value changes.
2009–2015
Economic growth was hurt by the global financial crisis. Immediately after the crisis it seemed that Croatia did not suffer serious consequences like some other countries. However, in 2009 the crisis gained momentum and GDP declined; the contraction continued, at a slower pace, during 2010. In 2011 GDP stagnated, as the growth rate was zero. Since the global crisis hit the country, the unemployment rate increased steadily, resulting in the loss of more than 100,000 jobs. While unemployment was 9.6% in late 2007, in January 2014 it peaked at 22.4%. In 2010 the Gini coefficient was 0.32. In September 2012, the Fitch ratings agency unexpectedly improved Croatia's economic outlook from negative to stable, reaffirming Croatia's then-current BBB rating. The slow pace of privatization of state-owned businesses and an over-reliance on tourism have also been a drag on the economy.
Croatia joined the European Union on 1 July 2013 as the 28th member state. The Croatian economy is heavily interdependent with the other principal economies of Europe, and any negative trends in these larger EU economies also have a negative impact on Croatia. Italy, Germany and Slovenia are Croatia's most important trade partners. In spite of the rather slow post-recession recovery, in terms of income per capita Croatia is still ahead of some European Union member states such as Bulgaria and Romania. In terms of average monthly wage, Croatia is ahead of 9 EU members (Czech Republic, Estonia, Slovakia, Latvia, Poland, Hungary, Lithuania, Romania, and Bulgaria).
The annual average unemployment rate in 2014 was 17.3%, giving Croatia the third-highest unemployment rate in the European Union, after Greece (26.5%) and Spain (24%). Of particular concern is the heavily backlogged judiciary, combined with inefficient public administration, especially regarding land ownership issues and corruption in the public sector. Unemployment is regionally uneven: it is very high in the eastern and southern parts of the country, nearing 20% in some areas, while relatively low in the north-west and in the larger cities, where it is between 3 and 7%. In 2015 external debt had risen by 2.7 billion euros since the end of 2014, to around €49.3 billion.
2016–2020
During 2015 the Croatian economy returned to slow but positive growth, which continued during 2016; at the end of the year, seasonally adjusted growth was recorded at 3.5%. The better-than-expected figures and higher tax receipts during 2016 enabled the Croatian Government to repay debt and narrow the current account deficit during Q3 and Q4 of 2016. This growth in economic output, coupled with the reduction of government debt, had a positive impact on the financial markets, with many ratings agencies revising their outlook from negative to stable, the first upgrade of Croatia's credit rating since 2007. Due to consecutive months of economic growth and demand for labour, plus the outflow of residents to other European countries, Croatia recorded its biggest fall in the number of unemployed in November 2016, with the unemployment rate falling from 16.1% to 12.7%.
2020–present
2020
The COVID-19 pandemic caused more than 400,000 workers to file for economic aid of 4,000 HRK per month. In the first quarter of 2020, Croatian GDP rose by 0.2%, but in Q2 the Government of Croatia announced a 15.1% drop, the biggest quarterly GDP plunge since GDP began to be measured. Economic activity also plunged in Q3 2020, when GDP fell by a further 10.0%.
In autumn 2020 the European Commission estimated the total GDP loss in 2020 at 9.6%. Growth was expected to resume by the end of Q1 2021 and in the second quarter of 2021, at 1.4% and 3.0% respectively, meaning that Croatia was set to reach 2019 levels by 2022.
2021
In July 2021 the growth projection was raised to 5.4% due to the strong outturn in the first quarter and positive high-frequency indicators for consumption, construction, industry and tourism. Croatia outperformed these projections: in November 2021, real GDP growth for the year was calculated at 8.1%. The recovery was supported by strong private consumption, the better-than-expected performance of tourism and the ongoing resilience of the export sector. Preliminary data pointed to tourism-related expenditure already exceeding 2019 levels, which supported both employment and consumption. Exports of goods also continued to perform strongly (up 43% year on year in Q2 2021), pointing to resilient competitiveness. Expressed in euros, Croatian merchandise exports in the first nine months of 2021 amounted to 13.3 billion euros, an annual increase of 24.6 per cent. At the same time, imports rose 20.3 per cent to 20.4 billion euros. The coverage of imports by exports for the first nine months was 65.4 per cent. This made 2021 a record year for Croatian exports, exceeding the 2019 figure by 2 billion euros.
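As a rough illustration of how the coverage figure quoted above is derived (a calculation using the rounded nine-month totals, not a figure taken from the source), the export-to-import ratio is:
\[ \text{coverage} = \frac{\text{exports}}{\text{imports}} = \frac{13.3}{20.4} \approx 0.65 \]
i.e. roughly 65 per cent; the small difference from the reported 65.4 per cent comes from rounding the export and import totals to one decimal place.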
Exports recovered in all major markets, namely all EU and CEFTA countries. Within the EU market, lower export results were recorded only with Sweden, Belgium and Luxembourg. Italy is again the main market for Croatian products, followed by Germany and Slovenia. Apart from the large contribution of crude oil that Ina sends to Hungary for processing at the Mol refinery, exports of artificial fertilizers from Petrokemija also contributed significantly to growth.
For 2022, the Commission revised its projection for Croatia's economic growth downwards to 5.6%, from the 5.9% predicted in July 2021. The Commission again confirmed that the volume of Croatia's GDP should reach its 2019 level during 2022, and projected GDP growth of 3.4% in 2023. The Commission warned that the key downside risks stemmed from Croatia's relatively low vaccination rates, which could lead to stricter containment measures, and from continued delays in earthquake-related reconstruction. On the upside, Croatia's entry into the Schengen Area and euro adoption towards the end of the forecast period could benefit investment and trade.
On 12 November 2021 Fitch raised Croatia's credit rating by one level, from 'BBB-' to 'BBB', Croatia's highest credit rating in history, with a positive outlook, noting progress in preparations for Eurozone membership and a strong recovery of the Croatian economy from the pandemic crisis. Euro adoption was further secured by the failure of the eurosceptic party Hrvatski Suverenisti in its bid to force a referendum to block the adoption of the euro in Croatia. In December 2021 Croatia's industrial production increased for the thirteenth consecutive month, with growth recorded in all five aggregates; industrial production in 2021 as a whole increased by 6.7 percent.
2022
In late March 2022 the Croatian Bureau of Statistics announced that Croatia's industrial output rose by 4% in February, thus growing for 15 months in a row. Croatia continued to have strong growth during 2022, fuelled by tourism revenue and increased exports. According to a preliminary estimate, Croatia's GDP in Q2 grew by 7.7% from the same period of 2021. The International Monetary Fund (IMF) projected in early September 2022 that Croatia's economy would expand by 5.9% in 2022, whilst the EBRD expected Croatian GDP growth to reach 6.5% by the end of 2022. Pfizer announced a new production plant in Savski Marof, whilst the Croatian IT industry grew 3.3%, confirming a trend that began during the coronavirus pandemic in which Croatia's digital economy grew by 16 percent on average annually from 2019 to 2021. It is estimated that by 2030 its value could reach 15 percent of GDP, with the ICT sector being the main driver of that growth.
On 12 July 2022, the Eurogroup approved Croatia becoming the 20th member of the Eurozone, with the formal introduction of the Euro currency to take place on 1 January 2023.
Croatia was also set to join the Schengen Area in 2023. The minimum wage was expected to rise to a net €700 by 2023, further supporting consumer spending.
Sectors
In 2022, the sector with the highest number of companies registered in Croatia was services, with 110,085 companies, followed by retail trade and construction with 22,906 and 22,121 companies respectively.
Industry
Tourism
Tourism is a notable source of income during the summer and a major industry in Croatia. It dominates the Croatian service sector and accounts for up to 20% of Croatian GDP. Annual tourist industry income for 2011 was estimated at €6.61 billion. Its positive effects are felt throughout the economy of Croatia in terms of increased business volume in retail, processing-industry orders and summer seasonal employment. The industry is considered an export business because it significantly reduces the country's external trade imbalance. Since the conclusion of the Croatian War of Independence, the tourist industry has grown rapidly, recording a fourfold rise in tourist numbers, with more than 10 million tourists each year. The most numerous are tourists from Germany, Slovenia, Austria and the Czech Republic, as well as from Croatia itself. The average length of a tourist stay in Croatia is 4.9 days.
The bulk of the tourist industry is concentrated along the Adriatic coast. Opatija was the first holiday resort, beginning in the middle of the 19th century; by the 1890s it had become one of the most significant European health resorts. Later a large number of resorts sprang up along the coast and on the numerous islands, offering services ranging from mass tourism to catering and various niche markets. The most significant of these are nautical tourism, as there are numerous marinas with more than 16 thousand berths, and cultural tourism, relying on the appeal of medieval coastal cities and the numerous cultural events taking place during the summer. Inland areas offer mountain resorts, agrotourism and spas. Zagreb is also a significant tourist destination, rivalling major coastal cities and resorts.
Croatia's unpolluted marine areas are reflected in its numerous nature reserves, 99 Blue Flag beaches and 28 Blue Flag marinas. Croatia is ranked as the 18th most popular tourist destination in the world. About 15% of these visitors (over one million per year) are involved with naturism, an industry for which Croatia is world-famous. It was also the first European country to develop commercial naturist resorts.
Agriculture
The Croatian agricultural sector draws significant income from exports of blue fish, which in recent years have experienced a tremendous surge in demand, mainly from Japan and South Korea. Croatia is a notable producer of organic food, much of which is exported to the European Union. Croatian wines, olive oil and lavender are particularly sought after. The value of Croatia's agriculture sector is around 3.1 billion, according to preliminary data released by the national statistics office.
Croatia has around 1.72 million hectares of agricultural land; however, the total utilized agricultural area in 2020 was around 1.506 million hectares, of which permanent pasture constituted 536,000 hectares, or some 35.5% of the land in agricultural use. Croatia imports significant quantities of fruit and olive oil, despite having large domestic production of both. In terms of livestock, Croatian agriculture had some 15.2 million poultry, 453,000 cattle, 802,000 sheep, 1,157,000 pigs and 88,000 goats. Croatia also produced 67,000 tons of blue fish, of which some 9,000 tons were tuna, farmed and exported to Japan, South Korea and the United States.
In 2022, Croatia produced:
1.66 million tons of maize;
970 thousand tons of wheat;
524 thousand tons of sugar beet (the beet is used to manufacture sugar and ethanol);
319 thousand tons of barley;
196 thousand tons of soybean;
107 thousand tons of potato;
59 thousand tons of rapeseed;
146 thousand tons of grape;
154 thousand tons of sunflower seed;
There were also smaller productions of other agricultural products, such as apples (93 thousand tons), triticale (62 thousand tons) and olives (34 thousand tons).
Infrastructure
Transport
The highlight of Croatia's recent infrastructure developments is its rapidly developed motorway network, largely built in the late 1990s and especially in the 2000s. By January 2022, Croatia had completed more than of motorways, connecting Zagreb to most other regions and following various European routes and four Pan-European corridors. The busiest motorways are the A1, connecting Zagreb to Split and the A3, passing east–west through northwest Croatia and Slavonia. A widespread network of state roads in Croatia acts as motorway feeder roads while connecting all major settlements in the country. The high quality and safety levels of the Croatian motorway network were tested and confirmed by several EuroTAP and EuroTest programs.
Croatia has an extensive rail network spanning , including of electrified railways and of double track railways. The most significant railways in Croatia are found within the Pan-European transport corridors Vb and X connecting Rijeka to Budapest and Ljubljana to Belgrade, both via Zagreb. All rail services are operated by Croatian Railways.
There are international airports in Zagreb, Zadar, Split, Dubrovnik, Rijeka, Osijek and Pula. As of January 2011, Croatia complies with International Civil Aviation Organization aviation safety standards and the Federal Aviation Administration upgraded it to Category 1 rating.
The busiest cargo seaport in Croatia is the Port of Rijeka and the busiest passenger ports are Split and Zadar. In addition to those, a large number of minor ports serve an extensive system of ferries connecting numerous islands and coastal cities in addition to ferry lines to several cities in Italy. The largest river port is Vukovar, located on the Danube, representing the nation's outlet to the Pan-European transport corridor VII.
Energy
There are of crude oil pipelines in Croatia, connecting the Port of Rijeka oil terminal with refineries in Rijeka and Sisak, as well as several transhipment terminals. The system has a capacity of 20 million tonnes per year. The natural gas transportation system comprises of trunk and regional natural gas pipelines, and more than 300 associated structures, connecting production rigs, the Okoli natural gas storage facility, 27 end-users and 37 distribution systems.
Croatian production of energy sources covers 85% of nationwide natural gas demand and 19% of oil demand. In 2008, Croatia's primary energy production structure comprised natural gas (47.7%), hydro power (25.4%), crude oil (18.0%), fuel wood (8.4%) and other renewable energy sources (0.5%). In 2009, net total electrical power production in Croatia reached 12,725 GWh and Croatia imported 28.5% of its electricity needs. The bulk of Croatian imports is supplied by the Krško Nuclear Power Plant in Slovenia, which is 50% owned by Hrvatska elektroprivreda and provides 16% of Croatia's electricity.
Electricity:
production: 14,728 GWh (2021)
consumption: 18,869 GWh (2021)
exports: 7,544 GWh (2021)
imports: 11,505 GWh (2021)
Electricity – production by source:
hydro: 26% (2022)
thermal: 24% (2022)
nuclear: 14% (2022)
renewable: 8% (2022)
import: 28% (2022)
Crude oil:
production: 615 thousand tons (2021)
consumption: 2.456 million tons (2021)
exports: 472 thousand tons (2021)
imports: 2.300 million tons (2021)
proved reserves: (2017)
Natural gas:
production: 746 million m³ (2021)
consumption: 2.906 billion m³ (2021)
exports: 126 million m³ (2021)
imports: 2.291 billion m³ (2021)
proved reserves: 21.094 billion m³ (2019)
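As an illustrative cross-check of the 2021 electricity figures listed above (derived arithmetic, not data from the source), domestic production plus imports minus exports should roughly equal consumption:
\[ 14{,}728 + 11{,}505 - 7{,}544 = 18{,}689\ \text{GWh} \approx 18{,}869\ \text{GWh} \]
The residual of about 180 GWh (around 1%) is attributable to differences in how the quantities are measured, such as network losses, own consumption and rounding.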
Stock exchanges
Zagreb Stock Exchange
Banking
Central bank:
Croatian National Bank
Major commercial banks:
Zagrebačka banka (owned by UniCredit from Italy)
Privredna banka Zagreb (owned by Intesa Sanpaolo from Italy)
Hrvatska poštanska banka
OTP Banka (owned by OTP Bank from Hungary)
Raiffeisen Bank Austria (owned by Raiffeisen from Austria)
Erste & Steiermärkische Bank (former Riječka banka, owned by Erste Bank from Austria)
Central Budget
Overall Budget:
Revenues:
187.30 billion kuna (€24.83 billion), 2023
Expenditures:
200.92 billion kuna (€26.63 billion), 2023
Expenditure by ministries for 2023:
Labor and Pension System, Family and Social Policy – €8.12 billion
Finance – €6.64 billion
Science and Education – €3.41 billion
Health – €2.72 billion
Economy and Sustainable Development – €1.96 billion
Maritime Affairs, Transport and Infrastructure – €1.41 billion
Agriculture – €1.16 billion
Interior – €1.05 billion
Defence – €1.04 billion
Justice and Public Administration – €0.54 billion
Construction, Physical Planning and State Property – €0.51 billion
Regional Development and EU funds – €0.50 billion
Culture and Media – €0.42 billion
Tourism and Sport – €0.17 billion
Veterans' Affairs – €0.16 billion
Foreign and European Affairs – €0.13 billion
Economic indicators
Main economic indicators for the period 2000–2022 are compiled by the Croatian Bureau of Statistics.
From the CIA World Factbook 2021.
Real GDP (purchasing power parity):
$123.348 billion
Real GDP growth rate:
13.07%
Real GDP per capita:
$31,600
GDP (official exchange rate): $60.687 billion (2019 est.)
Labor force:
1.656 million (2020 est.)
Labor force – by occupation:
agriculture 1.9%, industry 27.3%, services 70.8% (2017 est.)
Unemployment rate:
8.68%
Population below poverty line:
18.3% (2018 est.)
Household income or consumption by percentage share:
lowest 10%:
2.7%
highest 10%:
23%
(2015 est.)
Distribution of family income – Gini index:
28.9 (2018 est.)
Inflation rate (consumer prices):
2.55%
Budget:
revenues:
$212.81 billion (2019 est.)
expenditures:
$211.069 billion, (2019 est.)
Public debt:
104.89% of GDP
Taxes and revenues: 20.6% (of GDP) (2020 est.)
Agricultural products:
maize, wheat, sugar beet, milk, barley, soybeans, potatoes, pork, grapes, sunflower seed
Industries:
chemicals and plastics, machine tools, fabricated metal, electronics, pig iron and rolled steel products, aluminum, paper, wood products, construction materials, textiles, shipbuilding, petroleum and petroleum refining, food and beverages, tourism
Industrial production growth rate:
9.11%
Current account balance:
$2.082 billion
Exports:
$35.308 billion
Exports – commodities:
refined petroleum, packaged medicines, crude petroleum, electricity, electrical transformers
Exports – partners:
Italy 13%,
Germany 13%,
Slovenia 10%,
Bosnia and Herzegovina 9%,
Austria 6%,
Serbia 5% (2019)
Imports:
$36.331 billion
Imports – commodities:
crude petroleum, cars, refined petroleum, packaged medicines, electricity (2019)
Imports – partners:
Germany 14%,
Italy 14%,
Slovenia 11%,
Hungary 7%,
Austria 6%
(2019)
Reserves of foreign exchange and gold:
$28.309 billion (31 December 2021 est.)
Debt – external:
$48.263 billion (2019 est.)
Currency:
euro (EUR)
Exchange rates:
EUR per US$1 –
0.845
See also
Economy of Europe
Areas of Special State Concern (Croatia)
Croatia and the euro
Croatia and the World Bank
Croatian brands
Taxation in Croatia
References
External links
Croatian National Bank
Croatian Chamber of Economy
GDP per inhabitant varied by one to six across the EU27 Member States
Tariffs applied by Croatia as provided by ITC's Market Access Map, an online database of customs tariffs and market requirements.
Transport in Croatia
Transport in Croatia relies on several main modes, including transport by car, train, ship and plane. Road transport incorporates a comprehensive network of state, county and local routes augmented by a network of highways for long-distance travelling. Water transport can be divided into sea transport, based on the ports of Rijeka, Ploče, Split and Zadar, and river transport, based on the Sava, the Danube and, to a lesser extent, the Drava. Croatia has 9 international airports and several airlines, of which the most notable are Croatia Airlines and Trade Air. The rail network is fairly developed, but for inter-city transport buses tend to be far more common than rail.
Air transport
Croatia has 9 civil, 13 sport and 3 military airports. There are nine international civil airports: Zagreb Airport, Split Airport, Dubrovnik Airport, Zadar Airport, Pula Airport, Rijeka Airport (on the island of Krk), Osijek Airport, Bol and Mali Lošinj. The two busiest airports in the country are those serving Zagreb and Split.
By the end of 2010, significant investments in the renovation of Croatian airports began. New modern and spacious passenger terminals were opened in 2017 at Zagreb and Dubrovnik Airports and in 2019 at Split Airport. The new passenger terminals at Dubrovnik Airport and Zagreb Airport are the first in Croatia to feature jet bridges.
Airports that serve cities on the Adriatic coast receive the majority of the traffic during the summer season due to the large number of flights from foreign air carriers (especially low-cost) that serve these airports with seasonal flights.
Croatia Airlines is the state-owned flag carrier of Croatia. It is headquartered in Zagreb and its main hub is Zagreb Airport.
Croatia is connected by air with a large number of foreign (especially European) destinations, while its largest cities are interconnected by a significant number of domestic air routes, such as lines between Zagreb and Split, Dubrovnik and Zadar, between Osijek and Rijeka, between Osijek and Split and between Zadar and Pula. These routes are operated by domestic air carriers such as Croatia Airlines and Trade Air.
Rail transport
Railway corridors
The Croatian railway network is classified into three groups: railways of international, regional and local significance.
The most important railway lines follow Pan-European Corridor V, branch B (Rijeka - Zagreb - Budapest) and Corridor X, which connect with each other in Zagreb. International passenger trains directly connect Croatia with two of its neighbouring countries (Slovenia and Hungary) and with several Central European countries at medium distance, such as the Czech Republic, Slovakia (during the summer season), Austria, Germany and Switzerland.
Dubrovnik and Zadar are the two most populous and well-known Croatian cities not connected to the railway network, while the city of Pula (together with the rest of westernmost Istria County) can only be reached directly by rail through Slovenia (unless one takes the railway company's organized bus service between Rijeka and Lupoglav). While most of the country's larger inland towns are connected to railways with regular passenger train operation (in contrast to the coastal part of the country), many small inland towns, villages and remote areas are served by trains running on regional or local corridors.
Infrastructure condition
Croatian railways use standard gauge (1,435 mm; 4 ft 8 1/2 in). The construction length of the railway network is 2,617 km (1,626 mi), of which 2,341 km (1,455 mi) are single-track corridors and 276 km (171 mi) are double-track corridors. According to the annual rail network report of Croatian Railways (2023 issue), 1,013 km (629 mi) of railways are electrified. The largest part of the country's railway infrastructure dates from the pre-World War II period, and more than half of the core routes were in fact built during the Habsburg monarchy, i.e. before World War I. Moreover, there was a significant lack of investment and of proper maintenance in Croatian railway infrastructure roughly from the time of the country's independence (1991) to the late 2000s, which resulted mainly in lower permitted track speeds, longer journey times and a decline in the overall quality of passenger transport, especially at the InterCity level since the 2010s. As a result, a fair number of routes lag significantly behind West European standards in terms of infrastructural condition.
However, major infrastructure improvements began in the early 2010s and continued through the 2020s, such as the full-profile reconstruction and/or upgrading of the country's international corridors and most of its regional and local corridors. Those improvements, among other things, have increased both maximum track speeds and operational safety, shortened travel times and modernized supporting infrastructure (stations, platforms and other equipment).
The first newly built railway in Croatia since 1967, the L214 line, was opened in December 2019.
The official rail speed record in Croatia is . Maximum speed reached in regular service is .
Passenger transport
All nationwide and commuter passenger rail services in Croatia are operated by the country's national railway company Croatian Railways.
Road transport
Road transport in Croatia has improved significantly since the time of Napoleon and the building of the Louisiana Road, and now compares favourably with most European countries. Croatian highways are widely regarded as among the most modern and safest in Europe. This is because the largest part of the Croatian motorway and expressway system (autoceste and brze ceste, respectively) was recently constructed (mainly in the 2000s), and further construction is continuing. The motorways in Croatia connect most major Croatian cities and all major seaports. The two longest routes, the A1 and the A3, span the better part of the country and the motorway network connects most major border crossings.
Tourism is of major importance for the Croatian economy, and as most tourists come to vacation in Croatia in their own cars, the highways serve to alleviate summer jams. They have also been used as a means of stimulating urgently needed economic growth, and for the sustainable development of this country. Croatia now has a considerable highway density for a country of its size, helping it cope with the consequences of being a transition economy and having suffered in the Croatian War of Independence.
Some of the most impressive parts of the road infrastructure in Croatia includes the Sveti Rok and Mala Kapela tunnels on the A1 motorway, and the Pelješac Bridge in the southernmost part of the country.
, Croatia has a total of of roads.
Traffic laws
The traffic signs adhere to the Vienna Convention on Road Signs and Signals.
The general speed limits are:
in inhabited areas 50 km/h
outside of inhabited areas 90 km/h
on marked expressways 110 km/h
on marked motorways 130 km/h
Some of the more technical safety measures include modern safety equipment in all new Croatian tunnels and several control centres which monitor highway traffic.
Motorways
Motorways (autocesta, plural autoceste) in Croatia are dual carriageway roads with at least two traffic lanes in each driving direction and an emergency lane. Direction road signs on Croatian motorways have a green background with white lettering, similar to the German Autobahn. Motorways are designated with "A" and the motorway number. , the Croatian motorway network is long, with additional of new motorways under construction.
The list of completed motorways is as follows (see individual articles for further construction plans and status):
A1, Zagreb - Bosiljevo - Split - Ploče (E71, E65)
A2, Zagreb - Krapina - Macelj (E59)
A3, Bregana - Zagreb - Lipovac (E70)
A4, Goričan - Varaždin/Čakovec - Zagreb (E71)
A5, Osijek - Đakovo - Sredanci (E73)
A6, Bosiljevo - Rijeka (E65)
A7, Rupa - Rijeka bypass (E61)
A8, Kanfanar interchange - Matulji (E751)
A9, Umag - Pula (E751)
A10, A1 Ploče interchange - Metković border crossing
A11, Velika Gorica - Lekenik
Tolls are charged on most Croatian motorways; exceptions are the A11 motorway, the Zagreb bypass and the Rijeka bypass, as well as sections adjacent to border crossings (except the eastbound A3). Payment at toll gates can be made with all major credit cards or in cash, in euros. Most motorways are covered by a closed toll collection system, in which a driver receives a ticket at the entrance gates and pays at the exit gates according to the number of sections travelled. Open toll collection is used on some bridges and tunnels and on short stretches of tolled highway, where drivers pay the toll immediately on arriving. Various forms of prepaid electronic toll collection are in place which allow quicker collection of tolls, usually at a discounted rate, as well as the use of dedicated toll plaza lanes (for the ENC electronic toll collection system).
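The difference between the two collection models described above can be sketched in a few lines of Python. This is only an illustration: the section names and prices below are hypothetical examples, not actual Croatian toll tariffs.

# Illustrative sketch of closed vs. open toll collection (hypothetical prices).

# Closed system: each motorway section has its own price; the entry ticket
# determines which sections were travelled, and the total is paid at the exit.
SECTION_PRICES = {
    ("Zagreb", "Karlovac"): 2.5,     # euros, hypothetical
    ("Karlovac", "Bosiljevo"): 3.0,
    ("Bosiljevo", "Split"): 18.0,
}

def closed_toll(entry: str, exit_: str, route: list[tuple[str, str]]) -> float:
    """Sum the prices of all sections between the entry and exit gates."""
    total, charging = 0.0, False
    for start, end in route:
        if start == entry:
            charging = True
        if charging:
            total += SECTION_PRICES[(start, end)]
        if end == exit_:
            break
    return total

# Open system: a single fixed toll is paid immediately at one toll plaza,
# as on some bridges, tunnels and short tolled stretches.
def open_toll(flat_price: float = 5.0) -> float:
    return flat_price

route = [("Zagreb", "Karlovac"), ("Karlovac", "Bosiljevo"), ("Bosiljevo", "Split")]
print(closed_toll("Zagreb", "Split", route))  # 23.5
print(open_toll())                            # 5.0

In the closed model the charge depends on the travelled sections (here 2.5 + 3.0 + 18.0 = 23.5), while in the open model a flat charge is collected on the spot.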
Expressways
The term brza cesta or expressway refers to limited-access roads specifically designated as such by legislation and marked with appropriate limited-access road traffic signs. The expressways may comprise two or more traffic lanes, while they normally do not have emergency lanes.
Polu-autocesta or semi-highway refers to a two-lane, undivided road running on one roadway of a motorway while the other is under construction. By legal definition, all semi-highways are expressways.
The expressway routes in Croatia usually correspond to a state road (see below) and are marked a "D" followed by a number. The "E" numbers are designations of European routes.
State roads
Major roads that aren't part of the motorway system are državne ceste (state routes). They are marked with the letter D and the road's number.
The most traveled state routes in Croatia are:
D1, connects Zagreb and Split via Lika - passes through Karlovac, Slunj, Plitvice, Korenica, Knin, Sinj.
D2, connects Varaždin and Osijek via Podravina - passes through Koprivnica, Virovitica, Slatina, Našice.
D8, connects Rijeka and Dubrovnik, widely known as Jadranska magistrala and part of E65 - runs along the coastline and connects many cities on the coast, including Crikvenica, Senj, Zadar, Šibenik, Trogir, Split, Omiš, Makarska and Ploče.
Since the construction of the A1 motorway beyond Gorski kotar started, the D1 and D8 have been used much less.
These routes are monitored by Croatian roadside assistance because they connect important locations. Like all state routes outside major cities, they are only two-lane arterials and do not support heavy traffic. All state routes are routinely maintained by Croatian road authorities. The road sign for a state route has a blue background and the route's designation in white. State routes have one, two or three-digit numbers.
County roads and minor roads
Secondary routes are known as county roads. They are marked with signs with yellow background and road number. These roads' designations are rarely used, but usually marked on regional maps if these roads are shown. Formally, their designation is the letter Ž and the number. County roads have four-digit numbers.
The least known are the so-called local roads. Their designations are never marked on maps or by roadside signs and as such are virtually unknown to the public. Their designations consist of the letter L and a five-digit number.
Bus traffic
Buses are the most widely accepted, cheapest and most widely used means of public transport. National bus traffic is very well developed, ranging from express buses that cover longer distances to bus connections between the smallest villages in the country, so it is possible to reach most of the remotest parts of Croatia by bus on a daily basis. Every larger town usually has a bus station with ticket offices and timetable information. Buses that run on national lines in Croatia (owned and run by private companies) are comfortable, modern vehicles featuring air conditioning and offering a pleasant level of travelling comfort.
National bus travel is generally divided into inter-city (Međugradski prijevoz), inter-county (Međužupanijski prijevoz) and county (local; Županijski prijevoz) transport. Although some bus companies focus primarily on inter-city lines, a company can, and most usually do, operate all or most of the above-mentioned modes of transport.
The primary goal of intercity buses is to connect the largest cities in the country with each other in the shortest possible time. Buses at the inter-city level usually offer far more frequent daily services and shorter journey times than trains, mostly due to the large number of competing companies and the quality of the country's motorway network. According to the bus companies' timetables, there are several types of inter-city bus lines. Some lines run directly on the highway to connect certain cities by the shortest route. Other lines run on lower-ranked roads (all the way or part of the way) even when there is a highway alternative, in order to connect settlements along the way, while some lines run on the highway and occasionally leave it temporarily to serve a smaller settlement nearby, thus giving that settlement a connection to an express service.
Buses on county lines usually run between larger cities or towns in a particular county, connecting towns and smaller villages along the way. These buses are mostly used by local residents, such as students, workers and occasional passengers, so the timetables and frequencies of these routes are mostly adjusted to the needs of passengers' daily migrations. Since there is no bus terminal in smaller villages, passengers who board buses at those stops buy a ticket from the driver, unless they have a monthly student or worker pass, in which case they must validate it each time they board the vehicle. Buses running on inter-county lines usually serve the same or a very similar purpose, except that they cross county borders to transport passengers to a more distant larger town or area.
There are many international bus routes from Croatia to the neighboring countries (Slovenia, Bosnia and Herzegovina, Serbia, Hungary) and to other European countries. International bus services correspond to European standards.
Zagreb has the largest and busiest bus terminal in Croatia. It is located near the downtown in Trnje district on the Marin Držić Avenue. The bus terminal is close to the main railway station and it is easy to reach by tram lines and by car.
Maritime and river transport
Maritime transport
Coastal infrastructure
The Republic of Croatia has six ports open to public traffic that are of outstanding (international) economic importance: Rijeka, Zadar, Šibenik, Split, Ploče and Dubrovnik. There are also numerous smaller public ports located along the country's coast.
Rijeka is the country's largest cargo port, followed by Ploče which is of great economic importance for the neighboring Bosnia and Herzegovina. The three most common destinations for foreign cruise ships are the ports of Dubrovnik, Split and Zadar. Split is the country's largest passenger port, serving as the public port for domestic ferry, conventional ship and catamaran services as well as for international ferry, cruise or mega cruise services.
Zadar has two public transport ports open to passenger traffic: one located in the town centre, served by conventional ship and catamaran services, and the other located in the suburb of Gaženica, serving ferry and cruise ship services. The Republic of Croatia identified the need to relieve Zadar's passenger port and the historic centre of Zadar by moving ferry traffic from the city centre to the new passenger port in Gaženica. Work on the construction of the new port began in 2009, and a new ferry port of approximately 100,000 square metres was opened to traffic in 2015. The advantages of the Port of Gaženica are the short distance from the city centre (3.5 kilometres), the proximity of the airport and a good traffic connection with the A1 motorway. The Port of Gaženica meets multiple traffic requirements: it serves domestic ferry traffic, international ferry traffic, passenger traffic on mega cruisers and RO-RO traffic, with all the necessary infrastructure and accompanying facilities. In 2019, the passenger port of Gaženica was named Port of the Year at the prestigious Seatrade Cruise Awards held in Hamburg.
Connection of islands and the mainland
Public transport on national conventional ship, catamaran and ferry lines, as well as on all occasional public maritime lines in Croatia, is supervised by the government-founded Agency for Coastal Line Traffic (Agencija za obalni linijski promet). Croatia has about 50 inhabited islands along its coast (most of which are reached from either Zadar or Split), which means that there is a large number of local car ferry, conventional ship and catamaran connections. The vast majority of Croatian islands have a road network and several ports for public transport, usually a single ferry port and one or more additional ports mostly located near bay settlements and served exclusively by conventional ships and catamarans. According to sailing schedules, or in extraordinary conditions, conventional and catamaran ships can also serve ferry ports. There is also a very small number of car-free islands that are accessible only by conventional ship or catamaran services, such as Silba in northern Dalmatia.
Among national ferry lines, the leaders in terms of the numbers of transported passengers and vehicles are the line between Split and Supetar on the island of Brač (central Dalmatia) and the line between Valbiska (island of Krk) and Merag (island of Cres) in the northern Kvarner Gulf. The ferry line between Zadar and Preko on the island of Ugljan (northern Dalmatia) is the most frequent one in Croatia and in the rest of the Adriatic; in the summer sailing schedule there are around 20 departures per day in each direction. The longest ferry line in Croatia is Zadar - Ist - Olib - Silba (passenger service only) - Premuda - Mali Lošinj (), while the shortest one is between Biograd na Moru and Tkon on the island of Pašman (), both operating in northern Dalmatia.
Almost all ferry lines in Croatia are operated by the state-owned shipping company Jadrolinija, except the ferry service between Stinica and Mišnjak on the island of Rab (Kvarner Gulf area), which is operated by the company Rapska Plovidba d.d. Catamaran and passenger ship services are operated by Jadrolinija and several other companies such as "Krilo - Kapetan Luka", "G&V Line Iadera", Tankerska plovidba and "Miatours d.o.o.". Jadrolinija alone provides a total of 34 national lines with almost 600 departures per day during the summer tourist season, when the number of ferry, conventional ship and catamaran lines on the routes with the highest demand is significantly higher compared to the off-season period.
International routes
With its largest vessels, Jadrolinija connects Croatia with Italy by operating the international cross-Adriatic routes Split - Ancona, Zadar - Ancona and Dubrovnik - Bari. The ferry line between Split and Ancona is also operated by the Italian operator SNAV.
River transport
Croatia is also on the important Danube waterway which connects Eastern and Central Europe. The major Danube port is Vukovar, but there are also some smaller ports in Osijek, Sisak and Slavonski Brod.
Navigable rivers:
Danube (E 80) - 137.5 km from entering Croatia near Batina to its exit near Ilok; class VIc
Sava (E 80–12) - 383.2 km from Sisak until it exits Croatia near Gunja; class II–IV
Drava (E 80–08) - 14 km from its mouth on the Danube to Osijek; class IV
Total waterway length (2021): 534.7 km
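As a quick consistency check (derived arithmetic, not a figure from the source), the three navigable stretches listed above sum to the stated total:
\[ 137.5 + 383.2 + 14 = 534.7\ \text{km} \]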
Pipelines
The projected capacity of the oil pipeline is 34 million tons of oil per year, and the installed capacity is 20 million tons per year. The system was built for the needs of refineries in Croatia, Slovenia, Serbia and Bosnia and Herzegovina, as well as users in Hungary, the Czech Republic and Slovakia. The total capacity of the storage space today is 2,100,000 m3 for crude oil and 242,000 m3 for petroleum products. The pipeline is long and it is fully controlled by JANAF. The system consists of the reception and dispatch terminal at Omišalj on the island of Krk, with two berths for tankers and storage space for oil and derivatives; receiving and dispatching terminals in Sisak, Virje and Slavonski Brod, with oil storage at the Sisak and Virje terminals; and the Žitnjak terminal in Zagreb for the storage of petroleum products, with railway and truck transfer stations for the delivery, reception and dispatch of derivatives.
Natural gas is transported by Plinacro, which operates of the transmission system in 19 counties, with more than 450 overhead transmission system facilities, including a compressor station and 156 metering and reduction stations through which gas is delivered to system users. The system houses the Okoli underground storage facility with a working volume of 553 million cubic meters of natural gas.
Public transport
Public transport within most of the largest cities (and their suburbs and satellite towns) in Croatia is mostly provided by city buses owned and operated by municipal companies such as Zagrebački električni tramvaj in Zagreb, Promet Split in Split, Autotrolej d.o.o. in Rijeka, Liburnija Zadar in Zadar and Gradski Prijevoz Putnika d.o.o. in Osijek.
In addition to city buses, the cities of Zagreb and Osijek have tram networks. Tram lines in Zagreb are operated by Zagrebački električni tramvaj (which also operates a single funicular line, mostly for tourist purposes, and a gondola lift system), while the tram lines in Osijek are operated by Gradski Prijevoz Putnika d.o.o. The tram network in the capital, Zagreb, is far more extensive than the one in Osijek.
See also
Croatian car number plates
Transport in Zagreb
Hrvatske autoceste
Croatian Railways
List of E-roads in Croatia
References
Foreign relations of Croatia
The Republic of Croatia is a sovereign country at the crossroads of Central Europe, Southeast Europe, and the Mediterranean that declared its independence from Yugoslavia on 25 June 1991. Croatia is a member of the European Union (EU), the United Nations (UN), the Council of Europe, NATO, the World Trade Organization (WTO), the Union for the Mediterranean and a number of other international organizations. Croatia has established diplomatic relations with 187 countries. The president and the Government, through the Ministry of Foreign and European Affairs, co-operate in the formulation and implementation of foreign policy.
The main objectives of Croatian foreign policy during the 1990s were gaining international recognition and joining the United Nations. These objectives were achieved by 2000, and the main goals became NATO and EU membership. Croatia fulfilled these goals in 2009 and 2013 respectively. Current Croatian goals in foreign policy are: positioning within the EU institutions and in the region, cooperation with NATO partners and strengthening multilateral and bilateral cooperation worldwide.
History
The first native Croatian ruler recognised by the Pope was duke Branimir, who received papal recognition from Pope John VIII on 7 June 879. Tomislav was the first king of Croatia, noted as such in a letter of Pope John X in 925.
The maritime Republic of Ragusa (1358–1808) maintained widespread diplomatic relations with the Ottoman Empire, the Republic of Venice, the Papal States and other states. The diplomatic relations of the Republic of Ragusa are often perceived as a historical inspiration for contemporary Croatian diplomacy. During the Wars of the Holy League, Ragusa avoided alignment with either side in the conflict, rejecting Venetian calls to join the Holy League.
Antun Mihanović, author of the anthem of Croatia, spent over 20 years as a consul of the Austrian Empire in Belgrade (Principality of Serbia), Bucharest (Wallachia) and Istanbul (Ottoman Empire), starting in 1836. The Yugoslav Committee, a political interest group formed by South Slavs from Austria-Hungary during World War I, petitioned the Allies of World War I and participated in international events such as the Congress of Oppressed Nationalities of the Austro-Hungarian Empire.
The Association for the Promotion of the League of Nations Values was active in Zagreb in the interwar period organizing lectures by Albert Thomas, Goldsworthy Lowes Dickinson and Ludwig Quidde.
World War II-era Axis puppet Independent State of Croatia maintained diplomatic missions in several countries in Europe.
Socialist Republic of Croatia within Yugoslavia
While each constitution of Yugoslavia defined foreign affairs as a federal-level issue, over the years the Yugoslav constituent republics played an increasingly prominent role in either defining this policy or pursuing their own initiatives. A number of diplomats from Croatia gained significant experience in the service of the prominent Cold War-era Yugoslav diplomacy.
In June 1943 Vladimir Velebit became the point of contact for foreign military missions in their dealings with the Yugoslav Partisans. Ivan Šubašić (1944–1945), Josip Smodlaka (NKOJ: 1943–1945), Josip Vrhovec (1978–1982) and Budimir Lončar (1987–1991) led the federal-level Ministry of Foreign Affairs, while numerous Croatian diplomats served in Yugoslav embassies or multilateral organizations. In 1956 the Brijuni archipelago in the People's Republic of Croatia hosted the Brioni Meeting, one of the major early initiatives leading to the establishment of the Non-Aligned Movement. Between 1960 and 1967 Vladimir Velebit was executive secretary of the United Nations Economic Commission for Europe. During the Croatian Spring, the Croatian economist Hrvoje Šošić argued for the separate admission of the Socialist Republic of Croatia into the United Nations, similar to the membership of the Ukrainian and Byelorussian Soviet Socialist Republics, which led to his imprisonment. In 1978, Croatia together with SR Slovenia joined the newly established Alps-Adriatic Working Group. The breakup of Yugoslavia led to mass transfers of experts from federal institutions, enabling the post-Yugoslav states to establish their own diplomatic services, primarily by employing former Yugoslav cadres.
The 2001 Agreement on Succession Issues of the Former Socialist Federal Republic of Yugoslavia formally assigned to Croatia a portion of the diplomatic and consular properties of the previous federation.
Foreign policy since independence
On 17 December 1991 the European Economic Community adopted the "Common Position for the recognition of the Yugoslav Republics", requesting that Yugoslav republics wishing to gain recognition accept provisions of international law protecting human rights and the rights of national minorities, in the hope that credible guarantees might remove incentives for violent confrontation. Later that month the Croatian Parliament introduced the Constitutional Act on the Rights of National Minorities in the Republic of Croatia, opening the way for the collective recognition by the Community on 15 January 1992. Croatia maintained some links beyond the Euro-Atlantic world via its observer status in the Non-Aligned Movement, which it already enjoyed at the 10th Summit of the Non-Aligned Movement in Jakarta, Indonesia.
Following the international recognition of Croatia in 1992, the country faced the Croatian War of Independence between 1992 and 1995. A significant part of the country was outside the control of the central government following the declaration of the self-proclaimed, unrecognized Republic of Serbian Krajina. In 1992 the signing of the Sarajevo Agreement brought a ceasefire that allowed the deployment of UNPROFOR in the country. Diplomatic efforts produced unsuccessful proposals, including the Daruvar Agreement and the Z-4 Plan. In 1995 the UNCRO mission took over the UNPROFOR mandate, yet soon afterwards Operation Storm brought a decisive victory for the Croatian Army, with only Eastern Slavonia, Baranja and Western Syrmia initially remaining as a rump territory of Krajina. A diplomatic solution that avoided conflict in Eastern Slavonia was reached on 12 November 1995 with the signing of the Erdut Agreement, with significant support and facilitation from the international community (primarily the United States, together with the United Nations and various European actors). The temporary UNTAES administration over the region opened the way for the signing of the Dayton Agreement, which ended the Bosnian War, and for the 1996 Agreement on Normalization of Relations between the Federal Republic of Yugoslavia and the Republic of Croatia.
With the resolution of some of the major bilateral issues arising from the Yugoslav Wars, Croatian foreign policy focused on greater Euro-Atlantic integration, mainly entry into the European Union and NATO. Progress was nevertheless slow between 1996 and 1999, with rising concerns over authoritarian tendencies in the country. In order to gain access to European and trans-Atlantic institutions, Croatia had to undo many negative effects of the breakup of Yugoslavia and the war that ensued, and to improve and maintain good relations with its neighbours. Croatia had an uneven record in these areas between 1996 and 1999 under the right-wing HDZ government, inhibiting its relations with the European Union and the United States. In 1997 United States diplomacy even called upon its European partners to suspend Croatia from the Council of Europe for as long as the country failed to show adequate respect for human and minority rights. The lack of improvement in these areas severely hindered Croatia's prospects for further Euro-Atlantic integration. Progress in the areas of Dayton, Erdut and refugee returns was evident in 1998, but it was slow and required intensive international engagement. Croatia's unsatisfactory performance in implementing broader democratic reforms in 1998 raised questions about the ruling party's commitment to basic democratic principles and norms. Areas of concern included restrictions on freedom of speech, one-party control of public TV and radio, repression of independent media, unfair electoral regulations, a judiciary that was not fully independent, and a lack of human and civil rights protection.
Following the death of President Franjo Tuđman in 1999, the 2000 Croatian parliamentary election and corresponding regional changes such as the overthrow of Slobodan Milošević, the European Union organized the 2000 Zagreb Summit and the 2003 Thessaloniki Summit, at which a European integration perspective was recognized for all the countries of the region. The new SDP-led centre-left coalition government slowly relinquished control over public media companies and did not interfere with freedom of speech and independent media, though it did not complete the process of making Croatian Radiotelevision independent. Judicial reform also remained a pending issue. The government's foreign relations were severely affected by its hesitance and stalling over the extradition of the Croatian general Janko Bobetko to the International Criminal Tribunal for the former Yugoslavia (ICTY), and by its inability to take general Ante Gotovina into custody for questioning by the court. Nevertheless, Croatia joined NATO's Partnership for Peace programme in May 2000 and the World Trade Organization in July 2000, signed a Stabilization and Association Agreement with the EU in October 2001, entered the Membership Action Plan in May 2002, and joined the Central European Free Trade Agreement (CEFTA) in December 2002. The EU membership application was the last major international undertaking of the Račan government, which submitted a 7,000-page report in reply to the questionnaire of the European Commission. Accession negotiations were opened once full cooperation with the Hague Tribunal was achieved in October 2005. Croatian president Stjepan Mesić participated in NAM conferences in Havana in 2006 and Sharm el-Sheikh in 2009, using the country's post-Yugoslav links with the Third World in its successful campaign for the Eastern European seat on the United Nations Security Council in 2008–2009 (in open competition with the Czech Republic, a member of both the EU and NATO).
Refugee returns accelerated from 1999, reached a peak in 2000, and then decreased slightly in 2001 and 2002. The OSCE Mission to Croatia, focusing on the areas formerly governed by UNTAES, continued to monitor human rights and the return of refugees until December 2007, with the OSCE office in Zagreb finally closing in 2012. Croatian Serbs continued to face problems with the restitution of property and acceptance into reconstruction assistance programmes. Combined with the lack of economic opportunities in the rural areas of the former Krajina, the return process was only partial.
Accession to the European Union
At the time of Croatia's application to the European Union, three EU member states had yet to ratify the Stabilization and Association Agreement: the United Kingdom, the Netherlands and Italy. The new Sanader government, elected in the 2003 elections, repeated assurances that Croatia would fulfill the outstanding political obligations and expedited the extradition of several ICTY indictees. On 20 April 2004 the European Commission issued a positive opinion on Croatia's replies to the questionnaire. The country was finally accepted as an EU candidate in July 2004. Italy and the United Kingdom ratified the Stabilization and Association Agreement shortly thereafter, while the ten EU member states admitted to membership that year ratified it together at a 2004 European summit. In December 2004, the EU leaders announced that accession negotiations with Croatia would start on 17 March 2005, provided that the Croatian government cooperated fully with the ICTY. The main issue, the flight of general Gotovina, however, remained unresolved, and despite the agreement on an accession negotiation framework, the negotiations did not begin in March 2005. On 4 October 2005 Croatia finally received the green light for accession negotiations after the Chief Prosecutor of the ICTY, Carla Del Ponte, officially stated that Croatia was fully cooperating with the Tribunal; this had been the main condition demanded by EU foreign ministers for opening accession negotiations. The ICTY called upon other southern European states to follow Croatia's example. Thanks to the consistent position of Austria during the meeting of EU foreign ministers, a long period of instability and questioning of the Croatian government's determination to extradite alleged war criminals ended successfully. Croatian Prime Minister Ivo Sanader declared that full cooperation with the Hague Tribunal would continue. The accession process was further complicated by the insistence of Slovenia, an EU member state, that the two countries' border issues be resolved prior to Croatia's accession to the EU.
Croatia finished accession negotiations on 30 June 2011, and on 9 December 2011, signed the Treaty of Accession. A referendum on EU accession was held in Croatia on 22 January 2012, with 66% of participants voting in favour of joining the Union. The ratification process was concluded on 21 June 2013, and entry into force and accession of Croatia to the EU took place on 1 July 2013.
Current events
The main objectives of Croatian foreign policy are positioning within EU institutions and in the region, cooperation with NATO partners, and the strengthening of multilateral and bilateral cooperation.
Government officials in charge of foreign policy include the Minister of Foreign and European Affairs, currently Gordan Grlić-Radman, and the President of the Republic, currently Zoran Milanović.
Croatia has established diplomatic relations with 186 countries (see List of diplomatic relations of Croatia). As of 2009, Croatia maintains a network of 51 embassies, 24 consulates and eight permanent diplomatic missions abroad. Furthermore, there are 52 foreign embassies and 69 consulates in the Republic of Croatia in addition to offices of international organizations such as the European Bank for Reconstruction and Development, International Organization for Migration, Organization for Security and Co-operation in Europe (OSCE), World Bank, World Health Organization, International Criminal Tribunal for the former Yugoslavia (ICTY), United Nations Development Programme, United Nations High Commissioner for Refugees and UNICEF.
International organizations
The Republic of Croatia participates in the following international organizations:
CE, CEI, EAPC, EBRD, ECE, EU, FAO, G11, IADB, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IFAD, IFC, IFRCS, IHO, ILO, IMF, IMO, Inmarsat, Intelsat, Interpol, IOC, IOM, ISO, ITU, ITUC, NAM (observer), NATO, OAS (observer), OPCW, OSCE, PCA, PFP, SECI, UN, UNAMSIL, UNCTAD, UNESCO, UNIDO, UNMEE, UNMOGIP, UPU, WCO, WEU (associate), WHO, WIPO, WMO, WToO, WTO
Croatia also maintains a Permanent Representative to the United Nations.
Foreign support
Croatia receives support from donor programs of:
European Bank for Reconstruction and Development (EBRD)
European Union
International Bank for Reconstruction and Development
International Monetary Fund
USAID
Between 1991 and 2003, the EBRD directly invested a total of €1,212,039,000 in projects in Croatia.
In 1998, U.S. support to Croatia came through the Southeastern European Economic Development Program (SEED), whose funding in Croatia totaled $23.25 million. More than half of that money was used to fund programs encouraging sustainable returns of refugees and displaced persons. About one-third of the assistance was used for democratization efforts, and another 5% funded financial sector restructuring.
In 2003 USAID considered Croatia to be on a "glide path for graduation" along with Bulgaria. Its 2002/2003/2004 funding included around $10 million for economic development, up to $5 million for the development of democratic institutions, about $5 million for the return of population affected by war, and between $2 million and $3 million for the "mitigation of adverse social conditions and trends". A rising amount of funding, slightly under one million dollars, was allocated to cross-cutting anti-corruption programs.
The European Commission has proposed to assist Croatia's efforts to join the European Union with 245 million euros from PHARE, ISPA and SAPARD aid programs over the course of 2005 and 2006.
International disputes
Relations with neighbouring states have normalized somewhat since the breakup of Yugoslavia. Work on political and economic cooperation in the region has begun, both bilaterally and, since 1999, within the Stability Pact for South Eastern Europe.
Bosnia and Herzegovina
Discussions continue between Croatia and Bosnia and Herzegovina on various sections of their border, which is the longest border either country shares with another state.
Sections of the Una river and villages at the base of Mount Plješevica are in Croatia, while some are in Bosnia, which causes an excessive number of border crossings on a single route and impedes any serious development in the region. The Zagreb-Bihać-Split railway line is still closed for major traffic due to this issue.
The border on the Una river between Hrvatska Kostajnica on the northern, Croatian side of the river, and Bosanska Kostajnica on the southern, Bosnian side, is also being discussed. A river island between the two towns is under Croatian control, but is also claimed by Bosnia. A shared border crossing point has been built and has been functioning since 2003, and is used without hindrance by either party.
The Herzegovinian municipality of Neum in the south makes the southernmost part of Croatia an exclave and the two countries are negotiating special transit rules through Neum to compensate for that. Recently Croatia has opted to build a bridge to the Pelješac peninsula to connect the Croatian mainland with the exclave but Bosnia and Herzegovina has protested that the bridge will close its access to international waters (although Croatian territory and territorial waters surround Bosnian-Herzegovinian territory and waters completely) and has suggested that the bridge must be higher than 55 meters for free passage of all types of ships. Negotiations are still being held.
Italy
The relations between Croatia and Italy have been largely cordial and friendly, although occasional incidents do arise on issues such as the Istrian–Dalmatian exodus or the Ecological and Fisheries Protection Zone.
Montenegro
Croatia and Montenegro have a largely latent border dispute over the Prevlaka peninsula.
Serbia
The border between Croatia and Serbia in the area of the Danube is disputed, although the issue is not considered of the highest priority for either country in their bilateral relations. It therefore only occasionally enters the public debate, with other open issues higher on the agenda, yet some commentators fear that it may at some point be used as an asymmetric pressure tool in the accession of Serbia to the European Union. While Serbia holds that the thalweg of the Danube valley and the centerline of the river represent the international border between the two countries, Croatia disagrees and claims that the international border lies along the boundaries of the cadastral municipalities located along the river, departing from the river's course at several points. The cadastre-based boundary reflects the course of the Danube as it existed in the 19th century, before meandering and hydrotechnical engineering works altered its course. The size of the disputed territory is reported variously; it is an uninhabited area of forests and islands. Croatian and Serbian authorities have made only occasional attempts to resolve the issue, with the establishment of a joint commission that rarely met, and a 2018 statement by the presidents of the two countries that the issue would be brought to international arbitration if no agreement was reached by 2020.
Slovenia
Croatia and Slovenia have several land and maritime boundary disputes, mainly in the Gulf of Piran, regarding Slovenian access to international waters, a small number of pockets of land on the right-hand side of the river Dragonja, and the area around the Sveta Gera peak.
Slovenia has also disputed Croatia's claim to establish the Ecological and Fisheries Protection Zone, an economic zone in the Adriatic.
Other issues that have yet to be fully resolved include:
Croatian depositors' savings in the former Ljubljanska banka
Diplomatic relations
For the countries with which Croatia maintains diplomatic relations, see the List of diplomatic relations of Croatia.
See also
Croatian passport
List of diplomatic missions in Croatia
List of diplomatic missions of Croatia
Visa requirements for Croatian citizens
List of diplomatic relations of Croatia
Foreign relations of Yugoslavia
References
External links
Ministry of Foreign Affairs and European Integration
Government of the Republic of Croatia
EBRD and Croatia
Stability Pact for South Eastern Europe
5584 | https://en.wikipedia.org/wiki/History%20of%20Cuba | History of Cuba | Christopher Columbus mistakenly thought that Cuba was Cipango, the fabled country of wealth, pearls, precious stones, and spices that Marco Polo said was located approximately 1500 miles off the coast of India. As a result, he altered his course to the southwest, and on October 28, 1492, he landed in Cuba. The island of Cuba was inhabited by various Amerindian cultures prior to the arrival of the explorer Christopher Columbus in 1492. After his arrival, Spain conquered Cuba and appointed Spanish governors to rule in Havana. The administrators in Cuba were subject to the Viceroy of New Spain and the local authorities in Hispaniola. In 1762–63, Havana was briefly occupied by Britain, before being returned to Spain in exchange for Florida. A series of rebellions between 1868 and 1898, led by General Máximo Gómez, failed to end Spanish rule and claimed the lives of 49,000 Cuban guerrillas and 126,000 Spanish soldiers. However, the Spanish–American War resulted in a Spanish withdrawal from the island in 1898, and following three-and-a-half years of subsequent US military rule, Cuba gained formal independence in 1902.
In the years following its independence, the Cuban republic saw significant economic development, but also political corruption and a succession of despotic leaders, culminating in the overthrow of the dictator Fulgencio Batista by the 26th of July Movement, led by Fidel Castro, during the 1953–1959 Cuban Revolution. The new government aligned with the Soviet Union and embraced communism. In the early 1960s, Castro's regime withstood invasion, faced nuclear Armageddon, and experienced a civil war that included Dominican support for regime opponents. Following the Warsaw Pact invasion of Czechoslovakia (1968), Castro publicly declared Cuba's support. His speech marked the start of Cuba's complete absorption into the Eastern Bloc. During the Cold War, Cuba also supported Soviet policy in Afghanistan, Poland, Angola, Ethiopia, Nicaragua, and El Salvador. The Cuban economy was mostly supported by Soviet subsidies.
With the dissolution of the USSR in 1991 Cuba was plunged into a severe economic crisis known as the Special Period that ended in 2000 when Venezuela began providing Cuba with subsidized oil. The country has been politically and economically isolated by the United States since the Revolution, but has gradually gained access to foreign commerce and travel as efforts to normalise diplomatic relations have progressed. Domestic economic reforms are also beginning to tackle existing economic problems which arose in the aftermath of the special period (i.e. the introduction of the dual currency system).
Pre-Columbian (to 1500)
Cuba's earliest known human inhabitants arrived on the island in the 4th millennium BC. The oldest known Cuban archeological site, Levisa, dates from approximately 3100 BC. A wider distribution of sites dates from after 2000 BC, most notably represented by the Cayo Redondo and Guayabo Blanco cultures of western Cuba. These neolithic cultures used ground stone and shell tools and ornaments, including the dagger-like gladiolitos. The Cayo Redondo and Guayabo Blanco cultures lived a subsistence lifestyle based on fishing, hunting and collecting wild plants.
The indigenous Guanajatabey, who had inhabited Cuba for centuries, were driven to the far west of the island by the arrival of subsequent waves of migrants, including the Taíno and Ciboney. These peoples had migrated north along the Caribbean island chain. The Taíno and Ciboney were part of a cultural group commonly called the Arawak, who inhabited parts of northeastern South America prior to the arrival of Europeans. Initially, they settled at the eastern end of Cuba, before expanding westward across the island. The Spanish Dominican clergyman and writer Bartolomé de las Casas estimated that the Taíno population of Cuba had reached 350,000 by the end of the 15th century. The Taíno cultivated the yuca root, harvesting and baking it to produce cassava bread. They also grew cotton and tobacco, and ate maize and sweet potatoes.
Spanish conquest
Christopher Columbus, on his first Spanish-sponsored voyage to the Americas in 1492, sailed south from what is now the Bahamas to explore the northeast coast of Cuba and the northern coast of Hispaniola. Columbus, who was searching for a route to India, believed the island to be a peninsula of the Asian mainland. Columbus arrived at Cuba on October 27, 1492, and he landed on October 28, 1492, at Puerto de Nipe.
During a second voyage in 1494, Columbus passed along the south coast, landing at various inlets including what was to become Guantánamo Bay. With the Papal Bull of 1493, Pope Alexander VI commanded Spain to conquer and convert the pagans of the New World to Catholicism. The Spanish began to create permanent settlements on the island of Hispaniola, east of Cuba, soon after Columbus' arrival in the Caribbean, but the coast of Cuba was not fully mapped by Europeans until 1508, by Sebastián de Ocampo. In 1511, Diego Velázquez de Cuéllar set out from Hispaniola to form the first Spanish settlement in Cuba, with orders from Spain to conquer the island. The settlement was at Baracoa, but the new settlers were greeted with stiff resistance from the local Taíno population. The Taínos were initially organized by cacique (chieftain) Hatuey, who had himself relocated from Hispaniola to escape Spanish rule. After a prolonged guerrilla campaign, Hatuey and successive chieftains were captured and burnt alive, and within three years the Spanish had gained control of the island. In 1514, a south coast settlement was founded in what was to become Havana. The current city was founded in 1519.
Clergyman Bartolomé de las Casas observed a number of massacres initiated by the invaders, notably the massacre near Camagüey of the inhabitants of Caonao. According to his account, some three thousand villagers had traveled to Manzanillo to greet the Spanish with food, and were "without provocation, butchered". The surviving indigenous groups fled to the mountains or the small surrounding islands before being captured and forced into reservations. One such reservation was Guanabacoa, today a suburb of Havana.
In 1513, Ferdinand II of Aragon issued a decree establishing the encomienda land settlement system that was to be incorporated throughout the Spanish Americas. Velázquez, who had become Governor of Cuba, was given the task of apportioning the land and the indigenous peoples to groups throughout the new colony. The scheme was not a success, however, as the natives either succumbed to diseases brought from Spain, such as measles and smallpox, or simply refused to work, preferring to move into the mountains. Desperate for labor for the new agricultural settlements, the conquistadors sought slaves from the surrounding islands and the continental mainland. Velázquez's lieutenant Hernán Cortés launched the Spanish conquest of the Aztec Empire from Cuba, sailing from Santiago to the Yucatán Peninsula. However, these new arrivals also dispersed into the wilderness or died of disease.
Despite the difficult relations between the natives and the new Europeans, some cooperation was in evidence. The natives showed the Spanish how to nurture tobacco and consume it in the form of cigars. There were also many unions between the largely male Spanish colonists and indigenous women. Modern studies have revealed traces of DNA associated with physical traits similar to those of Amazonian tribes in individuals throughout Cuba, although the native population was largely destroyed as a culture and civilization after 1550. Under the Spanish New Laws of 1552, indigenous Cubans were freed from the encomienda, and seven towns for indigenous peoples were set up. There are Cuban families of indigenous (Taíno) descent in several places, mostly in eastern Cuba. The local indigenous population also left its mark on the language, with some 400 Taíno terms and place-names surviving to the present day. For example, Cuba and Havana were derived from Classic Taíno, and indigenous words such as tobacco, hurricane and canoe were transferred to English.
Colonial period
The Spanish established sugar and tobacco as Cuba's primary products, and the island soon supplanted Hispaniola as the prime Spanish base in the Caribbean. African slaves were imported to work the plantations as field labor. However, restrictive Spanish trade laws made it difficult for Cubans to keep up with the 17th and 18th century advances in processing sugar cane until the Haitian Revolution saw French planters flee to Cuba. Spain also restricted Cuba's access to the slave trade, instead issuing foreign merchants asientos to conduct it on Spain's behalf, and imposed regulations on trade with Cuba. The resulting stagnation of economic growth was particularly pronounced in Cuba because of its great strategic importance in the Caribbean, and the stranglehold that Spain kept on it as a result.
Colonial Cuba was a frequent target of buccaneers, pirates and French corsairs. In response to repeated raids, defenses were bolstered throughout the island during the 16th century. In Havana, the fortress of Castillo de los Tres Reyes Magos del Morro was built to deter potential invaders. Havana's inability to resist invaders was dramatically exposed in 1628, when a Dutch fleet led by Piet Heyn plundered the Spanish ships in the city's harbor. In 1662, English pirate Christopher Myngs captured and briefly occupied Santiago de Cuba on the eastern part of the island.
Nearly a century later, the British Royal Navy launched another invasion, capturing Guantánamo Bay in 1741 during the War of Jenkins' Ear. Admiral Edward Vernon saw his 4,000 occupying troops capitulate to raids by Spanish troops, and more critically, an epidemic, forcing him to withdraw his fleet to British Jamaica. In the War of the Austrian Succession, the British carried out unsuccessful attacks against Santiago de Cuba in 1741 and again in 1748. Additionally, a skirmish between British and Spanish naval squadrons occurred near Havana in 1748.
The Seven Years' War, which erupted in 1754 across three continents, eventually arrived in the Spanish Caribbean. In 1762 a British expedition of five warships and 4,000 troops set out from Portsmouth to capture Cuba. The British arrived on 6 June, and by August had Havana under siege. When Havana surrendered, the admiral of the British fleet, George Keppel, entered the city as a new colonial governor and took control of the whole western part of the island. The arrival of the British immediately opened up trade with their North American and Caribbean colonies, causing a rapid transformation of Cuban society. Though Havana, which had become the third-largest city in the Americas, was to enter an era of sustained development and increasingly close ties with North America during this period, the British occupation proved short-lived. Pressure from London sugar merchants, who feared a decline in sugar prices, forced negotiations with the Spanish over colonial territories. Less than a year after Havana was seized, the Peace of Paris was signed by the three warring powers, ending the Seven Years' War. The treaty gave Britain Florida in exchange for Cuba. In 1781, General Bernardo de Gálvez, the Spanish governor of Louisiana, reconquered Florida for Spain with Mexican, Puerto Rican, Dominican, and Cuban troops.
In the 19th century, Cuba became the most important world producer of sugar, thanks to the expansion of slavery and a relentless focus on improving sugar technology. Use of modern refining techniques was especially important because the British Slave Trade Act 1807 abolished the slave trade in the British Empire. The British government set about trying to eliminate the transatlantic slave trade. Under British diplomatic pressure, in 1817 Spain agreed to abolish the slave trade from 1820 in exchange for a payment from London. Cubans rushed to import further slaves in the time legally left to them. Over 100,000 new slaves were imported from Africa between 1816 and 1820. In spite of the new restrictions a large-scale illegal slave trade continued to flourish in the following years. Many Cubans were torn between desire for the profits generated by sugar and a repugnance for slavery. By the end of the 19th century, slavery was abolished.
When Spain opened Cuba's trade ports, the island quickly became a popular destination for trade. Cubans began to use water mills, enclosed furnaces, and steam engines to produce higher-quality sugar at a much more efficient pace. The boom in Cuba's sugar industry in the 19th century made it necessary for the country to improve its transportation infrastructure. Many new roads were built, and old roads were quickly repaired. Railroads were built relatively early, easing the collection and transportation of perishable sugar cane. By 1860, Cuba was devoted to growing sugar and had to import all other necessary goods. Cuba was particularly dependent on the United States, which bought 82 percent of its sugar. In 1820, Spain abolished the slave trade, hurting the Cuban economy even more and forcing planters to buy more expensive, illegal, and "troublesome" slaves (as demonstrated by the slave rebellion on the Spanish ship Amistad in 1839).
Reformism, annexation, and independence (1800–1898)
In the early 19th century, three major political currents took shape in Cuba: reformism, annexation and independence. Spontaneous and isolated actions added a current of abolitionism. The 1776 Declaration of Independence by the Thirteen Colonies and the successes of the French Revolution of 1789 influenced early Cuban liberation movements, as did the successful revolt of black slaves in Haiti in 1791. One of the first of such movements in Cuba, headed by the free black Nicolás Morales, aimed at gaining equality between "mulatto and whites" and at the abolition of sales taxes and other fiscal burdens. Morales' plot was discovered in 1795 in Bayamo, and the conspirators were jailed.
Reform, autonomy and separatist movements
As a result of the political upheavals caused by the Iberian Peninsular War of 1807–1814 and of Napoleon's removal of Ferdinand VII from the Spanish throne in 1808, a western separatist rebellion emerged among the Cuban Creole aristocracy in 1809 and 1810. One of its leaders, Joaquín Infante, drafted Cuba's first constitution, declaring the island a sovereign state, presuming the rule of the country's wealthy, maintaining slavery as long as it was necessary for agriculture, establishing a social classification based on skin color and declaring Catholicism the official religion. This conspiracy also failed, and the main leaders were deported. In 1812 a mixed-race abolitionist conspiracy arose, organized by José Antonio Aponte, a free-black carpenter. He and others were executed.
The Spanish Constitution of 1812, and the legislation passed by the Cortes of Cádiz after it was set up in 1808, instituted a number of liberal political and commercial policies, which were welcomed in Cuba but also curtailed a number of older liberties. Between 1810 and 1814 the island elected six representatives to the Cortes, in addition to forming a locally elected Provincial Deputation. Nevertheless, the liberal regime and the Constitution proved ephemeral: Ferdinand VII suppressed them when he returned to the throne in 1814. By the end of the 1810s, some Cubans were inspired by the successes of Simón Bolívar in South America. Numerous secret societies emerged, most notably the "Soles y Rayos de Bolívar", founded in 1821 and led by José Francisco Lemus. It aimed to establish the free Republic of Cubanacán, and it had branches in five districts of the island.
In 1823 the society's leaders were arrested and condemned to exile. In the same year, King Ferdinand VII abolished constitutional rule in Spain yet again. As a result, the national militia of Cuba, established by the Constitution and a potential instrument for liberal agitation, was dissolved, a permanent executive military commission under the orders of the governor was created, newspapers were closed, elected provincial representatives were removed and other liberties suppressed.
This suppression, and the success of independence movements in the former Spanish colonies on the North American mainland, led to a notable rise of Cuban nationalism. A number of independence conspiracies developed during the 1820s and 1830s, but all failed. Among these were the "Expedición de los Trece" (Expedition of the 13) in 1826, the "Gran Legión del Aguila Negra" (Great Legion of the Black Eagle) in 1829, the "Cadena Triangular" (Triangular Chain) and the "Soles de la Libertad" (Suns of Liberty) in 1837. Leading national figures in these years included Félix Varela and Cuba's first revolutionary poet, José María Heredia.
Between 1810 and 1826, 20,000 royalist refugees from the Latin American Revolutions arrived in Cuba. They were joined by others who left Florida when Spain ceded it to the United States in 1819. These influxes strengthened loyalist pro-Spanish sentiments.
Antislavery and independence movements
In 1826 the first armed uprising for independence took place in Puerto Príncipe, led by Francisco de Agüero and Andrés Manuel Sánchez. Both were executed, becoming the first popular martyrs of the Cuban independence movement.
The 1830s saw a surge of activity from the reformist movement, whose main leader, José Antonio Saco, stood out for his criticism of Spanish despotism and of the slave trade. Nevertheless, Cubans remained deprived of the right to send representatives to the Spanish parliament, and Madrid stepped up repression.
Under British diplomatic pressure, the Spanish government had pledged to abolish slavery. In this context, Black revolts in Cuba increased, and were put down with mass executions. One of the most significant was the Conspiración de la Escalera (Ladder Conspiracy) of 1843–1844. The Ladder Conspiracy involved free and enslaved Black people, as well as white intellectuals and professionals. It is estimated that 300 Black and mixed-race persons died from torture, 78 were executed, over 600 were imprisoned and over 400 were expelled from the island. José Antonio Saco, one of Cuba's most prominent thinkers, was expelled.
Following the 1868–1878 rebellion of the Ten Years' War, all slavery was abolished by 1886. Slave traders looked for other sources of cheap labour, such as Chinese colonists and Indians from Yucatán. Another feature of the population was the number of Spanish-born colonists, known as peninsulares, who were mostly adult males; they constituted between ten and twenty per cent of the population between the middle of the 19th century and the great depression of the 1930s.
Possibility of annexation by the United States
Black unrest and attempts by the Spanish metropolis to abolish slavery motivated many Creoles to advocate Cuba's annexation by the United States, where slavery was still legal. Other Cubans supported the idea due to their desire for American-style economic development and democratic freedom. In 1805, President Thomas Jefferson considered annexing Cuba for strategic reasons, sending agents to the island to negotiate with Captain General Someruelos.
In April 1823, U.S. Secretary of State John Quincy Adams discussed the rules of political gravitation: "if an apple severed by the tempest from its native tree cannot choose but fall to the ground, Cuba, forcibly disjoined from its own unnatural connection with Spain, and incapable of self-support, can gravitate only towards the North American Union which by the same law of nature, cannot cast her off its bosom". He furthermore warned that "the transfer of Cuba to Great Britain would be an event unpropitious to the interest of this Union". Adams voiced concern that a country outside of North America would attempt to occupy Cuba.
On 2 December 1823, U.S. President James Monroe specifically addressed Cuba and other European colonies in his proclamation of the Monroe Doctrine. Cuba, located just a short distance from Key West, Florida, was of interest to the doctrine's founders, as they warned European forces to leave "America for the Americans".
The most outstanding attempts in support of annexation were made by the Venezuelan filibuster General Narciso López, who prepared four expeditions to Cuba in the US. The first two, in 1848 and 1849, failed before departure due to U.S. opposition. The third, made up of some 600 men, managed to land in Cuba and take the central city of Cárdenas, but failed eventually due to a lack of popular support. López's fourth expedition landed in Pinar del Río province with around 400 men in August 1851; the invaders were defeated by Spanish troops and López was executed.
Struggle for independence
In the 1860s, Cuba had two more liberal-minded governors, Serrano and Dulce, who encouraged the creation of a Reformist Party, despite the fact that political parties were forbidden. But they were followed by a reactionary governor, Francisco Lersundi, who suppressed all liberties granted by the previous governors and maintained a pro-slavery regime. On 10 October 1868, the landowner Carlos Manuel de Céspedes declared Cuban independence and freedom for his slaves. This began the Ten Years' War of 1868–1878. The Dominican Restoration War (1863–65) had brought to Cuba an unemployed mass of Dominican former soldiers who had served with the Spanish Army in the Dominican Republic before being evacuated to the island. Some of these former soldiers joined the new Revolutionary Army and provided its initial training and leadership.
With reinforcements and guidance from the Dominicans, the Cuban rebels defeated Spanish detachments, cut railway lines, and gained dominance over vast sections of the eastern portion of the island. The Spanish government used the Voluntary Corps to commit harsh acts against the Cuban rebels, and the Spanish atrocities fuelled the growth of insurgent forces; however, they failed to export the revolution to the west. On 11 May 1873, Ignacio Agramonte was killed by a stray bullet; Céspedes was killed on 27 February 1874. In 1875, Máximo Gómez began an invasion of Las Villas west of a fortified military line, or trocha, bisecting the island. The trocha was built between 1869 and 1872; the Spanish erected it to prevent Gómez from moving westward from Oriente province. It was the largest fortification built by the Spanish in the Americas.
Gómez was controversial in his calls to burn sugar plantations to harass the Spanish occupiers. After the American admiral Henry Reeve was killed in 1876, Gómez ended his campaign. By that year, the Spanish government had deployed more than 250,000 troops to Cuba, as the end of the Third Carlist War had freed up Spanish soldiers. On 10 February 1878, General Arsenio Martínez Campos negotiated the Pact of Zanjón with the Cuban rebels, and the rebel general Antonio Maceo's surrender on 28 May ended the war. Spain sustained 200,000 casualties, mostly from disease; the rebels lost 100,000–150,000 dead; and the island suffered over $300 million in property damage. The Pact of Zanjón promised the manumission of all slaves who had fought for Spain during the war, and slavery was legally abolished in 1880. However, dissatisfaction with the peace treaty led to the Little War of 1879–80.
Conflicts in the late 19th century (1886–1900)
Background
During the time of the so-called "Rewarding Truce", which encompassed the 17 years from the end of the Ten Years' War in 1878, fundamental changes took place in Cuban society. With the abolition of slavery in October 1886, former slaves joined the ranks of farmers and the urban working class. Most wealthy Cubans lost their rural properties, and many of them joined the urban middle class. The number of sugar mills dropped while their efficiency increased, with ownership concentrated in the hands of companies and the most powerful plantation owners. The numbers of campesinos and tenant farmers rose considerably. Furthermore, American capital began flowing into Cuba, mostly into the sugar and tobacco businesses and mining. By 1895, these investments totalled $50 million. Although Cuba remained Spanish politically, economically it became increasingly dependent on the United States.
These changes also entailed the rise of labour movements. The first Cuban labour organization, the Cigar Makers Guild, was created in 1878, followed by the Central Board of Artisans in 1879, and many more across the island. Abroad, a new trend of aggressive American influence emerged. Secretary of State James G. Blaine placed particular importance on the control of Cuba: "If ever ceasing to be Spanish, Cuba must necessarily become American and not fall under any other European domination".
Martí's Insurrection and the start of the war
After his second deportation to Spain in 1878, the pro-independence Cuban activist José Martí moved to the United States in 1881, where he began mobilizing the support of the Cuban exile community in Florida. He sought a revolution and Cuban independence from Spain, but also lobbied to oppose U.S. annexation of Cuba. Propaganda efforts by the Cuban Junta continued for years and intensified starting in 1895.
After deliberations with patriotic clubs across the United States, the Antilles and Latin America, the Partido Revolucionario Cubano (Cuban Revolutionary Party) was officially proclaimed on 10 April 1892, with the purpose of gaining independence for both Cuba and Puerto Rico. Martí was elected delegate, the highest party position. In Foner's words, "Martí's impatience to start the revolution for independence was affected by his growing fear that the United States would succeed in annexing Cuba before the revolution could liberate the island from Spain".
On 25 December 1894, three ships set sail for Cuba from Fernandina Beach, Florida, loaded with armed men and supplies. Two of the ships were seized by U.S. authorities in early January, but the preparations went ahead. The insurrection began on 24 February 1895, with uprisings across the island. The uprisings in the central part of the island, such as those in Ibarra, Jagüey Grande and Aguada, suffered from poor co-ordination and failed; the leaders were captured, some of them deported and some executed. In the province of Havana the insurrection was discovered before it got underway, and its leaders were detained. Thus, the insurgents further west in Pinar del Río were ordered to wait.
Martí, on his way to Cuba, gave the Proclamation of Montecristi in Santo Domingo, outlining the policy for Cuba's war of independence: the war was to be waged by blacks and whites alike; participation of all blacks was crucial for victory; Spaniards who did not object to the war effort should be spared, private rural properties should not be damaged; and the revolution should bring new economic life to Cuba.
On 1 and 11 April 1895, the main rebel leaders landed on two expeditions in Oriente: Major Antonio Maceo and 22 members near Baracoa and Martí, Máximo Gómez and four other members in Playitas. Around that time, Spanish forces in Cuba numbered about 80,000, including 60,000 Spanish and Cuban volunteers. The latter were a locally enlisted force that took care of most of the guard and police duties on the island. By December, 98,412 regular troops had been sent to the island and the number of volunteers had increased to 63,000 men. By the end of 1897, there were 240,000 regulars and 60,000 irregulars on the island. The revolutionaries were far outnumbered.
The rebels came to be nicknamed "Mambis" after a black Spanish officer, Juan Ethninius Mamby, who joined the Dominicans in the fight for independence in 1846. When the Ten Years' War broke out in 1868, some of the same soldiers were assigned to Cuba, importing what had by then become a derogatory Spanish slur. The Cubans adopted the name with pride.
After the Ten Years' War, possession of weapons by private individuals was prohibited in Cuba. Thus, one of the most serious and persistent problems for the rebels was a shortage of suitable weapons. This lack of arms forced them to utilise guerrilla tactics, using the environment, the element of surprise, fast horses and simple weapons such as machetes. Most of their firearms were acquired in raids on the Spaniards. Between 11 June 1895 and 30 November 1897, 60 attempts were made to bring weapons and supplies to the rebels from outside Cuba, but only one succeeded, largely due to British naval protection.
Escalation of the war
Martí was killed on 19 May 1895, but Máximo Gómez (a Dominican) and Antonio Maceo (a mulatto) fought on. Gómez used scorched-earth tactics, which entailed dynamiting passenger trains and burning the Spanish loyalists' property and sugar plantations—including many owned by Americans. By the end of June all of Camagüey was at war. Continuing west, Gómez and Maceo joined up with veterans of the 1868 war, Polish internationalists, General Carlos Roloff and Serafín Sánchez in Las Villas. In mid-September, representatives of the five Liberation Army Corps assembled in Jimaguayú to approve the Jimaguayú Constitution. This constitution established a central government, which grouped the executive and legislative powers into one entity, the Government Council, which was headed by Salvador Cisneros and Bartolomé Masó.
After a period of consolidation in the three eastern provinces, the liberation armies headed for Camagüey and then for Matanzas, outmanoeuvring and deceiving the Spanish Army. The revolutionaries defeated the Spanish general Arsenio Martínez Campos and killed his most trusted general at Peralejo. Campos tried the same strategy he had employed in the Ten Years' War, constructing a broad defensive belt across the island. This line, called the trocha, was intended to limit rebel activities to the eastern provinces, and consisted of a railroad, from Jucaro in the south to Moron in the north, on which armored railcars could travel. At various points along this railroad there were fortifications, posts and barbed wire; booby traps were placed at the locations most likely to be attacked.
For the rebels, it was essential to bring the war to the western provinces of Matanzas, Havana and Pinar del Río, where the island's government and wealth was located. In a successful cavalry campaign, overcoming the trochas, the rebels invaded every province. Surrounding all the larger cities and well-fortified towns, they arrived at the westernmost tip of the island on 22 January 1896.
Unable to defeat the rebels with conventional military tactics, the Spanish government sent Gen. Valeriano Weyler y Nicolau (nicknamed The Butcher), who reacted to these rebel successes by introducing methods of terror: periodic executions, mass exiles, and the destruction of farms and crops. These methods reached their height on 21 October 1896, when he ordered all countryside residents and their livestock to gather in various fortified areas and towns occupied by his troops. Hundreds of thousands of people had to leave their homes, creating appalling conditions of overcrowding. This was the first recorded and recognized use of concentration camps, in which non-combatants were removed from their land to deprive the enemy of succor and the internees were then subjected to dire conditions. It is estimated that this measure caused the death of at least one-third of Cuba's rural population. The forced relocation policy was maintained until March 1898.
Since the early 1880s, Spain had also been suppressing an independence movement in the Philippines, which was intensifying; Spain was thus now fighting two wars, which placed a heavy burden on its economy. In secret negotiations in 1896, Spain turned down the United States' offers to buy Cuba.
Maceo was killed on 7 December 1896. As the war continued, the major obstacle to Cuban success was weapons supply. Although weapons and funding came from within the United States, the supply operation violated American laws, which were enforced by the U.S. Coast Guard; of 71 resupply missions, only 27 got through.
In 1897, the liberation army maintained a privileged position in Camagüey and Oriente, where the Spanish only controlled a few cities. Spanish liberal leader Praxedes Sagasta admitted in May 1897: "After having sent 200,000 men and shed so much blood, we don't own more land on the island than what our soldiers are stepping on". The rebel force of 3,000 defeated the Spanish in various encounters, such as the battle of La Reforma and the surrender of Las Tunas on 30 August, and the Spaniards were kept on the defensive.
As stipulated at the Jimaguayú Assembly two years earlier, a second Constituent Assembly met in La Yaya, Camagüey, on 10 October 1897. The newly adopted constitution decreed that a military command be subordinated to civilian rule. The government was confirmed, naming Bartolomé Masó as president and Domingo Méndez Capote as vice president. Thereafter, Madrid decided to change its policy toward Cuba, replacing Weyler, drawing up a colonial constitution for Cuba and Puerto Rico, and installing a new government in Havana. But with half the country out of its control, and the other half in arms, the new government was powerless and rejected by the rebels.
USS Maine incident
The Cuban struggle for independence had captured the North American imagination for years and newspapers had been agitating for intervention with sensational stories of Spanish atrocities. Americans came to believe that Cuba's battle with Spain resembled the United States's Revolutionary War. North American public opinion was very much in favor of intervening for the Cubans.
In January 1898, a riot by Cuban-Spanish loyalists against the new autonomous government broke out in Havana, leading to the destruction of the printing presses of four local newspapers which had published articles critical of the Spanish Army. The U.S. Consul-General cabled Washington, fearing for the lives of Americans living in Havana. In response, the battleship USS Maine was sent to Havana. On 15 February 1898, the Maine was destroyed by an explosion, killing 268 crewmembers. The cause of the explosion has not been clearly established, but the incident focused American attention on Cuba, and President William McKinley and his supporters could not stop Congress from declaring war to "liberate" Cuba. In an attempt to appease the United States, the colonial government ended the forced relocation policy and offered negotiations with the independence fighters. However, the truce was rejected by the rebels and the concessions proved too late. Madrid asked other European powers for help; they refused.
On 11 April 1898, McKinley asked Congress for the authority to send U.S. Armed Forces troops to Cuba for the purpose of ending the civil war. On 19 April, Congress passed joint resolutions supporting Cuban independence and disclaiming any intention to annex Cuba, demanding Spanish withdrawal, and authorizing military force to help Cuban patriots gain independence. This included the Teller Amendment, proposed by Senator Henry Teller and passed unanimously, which stipulated that "the island of Cuba is, and by right should be, free and independent". The amendment disclaimed any intention on the part of the United States to exercise jurisdiction or control over Cuba for other than pacification reasons. War was declared on 20/21 April 1898.
Cuban Theatre of the Spanish–American War
Hostilities started hours after the declaration of war, when a U.S. contingent under Admiral William T. Sampson blockaded several Cuban ports. The Americans decided to invade Cuba in Oriente, where the Cuban rebels were able to co-operate with them. The first U.S. objective was to capture the city of Santiago de Cuba in order to destroy Linares' army and Cervera's fleet. To reach Santiago they had to pass through concentrated Spanish defences in the San Juan Hills. Between 22 and 24 June 1898 the Americans landed under General William R. Shafter at Daiquirí and Siboney and established a base. The port of Santiago became the main target of U.S. naval operations, and the American fleet attacking Santiago needed shelter from the summer hurricane season. Nearby Guantánamo Bay was chosen for this purpose and attacked on 6 June. The Battle of Santiago de Cuba, on 3 July 1898, was the largest naval engagement of the Spanish–American War, and resulted in the destruction of the Spanish Caribbean Squadron.
Resistance in Santiago consolidated around Fort Canosa, while major battles between Spaniards and Americans took place at Las Guasimas on 24 June, and at El Caney and San Juan Hill on 1 July, after which the American advance ground to a halt. Spanish troops successfully defended Fort Canosa, allowing them to stabilize their line and bar the entry to Santiago. The Americans and Cubans began a siege of the city, which surrendered on 16 July after the defeat of the Spanish Caribbean Squadron. Thus, Oriente fell under the control of Americans and the Cubans, but U.S. General Nelson A. Miles would not allow Cuban troops to enter Santiago, claiming that he wanted to prevent clashes between Cubans and Spaniards. Cuban General Calixto García, head of the mambi forces in the Eastern department, ordered his troops to hold their areas and resigned, writing a letter of protest to General Shafter.
After losing the Philippines and Puerto Rico, which had also been invaded by the United States, Spain sued for peace on 17 July 1898. On 12 August, the U.S. and Spain signed a protocol of peace, in which Spain agreed to relinquish Cuba. On 10 December 1898, the U.S. and Spain signed the formal Treaty of Paris, recognizing continuing U.S. military occupation. Although the Cubans had participated in the liberation efforts, the United States prevented Cuba from sending representatives to the Paris peace talks or signing the treaty, which set no time limit for U.S. occupation and excluded the Isle of Pines from Cuba. Although the U.S. president had no objection to Cuba's eventual independence, U.S. General William R. Shafter refused to allow Cuban General Calixto García and his rebel forces to participate in the surrender ceremonies in Santiago de Cuba.
U.S. occupation (1898–1902)
After the last Spanish troops left the island in December 1898, the government of Cuba was temporarily handed over to the United States on 1 January 1899. The first governor was General John R. Brooke. Unlike Guam, Puerto Rico, and the Philippines, the United States did not annex Cuba because of the restrictions imposed in the Teller Amendment.
Political changes
The U.S. administration was undecided on Cuba's future status. Once the island had been pried away from the Spaniards, the aim was to ensure that it moved into and remained within the U.S. sphere of influence. How this was to be achieved was a matter of intense discussion, and annexation was an option. Brooke set up a civilian government, placed U.S. governors in seven newly created departments, and named civilian governors for the provinces as well as mayors and representatives for the municipalities. Many Spanish colonial government officials were kept in their posts. The population was ordered to disarm and, ignoring the Mambi Army, Brooke created the Rural Guard and municipal police corps at the service of the occupation forces. Cuba's judicial powers and courts remained legally based on the codes of the Spanish government. Tomás Estrada Palma, Martí's successor as delegate of the Cuban Revolutionary Party, dissolved the party a few days after the signing of the Paris Treaty. The revolutionary Assembly of Representatives was also dissolved.
Economic changes
Before the United States officially took over the government, it had already begun cutting tariffs on American goods entering Cuba, without granting the same rights to Cuban goods going to the United States. Government payments had to be made in U.S. dollars. The Foraker Amendment prohibited the U.S. occupation government from granting privileges and concessions to American investors, to appease anti-imperialists during the occupational period. Despite this, the Cuban economy was soon dominated by American capital. By 1905 nearly 10% of Cuba's land area belonged to Americans. By 1902, American companies controlled 80% of Cuba's ore exports and owned most of the sugar and cigarette factories.
Immediately after the war, there were several serious barriers for foreign businesses attempting to operate in Cuba. The Joint Resolution of 1898, the Teller Amendment, and the Foraker Amendment threatened foreign investment. Eventually, Cornelius Van Horne of the Cuba Company, an early railroad company in Cuba, found a loophole in "revocable permits" justified by preexisting Spanish legislation that effectively allowed railroads to be built in Cuba. General Leonard Wood, the governor of Cuba and a noted annexationist, used this loophole to grant hundreds of franchises, permits, and other concessions to American businesses.
Once the legal barriers were overcome, American investments transformed the Cuban economy. Within two years of entering Cuba, the Cuba Company built a 350-mile railroad connecting the eastern port of Santiago to the existing railways in central Cuba. The company was the largest single foreign investment in Cuba for the first two decades of the twentieth century. By the 1910s it was the largest company in the country. The improved infrastructure allowed the sugar cane industry to spread to the previously underdeveloped eastern part of the country. As many small Cuban sugar cane producers were crippled with debt and damages from the war, American companies were able to quickly and cheaply take over the industry. At the same time, new productive units called centrales could grind up to 2,000 tons of cane a day, which made large-scale operations the most profitable. The large fixed cost of these centrales made them almost exclusively accessible to American companies with large capital stocks. Furthermore, the centrales required a large, steady flow of cane to remain profitable, which led to further consolidation. Cuban cane farmers who had formerly been landowners became tenants on company land. By 1902, 40% of the country's sugar production was controlled by Americans.
With American corporate interests firmly rooted in Cuba, the U.S. tariff system was adjusted accordingly to strengthen trade between the nations. The Reciprocity Treaty of 1903 lowered the U.S. tariff on Cuban sugar by 20%. This gave Cuban sugar a competitive edge in the American marketplace. At the same time, it granted equal or greater concessions on most items imported from the United States. Cuban imports of American goods went from $17 million in the five years before the war, to $38 million in 1905, and eventually to over $200 million in 1918. Likewise, Cuban exports to the United States reached $86 million in 1905 and rose to nearly $300 million in 1918.
Elections and independence
Popular demands for a Constituent Assembly soon emerged. In December 1899, the U.S. War Secretary assured the Cuban populace that the occupation was temporary, that municipal and general elections would be held, that a Constituent Assembly would be set up, and that sovereignty would be handed to Cubans. Brooke was replaced by General Leonard Wood to oversee the transition. Parties were created, including the Cuban National Party, the Federal Republican Party of Las Villas, the Republican Party of Havana and the Democratic Union Party.
The first elections for mayors, treasurers and attorneys of the country's 110 municipalities took place on 16 June 1900, but balloting was limited to literate Cubans older than 21 and with properties worth more than $250. Only members of the dissolved Liberation Army were exempt from these conditions. Thus, the number of about 418,000 male citizens over 21 was reduced to about 151,000. The same elections were held one year later, again for a one-year term.
Elections for 31 delegates to a Constituent Assembly were held on 15 September 1900 with the same balloting restrictions. In all three elections, pro-independence candidates won overwhelming majorities. The Constitution was drawn up from November 1900 to February 1901 and then passed by the Assembly. It established a republican form of government, proclaimed internationally recognized individual rights and liberties, freedom of religion, separation between church and state, and described the composition, structure and functions of state powers.
On 2 March 1901, the U.S. Congress passed the Army Appropriations Act, stipulating the conditions for the withdrawal of United States troops remaining in Cuba. As a rider, this act included the Platt Amendment, which defined the terms of Cuban-U.S. relations until 1934. The amendment provided for a number of rules heavily infringing on Cuba's sovereignty:
That the government of Cuba shall never enter into any treaty with any foreign power which will impair the independence of Cuba, nor in any manner permit any foreign power to obtain control over any portion of the island.
That Cuba would contract no foreign debt without guarantees that the interest could be served from ordinary revenues.
That Cuba consent that the United States may intervene for the preservation of Cuban independence, to protect life, property, and individual liberty, and for discharging the obligations imposed by the Treaty of Paris.
That the Cuban claim to the Isle of Pines (now called Isla de la Juventud) was not acknowledged and to be determined by treaty.
That Cuba commit to providing the United States "lands necessary for coaling or naval stations at certain specified points to be agreed upon".
As a precondition to Cuba's independence, the United States demanded that this amendment be approved fully and without changes by the Constituent Assembly as an appendix to the new constitution. The appendix was approved, after heated debate, by a margin of four votes. Governor Wood admitted: "Little or no independence had been left to Cuba with the Platt Amendment and the only thing appropriate was to seek annexation".
In the presidential elections of 31 December 1901, Tomás Estrada Palma, a naturalized U.S. citizen still living in the United States, was the only candidate. His adversary, General Bartolomé Masó, withdrew his candidacy in protest against U.S. favoritism and the manipulation of the political machine by Palma's followers. Palma was elected to be the Republic's first President.
Early 20th century (1902–1959)
The U.S. occupation officially ended when Palma took office on 20 May 1902. Havana and Varadero soon became popular tourist resorts. Though some efforts were made to ease Cuba's ethnic tensions through government policies, racism and informal discrimination towards blacks and mestizos remained widespread.
Guantanamo Bay was leased to the United States as part of the Platt Amendment. The status of the Isle of Pines as Cuban territory was left undefined until 1925, when the United States finally recognized Cuban sovereignty over the island. Palma governed successfully for his four-year term; yet when he tried to extend his time in office, a revolt ensued.
The Second Occupation of Cuba, also known as the Cuban Pacification, was a major US military operation that began in September 1906. After the collapse of Palma's regime, US President Theodore Roosevelt ordered an invasion and established an occupation that would continue for nearly two-and-a-half years. The stated goal of the operation was to prevent fighting between the Cubans, to protect North American economic interests, and to hold free elections. In 1906, the United States representative William Howard Taft negotiated an end to the successful revolt led by the young general Enrique Loynaz del Castillo. Palma resigned and the United States Governor Charles Magoon assumed temporary control until 1909. Following the election of José Miguel Gómez in November 1908, Cuba was deemed stable enough to allow a withdrawal of American troops, which was completed in February 1909.
For three decades, the country was led by former War of Independence leaders, who after being elected did not serve more than two constitutional terms. The Cuban presidential succession was as follows: José Miguel Gómez (1908–1912); Mario García Menocal (1913–1920); Alfredo Zayas (1921–25) and Gerardo Machado (1925–1933).
Under the Liberal Gómez the participation of Afro-Cubans in the political process was curtailed when the Partido Independiente de Color was outlawed and bloodily suppressed in 1912, as American troops reentered the country to protect the sugar plantations. Under Gómez's successor, Mario Menocal of the Conservative Party, income from sugar rose steeply. Menocal's reelection in 1916 was met with armed revolt by Gómez and other Liberals (the so-called "Chambelona War"), prompting the United States to send in Marines. Gómez was defeated and captured and the rebellion was snuffed out.
In World War I, Cuba declared war on Imperial Germany on 7 April 1917, one day after the United States entered the war. Despite being unable to send troops to fight in Europe, Cuba played a significant role as a base to protect the West Indies from German U-boat attacks. A draft law was instituted, and 25,000 Cuban troops raised, but the war ended before they could be sent into action.
Alfredo Zayas was elected president in 1920 and took office in 1921. When the Cuban financial system collapsed after a drop in sugar prices, Zayas secured a loan from the United States in 1922. One historian has concluded that the continued U.S. military intervention and economic dominance had once again made Cuba "a colony in all but name."
Post-World War I
President Gerardo Machado was elected by popular vote in 1925, but he was constitutionally barred from reelection. Machado, determined to modernize Cuba, set in motion several massive civil works projects such as the Central Highway, but at the end of his constitutional term he held on to power. The United States decided not to interfere militarily. In the late 1920s and early 1930s a number of Cuban action groups staged a series of uprisings that either failed or did not affect the capital.
The Sergeants' Revolt undermined the institutions and coercive structures of the oligarchic state. The young and relatively inexperienced revolutionaries found themselves pushed into the halls of state power by worker and peasant mobilisations. Between September 1933 and January 1934 a loose coalition of radical activists, students, middle-class intellectuals, and disgruntled lower-rank soldiers formed a Provisional Revolutionary Government. This coalition was directed by a popular university professor, Dr Ramón Grau San Martín. The Grau government promised a 'new Cuba' which would belong to all classes, and the abrogation of the Platt Amendment. They believed their legitimacy stemmed from the popular support which brought them to power, and not from the approval of the United States Department of State.
To this end, throughout the autumn of 1933, the government decreed a dramatic series of reforms. The Platt Amendment was unilaterally abrogated, and all the political parties of the Machadato were dissolved. The Provisional Government granted autonomy to the University of Havana, women obtained the right to vote, the eight-hour day was decreed, a minimum wage was established for cane-cutters, and compulsory arbitration was promoted. The government created a Ministry of Labour, and a law was passed establishing that 50 per cent of all workers in agriculture, commerce and industry had to be Cuban citizens. The Grau regime set agrarian reform as a priority, promising peasants legal title to their lands. The Provisional Government survived until January 1934, when it was overthrown by an anti-government coalition of right-wing civilian and military elements. Led by a young mestizo sergeant, Fulgencio Batista, this movement was supported by the United States.
1940 Constitution and the Batista era
Rise of Batista
In 1940, Cuba conducted free and fair national elections. Fulgencio Batista, who won the presidency, had originally been endorsed by Communist leaders in exchange for the legalization of the Popular Socialist Party and Communist domination of the labor movement. The reorganization of the labor movement during this time was capped with the establishment of the Confederación de Trabajadores de Cuba (Confederation of Cuban Workers, or CTC) in 1938. However, in 1947, the Communists lost control of the CTC, and their influence in the trade union movement gradually declined into the 1950s. The assumption of the presidency by Batista in 1952 and the intervening years to 1958 placed tremendous strain on the labor movement, with some independent union leaders resigning from the CTC in opposition to Batista's rule. The relatively progressive 1940 Constitution was adopted under the Batista administration. The constitution barred Batista from running for a consecutive term in the 1944 election.
Rather than endorsing Batista's hand-picked successor Carlos Saladrigas Zayas, the Cuban people elected Ramón Grau San Martín in 1944. Grau made a deal with labor unions to continue Batista's pro-labor policies. Grau's administration coincided with the end of World War II, and he presided over an economic boom as sugar production expanded and prices rose. He instituted programs of public works and school construction, increasing social security benefits and encouraging economic development and agricultural production. However, increased prosperity brought increased corruption and urban violence. The country was also steadily gaining a reputation as a base for organized crime, with the Havana Conference of 1946 seeing leading Mafia mobsters descend upon the city.
Grau's presidency was followed by that of Carlos Prío Socarrás, whose government was tainted by increasing corruption and violent incidents among political factions. Eduardo Chibás, the leader of the Partido Ortodoxo (Orthodox Party), a nationalist group, was widely expected to win in 1952 on an anticorruption platform. However, Chibás committed suicide before he could run, and the opposition was left without a unifying leader. Batista seized power in an almost bloodless coup. President Prío was forced to leave Cuba. Due to the corruption of the previous two administrations, the general public reaction to the coup was somewhat accepting at first. However, Batista soon encountered stiff opposition when he temporarily suspended balloting and the 1940 constitution, and attempted to rule by decree. Nonetheless, elections were held in 1954 and Batista was re-elected under disputed circumstances.
Economic expansion and stagnation
Although corruption was rife under Batista, Cuba did flourish economically. Wages rose significantly; according to the International Labour Organization, the average industrial salary in Cuba was the world's eighth-highest in 1958, and the average agricultural wage was higher than in developed nations such as Denmark and France. Although a third of the population still lived in poverty (according to Batista's government), Cuba was one of the five most developed countries in Latin America by the end of the Batista era, with 56% of the population living in cities.
In the 1950s, Cuba's gross domestic product (GDP) per capita was roughly equal to that of contemporary Italy, although still only a sixth as large as that of the United States. Labour rights were also favourable: Cuban workers were entitled to a month's paid holiday, nine days' sick leave with pay, and six weeks' leave before and after childbirth. Cuba had Latin America's highest per capita consumption rates of meat, vegetables, cereals, automobiles, telephones and radios during this period. Havana was the world's fourth-most-expensive city at the time. Moreover, Cuba's health service was remarkably developed. By the late 1950s, it had one of the highest numbers of doctors per capita (more than in the United Kingdom at that time) and the third-lowest adult mortality rate. According to the World Health Organization, the island had the lowest infant mortality rate in Latin America, and the 13th-lowest in the world. Cuba's education spending in the 1950s was the highest in Latin America, relative to GDP. Cuba had the fourth-highest literacy rate in the region, at almost 80% according to the United Nations, higher than that of Spain at the time.
However, the United States, rather than Latin America, was the frame of reference for educated Cubans. Middle-class Cubans grew frustrated at the economic gap between Cuba and the US, and increasingly dissatisfied with the administration. Large income disparities arose due to the extensive privileges enjoyed by Cuba's unionized workers. Cuban labour unions had established limitations on mechanization and even banned dismissals in some factories. The labour unions' privileges were obtained in large measure "at the cost of the unemployed and the peasants".
Cuba's labour regulations ultimately caused economic stagnation. Hugh Thomas asserts that "militant unions succeeded in maintaining the position of unionized workers and, consequently, made it difficult for capital to improve efficiency." Between 1933 and 1958, Cuba increased economic regulation enormously. The regulation led to declining investment. The World Bank also complained that the Batista administration raised the tax burden without assessing its impact. Unemployment was high; many university graduates could not find jobs. After its earlier meteoric rise, the Cuban gross domestic product grew at only 1% annually on average between 1950 and 1958.
Political repression and human rights abuses
In 1952, while receiving military, financial, and logistical support from the United States, Batista suspended the 1940 Constitution and revoked most political liberties, including the right to strike. He then aligned with the wealthiest landowners and presided over a stagnating economy that widened the gap between rich and poor Cubans. Eventually it reached the point where most of the sugar industry was in U.S. hands, and foreigners owned 70% of the arable land. Batista's repressive government then began to systematically profit from the exploitation of Cuba's commercial interests, by negotiating lucrative relationships with both the American Mafia, who controlled the drug, gambling, and prostitution businesses in Havana, and with large U.S.-based multinational companies who were awarded lucrative contracts. To quell the growing discontent amongst the populace—displayed through frequent student riots and demonstrations—Batista established tighter censorship of the media, while also utilizing his Bureau for the Repression of Communist Activities secret police to carry out wide-scale violence, torture and public executions. Estimates range from hundreds to about 20,000 people killed.
Cuban Revolution (1952–1959)
In 1952, Fidel Castro, a young lawyer running for a seat in the Chamber of Representatives for the Partido Ortodoxo, circulated a petition to depose Batista's government on the grounds that it had illegitimately suspended the electoral process. The courts ignored the petition. Castro thus resolved to use armed force to overthrow Batista; he and his brother Raúl gathered supporters, and on 26 July 1953 led an attack on the Moncada Barracks near Santiago de Cuba. The attack ended in failure: the authorities killed several of the insurgents, captured Castro himself and sentenced him to 15 years in prison. However, the Batista government released him in 1955, when amnesty was given to many political prisoners. Castro and his brother subsequently went into exile in Mexico, where they met the Argentine revolutionary Ernesto "Che" Guevara. While in Mexico, Guevara and the Castros organized the 26 July Movement with the goal of overthrowing Batista. In December 1956, Fidel Castro led a group of 82 fighters to Cuba aboard the yacht Granma. Despite a pre-landing rising in Santiago by Frank País Pesqueira and his followers among the urban pro-Castro movement, Batista's forces promptly killed, dispersed or captured most of Castro's men.
Castro escaped into the Sierra Maestra mountains with as few as 12 fighters, aided by the urban and rural opposition. Castro and Guevara then began a guerrilla campaign against the Batista régime, with their main forces supported by numerous poorly armed escopeteros and the well-armed fighters of Frank País' urban organization. Growing anti-Batista resistance, including a bloodily crushed rising by Cuban Navy personnel in Cienfuegos, soon led to chaos. At the same time, rival guerrilla groups in the Escambray Mountains also grew more effective. Castro attempted to arrange a general strike in 1958, but could not win support among Communists or labor unions. Multiple attempts by Batista's forces to crush the rebels ended in failure. Castro's forces acquired captured weaponry, the biggest being a government M4 Sherman tank, which would be used in the Battle of Santa Clara.
The United States imposed trade restrictions on the Batista administration and sent an envoy who attempted to persuade Batista to leave the country voluntarily. With the military situation becoming untenable, Batista fled on 1 January 1959, and Castro took over. Within months Castro moved to consolidate his power by marginalizing other resistance groups and imprisoning and executing opponents and dissidents. As the revolution became more radical and continued its marginalization of the wealthy and political opponents, thousands of Cubans fled the island, eventually forming a large exile community in the United States.
Castro's Cuba (1959–2006)
Politics
On 1 January 1959, Che Guevara marched his troops from Santa Clara to Havana, without encountering resistance. Meanwhile, Fidel Castro marched his soldiers to the Moncada Army Barracks, where all 5,000 soldiers in the barracks defected to the Revolutionary movement. On 4 February 1959, Fidel Castro announced a massive reform plan which included a public works project, land reform granting nearly 200,000 families farmland, and nationalization of various industries.
The new government of Cuba soon encountered opposition from militant groups and from the United States. Fidel Castro quickly purged political opponents from the administration. Loyalty to Castro and the revolution became the primary criterion for all appointments. Mass organisations such as labor unions that opposed the revolutionary government were made illegal. By the end of 1960, all opposition newspapers had been closed down and all radio and television stations had come under state control. Teachers and professors found to have involvement with counter-revolution were purged. Fidel's brother Raúl Castro became the commander of the Revolutionary Armed Forces. In September 1960, a system of neighborhood watch networks, known as Committees for the Defense of the Revolution (CDR), was created.
In July 1961, the Integrated Revolutionary Organizations (IRO) was formed, merging Fidel Castro's 26th of July Movement with Blas Roca's Popular Socialist Party and Faure Chomón's Revolutionary Directory 13 March. On 26 March 1962, the IRO became the United Party of the Cuban Socialist Revolution (PURSC), which, in turn, became the Communist Party on 3 October 1965, with Castro as First Secretary. In 1976 a national referendum ratified a new constitution, with 97.7% in favour. The constitution secured the Communist Party's central role in governing Cuba, but kept party affiliation out of the election process. Other smaller parties exist but have little influence and are not permitted to campaign against the Communist Party.
Break with the United States
The United States recognized the Castro government on 7 January 1959. President Dwight D. Eisenhower sent a new ambassador, Philip Bonsal, to replace Earl E. T. Smith, who had been close to Batista. The Eisenhower administration, in agreement with the American media and Congress, did this with the assumption that "Cuba [would] remain in the U.S. sphere of influence". However, Castro belonged to a faction which opposed U.S. influence. On 5 June 1958, at the height of the revolution, he had written: "The Americans are going to pay dearly for what they are doing. When the war is over, I'll start a much longer and bigger war of my own: the war I'm going to fight against them." "Castro dreamed of a sweeping revolution that would uproot his country's oppressive socioeconomic structure and of a Cuba that would be free of the United States".
Only six months after Castro seized power, the Eisenhower administration began to plot his ouster. The United Kingdom was persuaded to cancel a sale of Hawker Hunter fighter aircraft to Cuba. The US National Security Council (NSC) met in March 1959 to consider means to institute a régime-change and the Central Intelligence Agency (CIA) began arming guerrillas inside Cuba in May. In January 1960 Roy R. Rubottom, Jr., Assistant Secretary of State for Inter-American Affairs, summarized the evolution of Cuba–United States relations since January 1959: "The period from January to March might be characterized as the honeymoon period of the Castro government. In April a downward trend in US–Cuban relations had been evident… In June we had reached the decision that it was not possible to achieve our objectives with Castro in power and had agreed to undertake the program referred to by Undersecretary of State Livingston T. Merchant. On 31 October in agreement with the Central Intelligence Agency, the Department had recommended to the President approval of a program along the lines referred to by Mr. Merchant. The approved program authorized us to support elements in Cuba opposed to the Castro government while making Castro's downfall seem to be the result of his own mistakes." (Braddock to SecState, Havana, 1 February 1960, FRUS 1958–60, 6:778.)
In March 1960 the French ship La Coubre blew up in Havana Harbor as it unloaded munitions, killing dozens. The CIA blamed the explosion on the Cuban government.
Relations between the United States and Cuba deteriorated rapidly as the Cuban government, in reaction to the refusal of Royal Dutch Shell, Standard Oil and Texaco to refine petroleum from the Soviet Union in Cuban refineries under their control, took control of those refineries in July 1960. The Eisenhower administration promoted a boycott of Cuba by oil companies; Cuba responded by nationalizing the refineries in August 1960. Cuba expropriated more US-owned properties, notably those belonging to the International Telephone and Telegraph Company (ITT) and to the United Fruit Company. In the Castro government's first agrarian reform law, on 17 May 1959, the state sought to limit the size of land holdings, and to distribute that land to small farmers in "Vital Minimum" tracts. This law served as a pretext for seizing lands held by foreigners and redistributing them to Cuban citizens.
The United States severed diplomatic relations with Cuba on 3 January 1961, and further restricted trade in February 1962. The Organization of American States, under pressure from the United States, suspended Cuba's membership on 22 January 1962, and the U.S. government banned all U.S.–Cuban trade on 7 February. The Kennedy administration extended this ban on 8 February 1963, forbidding U.S. citizens to travel to Cuba or to conduct financial or commercial transactions with the country. The United States later pressured other nations and American companies with foreign subsidiaries to restrict trade with Cuba. The Helms–Burton Act of 1996 makes it very difficult for foreign companies doing business with Cuba to also do business in the United States.
Bay of Pigs invasion
In April 1961, less than four months into the Kennedy administration, the Central Intelligence Agency (CIA) executed a plan that had been developed under the Eisenhower administration. This military campaign to topple Cuba's revolutionary government is now known as the Bay of Pigs Invasion (or La Batalla de Girón in Cuba). The aim of the invasion was to empower existing opposition militant groups to "overthrow the Communist regime" and establish "a new government with which the United States can live in peace." The invasion was carried out by a CIA-sponsored paramilitary group of over 1,400 Cuban exiles called Brigade 2506. Arriving in Cuba by boat from Guatemala on 15 April, the brigade initially overwhelmed Cuba's counter-offensive. But by 20 April, the brigade surrendered and was publicly interrogated before being sent back to the US. The invasion helped further build popular support for the new Cuban government. The Kennedy administration thereafter began Operation Mongoose, a covert CIA campaign of sabotage against Cuba, including the arming of militant groups, sabotage of Cuban infrastructure, and plots to assassinate Castro. All this reinforced Castro's distrust of the US.
Cuban Missile Crisis
Tensions between the two governments peaked again during the October 1962 Cuban Missile Crisis. The United States had a much larger arsenal of long-range nuclear weapons than the Soviet Union, as well as medium-range ballistic missiles (MRBMs) stationed in Turkey near Soviet territory, whereas the Soviet Union's large stockpile of medium-range nuclear weapons could reach Europe but not the continental United States. Cuba agreed to let the Soviets secretly place SS-4 Sandal and SS-5 Skean MRBMs on its territory. After Lockheed U-2 reconnaissance photos confirmed the missiles' presence in Cuba, the United States established a cordon in international waters to stop Soviet ships from bringing in more (designated a quarantine rather than a blockade to avoid issues with international law). Meanwhile, Castro's increasingly hard line was unsettling Moscow, and at the last moment the Soviets turned back their ships. They also agreed to remove the missiles already in Cuba in exchange for an agreement that the United States would not invade Cuba.
Military build-up
In the 1961 New Year's Day parade, the Communist administration exhibited Soviet tanks and other weapons. Cuban officers received extended military training in the Soviet Union, becoming proficient in the use of advanced Soviet weapons systems. For most of the approximately 30 years of the Cuban-Soviet military collaboration, Moscow provided the Cuban Revolutionary Armed Forces—virtually free of charge—with nearly all of its equipment, training, and supplies, worth approximately $1 billion annually. By 1982, Cuba possessed the best equipped and largest per capita armed forces in Latin America.
Suppression of dissent
Military Units to Aid Production, or UMAPs (Unidades Militares para la Ayuda de Producción), in effect forced-labor concentration camps, were established in 1965 as a way to eliminate alleged "bourgeois" and "counter-revolutionary" values in the Cuban population.
By the 1970s, the standard of living in Cuba was "extremely spartan" and discontent was rife. Castro changed economic policies in the first half of the 1970s. In the 1970s unemployment reappeared as a problem. The solution was to criminalize unemployment with the 1971 Anti-Loafing Law; the unemployed would be jailed.
In any given year, there were about 20,000 dissidents held and tortured under inhumane prison conditions. Homosexuals were imprisoned in internment camps in the 1960s, where they were subject to medical-political "reeducation". The anti-Castro Archivo Cuba estimates that 4,000 people were executed.
Emigration
The establishment of a socialist system in Cuba led hundreds of thousands of upper- and middle-class Cubans to flee to the United States and other countries. By 1961, thousands of Cubans had fled for the United States. On 22 March of that year, an exile council was formed. The council planned to defeat the Communist regime and form a provisional government with José Miró Cardona, a noted leader in the civil opposition against Batista, to serve as temporary president.
Between 1959 and 1993, some 1.2 million Cubans left the island for the United States, often by sea in small boats and rafts. Between 30,000 and 80,000 Cubans are estimated to have died trying to flee Cuba during this period. In the early years, those who could claim dual Spanish-Cuban citizenship left for Spain. A number of Cuban Jews were allowed to emigrate to Israel after quiet negotiations; the majority of the 10,000 or so Jews in Cuba in 1959 eventually left the country.
On 6 November 1965, Cuba and the United States agreed to an airlift for Cubans who wanted to emigrate to the United States. The first of these so-called Freedom Flights left Cuba on 1 December 1965, and by 1971 over 250,000 Cubans had flown to the United States. In 1980 another 125,000 came to the United States in the Mariel boatlift. It was discovered that the Cuban government was using the event to rid Cuba of unwanted segments of its society. In 2012, Cuba abolished its requirement for exit permits, allowing Cuban citizens to travel to other countries more easily.
Involvement in Third World conflicts
From its inception, the Cuban Revolution defined itself as internationalist, seeking to spread its revolutionary ideals abroad and gain foreign allies. Although still a developing country itself, Cuba supported African, Latin American and Asian countries in the fields of military development, health and education. These "overseas adventures" not only irritated the United States but were also quite often a source of dispute with Cuba's ostensible allies in the Kremlin.
The Sandinista insurgency in Nicaragua, which led to the demise of the Somoza dictatorship in 1979, was openly supported by Cuba. However, it was on the African continent where Cuba was most active, supporting a total of 17 liberation movements or leftist governments, in countries including Angola, Equatorial Guinea, Ethiopia, Guinea-Bissau, and Mozambique. Cuba offered to send troops to Vietnam, but the initiative was turned down by the Vietnamese.
Cuba had some 39,000–40,000 military personnel abroad by the late 1970s, with the bulk of the forces in Sub-Saharan Africa but with some 1,365 stationed among Algeria, Iraq, Libya, and South Yemen. Moscow used Cuban surrogate troops in Africa and the Middle East because they had a high level of training for combat in Third World environments, familiarity with Soviet weapons, physical toughness and a tradition of successful guerrilla warfare dating back to the uprisings against Spain in the 19th century. An estimated 7,000–11,000 Cubans died in conflicts in Africa.
As early as 1961, Cuba supported the National Liberation Front in Algeria against France. In 1964, Cuba supported the Simba Rebellion of adherents of Patrice Lumumba in Congo-Leopoldville (present-day Democratic Republic of the Congo). Some 40–50 Cubans fought against Portugal in Guinea-Bissau each year from 1966 until independence in 1974. In late 1973, there were 4,000 Cuban tank troops in Syria as part of an armored brigade which took part in the Yom Kippur War until May 1974.
Its involvement in the Angolan Civil War was particularly intense and noteworthy, with heavy assistance given to the Marxist–Leninist MPLA. At the height of its operation, Cuba had as many as 50,000 soldiers stationed in Angola. Cuban soldiers were instrumental in the defeat of South African and Zairian troops and in the establishment of Namibian independence. Cuban soldiers also defeated the FNLA and UNITA armies and established MPLA control over most of Angola. South African Defence Force soldiers were again drawn into the Angolan Civil War in 1987–88, and several inconclusive battles were fought between Cuban and South African forces. Cuban-piloted MiG-23s performed airstrikes against South African forces in South West Africa during the Battle of Cuito Cuanavale.
Cuba's presence in Mozambique was more subdued, involving by the mid-1980s 700 Cuban military and 70 civilian personnel. In 1978, in Ethiopia, 16,000 Cuban combatants, along with the Soviet-supported Ethiopian Army, defeated an invasion force from Somalia. The execution of civilians and refugees and the rape of women by Ethiopian and Cuban troops were prevalent throughout the war. Assisted by Soviet advisors, the Cubans launched a second offensive in December 1979 directed at the population's means of survival, including the poisoning and destruction of wells and the killing of cattle herds.
Cuba was unable to pay on its own for the costs of its overseas military activities. After it lost its subsidies from the USSR, Cuba withdrew its troops from Ethiopia (1989), Nicaragua (1990), Angola (1991), and elsewhere.
Intelligence cooperation between Cuba and the Soviets
As early as September 1959, Vadim Kotchergin, a KGB agent, was seen in Cuba. Jorge Luis Vasquez, a Cuban who was imprisoned in East Germany, states that the East German Stasi trained the personnel of the Cuban Interior Ministry (MINIT). The relationship between the KGB and the Cuban Intelligence Directorate (DI) was complex and marked by both times of close cooperation and times of extreme competition. The Soviet Union saw the new revolutionary government in Cuba as an excellent proxy agent in areas of the world where Soviet involvement was not popular on a local level. Nikolai Leonov, the KGB chief in Mexico City, was one of the first Soviet officials to recognize Fidel Castro's potential as a revolutionary, and urged the Soviet Union to strengthen ties with the new Cuban leader. The USSR saw Cuba as having far more appeal with new revolutionary movements, western intellectuals, and members of the New Left, given Cuba's perceived David and Goliath struggle against U.S. "imperialism". In 1963, shortly after the Cuban Missile Crisis, 1,500 DI agents, including Che Guevara, were invited to the USSR for intensive training in intelligence operations.
Contemporary period (from 1991)
Starting from the mid-1980s, Cuba experienced a crisis referred to as the "Special Period". When the Soviet Union was dissolved in late 1991, a major supporter of Cuba's economy was lost, leaving it essentially paralyzed because of the economy's narrow basis, focused on just a few products with just a few buyers. National oil supplies, which were mostly imported, were severely reduced. Over 80% of Cuba's trade was lost and living conditions declined. A "Special Period in Peacetime" was declared, which included cutbacks on transport and electricity and even food rationing. In response, the United States tightened its trade embargo, hoping it would lead to Castro's downfall. But the government tapped into a pre-revolutionary source of income and opened the country to tourism, entering into several joint ventures with foreign companies for hotel, agricultural and industrial projects. As a result, the use of U.S. dollars was legalized in 1994, with special stores being opened which only sold in dollars. There were two separate economies, the dollar economy and the peso economy, creating a social split on the island because those in the dollar economy made much more money. However, in October 2004, the Cuban government announced an end to this policy: from November U.S. dollars would no longer be legal tender, but would instead be exchanged for convertible pesos with a 10% tax payable to the state on the exchange of U.S. dollars.
A Canadian Medical Association Journal paper states that "The famine in Cuba during the Special Period was caused by political and economic factors similar to the ones that caused a famine in North Korea in the mid-1990s. Both countries were run by authoritarian regimes that denied ordinary people the food to which they were entitled when the public food distribution collapsed; priority was given to the elite classes and the military." The government did not accept American donations of food, medicines and money until 1993, forcing many Cubans to eat anything they could find. Even domestic cats were reportedly eaten.
Extreme food shortages and electrical blackouts led to a brief period of unrest, including numerous anti-government protests and widespread increases in urban crime. In response, the Cuban Communist Party formed hundreds of "rapid-action brigades" to confront protesters. The Communist Party's publication Granma stated that "delinquents and anti-social elements who try to create disorder ... will receive a crushing reply from the people". In July 1994, 41 Cubans drowned attempting to flee the country aboard a tugboat; the Cuban government was later accused of sinking the vessel deliberately.
Thousands of Cubans protested in Havana during the Maleconazo uprising on 5 August 1994. However, the regime's security forces swiftly dispersed them.
Continued isolation and regional engagement
Although contacts between Cubans and foreign visitors were made legal in 1997, extensive censorship had isolated the country from the rest of the world. In 1997, a group led by Vladimiro Roca, son of the founder of the Cuban Communist Party, sent a petition, entitled La Patria es de Todos ("the homeland belongs to all") to the Cuban general assembly, requesting democratic and human rights reforms. Roca and his associates were imprisoned but were eventually released. In 2001, a group of Cuban activists collected thousands of signatures for the Varela Project, a petition requesting a referendum on the island's political process, which was openly supported by former U.S. President Jimmy Carter. The petition gathered sufficient signatures to be considered by the Cuban government, but was rejected on an alleged technicality. Instead, a plebiscite was held in which it was formally proclaimed that Castro's brand of socialism would be perpetual.
In 2003, Castro cracked down on independent journalists and other dissidents in an episode which became known as the "Black Spring". The government imprisoned 75 dissident thinkers, including journalists, librarians, human rights activists, and democracy activists, on the basis that they were acting as agents of the United States by accepting aid from the U.S. government.
Though it was largely diplomatically isolated from the West at this time, Cuba nonetheless cultivated regional allies. After the rise to power of Hugo Chávez in Venezuela in 1999, Cuba and Venezuela formed an increasingly close relationship. Additionally, Cuba continued its post-revolution practice of dispatching doctors to assist poorer countries in Africa and Latin America, with over 30,000 health workers deployed overseas by 2007.
End of Fidel Castro's presidency
In 2006, Fidel Castro fell ill and withdrew from public life. The following year, Raúl Castro became Acting President. In a letter dated 18 February 2008, Fidel Castro announced his formal resignation, saying "I will not aspire nor accept...the post of President of the Council of State and Commander in Chief." In 2008, Cuba was struck by three separate hurricanes, in the most destructive hurricane season in the country's history; over 200,000 were left homeless, and over US$5 billion of property damage was caused.
Improving foreign relations
In July 2012, Cuba received its first American goods shipment in over 50 years, following the partial relaxation of the U.S. embargo to permit humanitarian shipments. In October 2012, Cuba announced the abolition of its much-disliked exit permit system, allowing its citizens more freedom to travel abroad. In February 2013, after his reelection as president, Raúl Castro stated that he would retire from government in 2018 as part of a broader leadership transition. In July 2013, Cuba became embroiled in a diplomatic scandal after Chong Chon Gang, a North Korean ship illegally carrying Cuban weapons, was impounded by Panama.
The severe economic strife suffered by Venezuela in the mid-2010s lessened its ability to support Cuba, and may ultimately have contributed to the thawing of Cuban-American relations. In December 2014, after a highly publicized exchange of political prisoners between the United States and Cuba, U.S. President Barack Obama announced plans to re-establish diplomatic relations, establish an embassy in Havana and improve economic ties. Obama's proposal received both strong criticism and praise from different elements of the Cuban American community. In April 2015, the U.S. government announced that Cuba would be removed from its list of state sponsors of terrorism. The U.S. embassy in Havana was formally reopened in August 2015. In 2017, staffing levels at the embassy were reduced following unexplained health incidents.
Economic reforms
As of 2015, Cuba remains one of the few officially socialist states in the world. Though it remains diplomatically isolated and afflicted by economic inefficiency, major currency reforms were begun in the 2010s, and efforts to free up domestic private enterprise are now underway. Living standards in the country have improved significantly since the turmoil of the Special Period, with GDP per capita in terms of purchasing power parity rising from less than US$2,000 in 1999 to nearly $10,000 in 2010. Tourism has furthermore become a significant source of prosperity for Cuba.
Despite the reforms, Cuba remains afflicted by chronic shortages of food and medicines. The electrical and water services are still unreliable. In July 2021, protests erupted over these problems and the government's response to the COVID-19 pandemic, but primarily because of the historical government oppression, profound lack of opportunities, and repression of personal liberties.
Post-Castro era
Fidel Castro was succeeded by his brother Raúl Castro, first as the country's president in 2008 and then as leader of the ruling Communist Party in 2011. In 2018, Miguel Díaz-Canel took over from Raúl Castro as president, and in April 2021 Díaz-Canel also succeeded him as leader of the party. He is the first person to hold both the Cuban presidency and the leadership of the Communist Party (PCC) without being a member of the Castro family.
See also
History of the Caribbean
History of Cuban nationality
History of Latin America
List of colonial governors of Cuba
List of Cuba hurricanes
List of presidents of Cuba
Politics of Cuba
Spanish Empire
Spanish colonization of the Americas
Timeline of Cuban history
Bibliography and further reading
Castillo Ramos, Ruben (1956). "Muerto Edesio, El rey de la Sierra Maestra". Bohemia XLVIII No. 9 (12 August 1956). pp. 52–54, 87.
De Paz Sánchez, Manuel Antonio; Fernández, José; López, Nelson (1993–1994). El bandolerismo en Cuba (1800–1933). Presencia canaria y protesta rural. Santa Cruz de Tenerife. Two volumes.
Foner, Philip S. (1962). A History of Cuba and its Relations with the United States.
Franklin, James (1997). Cuba and the United States: A Chronological History. Ocean Press.
Gleijeses, Piero (2002). Conflicting Missions: Havana, Washington, and Africa, 1959–1976. University of North Carolina Press. 552 pp.
Gott, Richard. (2004). Cuba: A New History.
Hernández, Rafael and Coatsworth, John H., eds. (2001). Culturas Encontradas: Cuba y los Estados Unidos. Harvard University Press. 278 pp.
Hernández, José M. (1993). Cuba and the United States: Intervention and Militarism, 1868–1933. University of Texas Press. 288 pp.
Johnson, Willis Fletcher (1920). The History of Cuba. New York: B.F. Buck & Company, Inc.
Kapcia, Antoni. (2021) A Short History of Revolutionary Cuba: Revolution, Power, Authority and the State from 1959 to the Present Day
Kirk, John M. and McKenna, Peter (1997). Canada-Cuba Relations: The Other Good Neighbor Policy. University Press of Florida. 207 pp.
McPherson, Alan (2003). Yankee No! Anti-Americanism in U.S.-Latin American Relations. Harvard University Press. 257 pp.
Morley, Morris H. and McGillion, Chris. Unfinished Business: America and Cuba after the Cold War, 1989–2001. Cambridge University Press. 253 pp.
Offner, John L. (1992). An Unwanted War: The Diplomacy of the United States and Spain over Cuba, 1895–1898. University of North Carolina Press. 306 pp.
Paterson, Thomas G. (1994). Contesting Castro: The United States and the Triumph of the Cuban Revolution. Oxford University Press. 352 pp.
Pérez, Louis A., Jr. (1998). The War of 1898: The United States and Cuba in History and Historiography. University of North Carolina Press. 192 pp.
Pérez, Louis A. (1990). Cuba and the United States: Ties of Singular Intimacy. University of Georgia Press. 314 pp.
Perez, Louis A. (1989). Lords of the Mountain: Social Banditry and Peasant Protest in Cuba, 1878–1918. Pitt Latin American Series: University of Pittsburgh Press.
Schwab, Peter (1999). Cuba: Confronting the U.S. Embargo. New York: St. Martin's. 226 pp.
Staten, Clifford L. (2005). The History of Cuba. Palgrave Essential Histories.
Thomas, Hugh (1998). Cuba or the Pursuit of Freedom.
Tone, John Lawrence (2006). War and Genocide in Cuba, 1895–1898.
Walker, Daniel E. (2004). No More, No More: Slavery and Cultural Resistance in Havana and New Orleans. University of Minnesota Press. 188 pp.
Whitney, Robert W. (2001). State and Revolution in Cuba: Mass Mobilization and Political Change, 1920–1940. Chapel Hill and London: University of North Carolina Press.
Zeuske, Michael (2004). Insel der Extreme: Kuba im 20. Jahrhundert. Zürich: Rotpunktverlag.
Zeuske, Michael (2004). Schwarze Karibik: Sklaven, Sklavereikulturen und Emanzipation. Zürich: Rotpunktverlag.
Bleitrach, Danielle; Dedaj, Viktor; Bonaldi, Jacques-François (2004). Cuba est une île, Cuba es una isla. Le Temps des cerises.
External links
Post-USSR: Modern Cuban Struggles, 1991 video from the Dean Peter Krogh Foreign Affairs Digital Archives
Reflecting on Cuba's Bloody History. Peter Coyote. San Francisco Chronicle. 4 March 2009.
Deena Stryker Photographs of Cuba, 1963–1964 and undated – Duke University Libraries Digital Collections
Cuban Historical and Literary Manuscript Collection – University of Miami libraries Digital Collections
American Settlers in Cuba – Historic photographs and information on American settlers in Cuba before the Revolution
Digital Photographic Archive of Historic Havana – a digital archive of 1055 significant buildings in the Historic Center of Havana
Economy of Cuba
The economy of Cuba is a mixed planned economy dominated by state-run enterprises. Most of the labor force is employed by the state. In the 1990s, the ruling Communist Party of Cuba encouraged the formation of worker co-operatives and self-employment. In the late 2010s, private property and free-market rights along with foreign direct investment were granted by the 2018 Cuban constitution. Foreign direct investment in various Cuban economic sectors increased before 2018. As of 2021, Cuba's private sector is allowed to operate in most sectors of the economy. Public-sector employment was 65% and private-sector employment 35%, compared to the 2000 ratio of 76% to 23% and the 1981 ratio of 91% to 8%. Investment is restricted and requires approval by the government. In 2021, Cuba ranked 83rd out of 191 on the Human Development Index in the high human development category. The country's public debt comprised 35.3% of GDP, inflation (CPI) was 5.5%, and GDP growth was 3%. Housing and transportation costs are low. Cubans receive government-subsidized education, healthcare, and food subsidies.
At the time of the Cuban Revolution of 1953–1959, during the military dictatorship regime of Fulgencio Batista, Cuba's GDP per capita was ranked 7th among the 47 economies of Latin America. Its income distribution compared favorably with that of other Latin American countries, although "available data must be viewed cautiously and assumed to portray merely a rough approximation of conditions at the time," according to Susan Eckstein. There were nonetheless profound social inequalities between city and countryside and between whites and blacks, and Cuba had trade and unemployment problems. According to the American PBS program American Experience, "[o]n the eve of Fidel Castro's 1959 revolution, Cuba was neither the paradise that would later be conjured by the nostalgic imaginations of Cuba's many exiles nor the hellhole painted by many supporters of the revolution." The socialist revolution was followed by the ongoing United States embargo against Cuba, described by William M. LeoGrande as "the oldest and most comprehensive US economic sanctions regime against any country in the world."
Between 1970 and 1985, Cuba experienced high, sustained rates of growth; according to Claes Brundenius, "Cuba had done remarkably well in terms of satisfying basic needs (especially education and health)" and "was actually following the World Bank recipe from the 1970s: redistribution with growth". During the Cold War, the Cuban economy was heavily dependent on subsidies from the Soviet Union, valued at $65 billion in total from 1960 to 1990 (over three times the entirety of U.S. economic aid to Latin America through the Alliance for Progress), an average of $2.17 billion a year. This accounted for between 10% and 40% of Cuban GDP, depending on the year. While the massive Soviet subsidies enabled Cuba's enormous state budget, they did not lead to a more advanced or sustainable Cuban economy. Described by economists as "a relatively highly developed Latin American export economy" in 1959 and the early 1960s, Cuba's fundamental economic structure changed very little between then and 1990. Tobacco products such as cigars and cigarettes were the only manufactured products among Cuba's leading exports, and even these were produced by a pre-industrial process. The Cuban economy remained inefficient and over-specialized in a few highly subsidized commodities provided by the Eastern Bloc countries. Following the fall of the Soviet Union, Cuba's GDP declined by 33% between 1990 and 1993, partially due to the loss of Soviet subsidies and a crash in sugar prices in the early 1990s. This period of economic stagnation and decline is known as the Special Period. Cuba's economy rebounded in the early 2000s due to a combination of marginal liberalization of the economy and heavy subsidies from the government of Venezuela, which provided Cuba with low-cost oil and other subsidies worth up to 12% of Cuban GDP annually.
History
Before the Revolution
Although Cuba had been among the high-income countries of Latin America since the 1870s, income inequality was high, accompanied by capital outflows to foreign investors. The country's economy had grown rapidly in the early part of the century, fueled by the sale of sugar to the United States.
Before the Cuban Revolution, in 1958, Cuba had a per-capita GDP of $2,363, which placed it in the middle of Latin American countries. According to the UN, between 1950 and 1955, Cuba had a life expectancy of 59.4 years, which placed it in 56th place in the global ranking.
Its proximity to the United States made it a familiar holiday destination for wealthy Americans. Their visits for gambling, horse racing, and golfing made tourism an important economic sector. Tourism magazine Cabaret Quarterly described Havana as "a mistress of pleasure, the lush and opulent goddess of delights". Cuban dictator Fulgencio Batista had plans to line the Malecon, Havana's famous walkway by the water, with hotels and casinos to attract even more tourists.
Cuban Revolution
On 3 March 1959, Fidel Castro seized control of the Cuban Telephone Company, which was a subsidiary of the International Telephone and Telecommunications Corporation. This was the first of many nationalizations made by the new government; the assets seized totaled US$9 billion.
After the 1959 Revolution, citizens were not required to pay a personal income tax (their salaries being regarded as net of any taxes). The government also began to subsidize healthcare and education for all citizens; this action created strong national support for the new revolutionary government.
After the USSR and Cuba reestablished their diplomatic relations in May 1960, the USSR began to buy Cuban sugar in exchange for oil. When oil refineries like Shell, Texaco, and Esso refused to refine Soviet oil, Castro nationalized that industry as well, taking over the refineries on the island. Days later in response, the United States cut the Cuban sugar quota completely; Eisenhower was quoted saying "This action amounts to economic sanctions against Cuba. Now we must look ahead to other economic, diplomatic, and strategic moves." On 7 February 1962, Kennedy expanded the United States embargo to cover almost all U.S. imports.
By the late 1960s, Cuba became dependent on Soviet economic, political, and military aid. It was also around this time that Castro began privately believing that Cuba could bypass the various stages of socialism and progress directly to pure communism. General Secretary Leonid Brezhnev consolidated Cuba's dependence on the USSR when, in 1973, Castro caved to Brezhnev's pressure to become a full member of CEMA.
In 1970, Fidel Castro attempted to motivate the Cuban people to harvest 10 million tons of sugar, in Spanish known as La Zafra, to increase their exports and grow their economy. Despite the help of most of the Cuban population, the country fell short and produced only 7.56 million tons. In July 1970, after the harvest was over, Castro took responsibility for the failure, but later that same year, shifted the blame toward the Sugar Industry Minister saying "Those technocrats, geniuses, super-scientists assured me that they knew what to do to produce the ten million tons. But it was proven, first, that they did not know how to do it and, second, that they exploited the rest of the economy by receiving large amounts of resources ... while there are factories that could have improved with a better distribution of those resources that were allocated to the Ten-Million-Ton plan".
During the Revolutionary period, Cuba was one of the few developing countries to provide foreign aid to other countries. Foreign aid began with the construction of six hospitals in Peru in the early 1970s. It expanded later in the 1970s to the point where some 8000 Cubans worked in overseas assignments. Cubans built housing, roads, airports, schools, and other facilities in Angola, Ethiopia, Laos, Guinea, Tanzania, and other countries. By the end of 1985, 35,000 Cuban workers had helped build projects in some 20 Asian, African, and Latin American countries.
For Nicaragua in 1982, Cuba pledged to provide over $130 million worth of agricultural and machinery equipment and some 4000 technicians, doctors, and teachers.
In 1986, Cuba defaulted on its $10.9 billion debt to the Paris Club. In 1987, Cuba stopped making payments on that debt. In 2002, Cuba defaulted on $750 million in Japanese loans.
Special Period
The Cuban gross domestic product declined at least 35% between 1989 and 1993 due to the loss of 80% of its trading partners and of Soviet subsidies. This loss of subsidies coincided with a collapse in world sugar prices. Sugar prices had done well from 1985 to 1990, then crashed precipitously in 1990 and 1991 and did not recover for five years; Cuba had previously been insulated from world sugar prices by Soviet price guarantees. The Cuban economy began to improve again after trade and diplomatic relations between Cuba and Venezuela improved rapidly following the election of Hugo Chávez in Venezuela in 1998; Chávez's Venezuela became Cuba's most important trading partner and diplomatic ally.
This era was referred to as the "Special Period in Peacetime", later shortened to "Special Period". A Canadian Medical Association Journal paper claimed, "The famine in Cuba during the Special Period was caused by political and economic factors similar to the ones that caused a famine in North Korea in the mid-1990s because both countries were run by authoritarian regimes that denied ordinary people the food to which they were entitled when the public food distribution collapsed and priority was given to the elite classes and the military." Other reports painted an equally dismal picture, describing Cubans having to resort to eating anything they could find, from Havana Zoo animals to domestic cats. Although the collapse of the centrally planned economies of the Soviet Union and other countries of the Eastern bloc subjected Cuba to severe economic difficulties, which led to a drop in daily caloric intake from 3,052 calories in 1989 to 2,600 in 2006, mortality rates were not strongly affected, thanks to the priority given to maintaining a social safety net.
Reforms and recovery
The government undertook several reforms to stem excess liquidity, increase labor incentives, and alleviate serious shortages of food, consumer goods, and services. To alleviate the economic crisis, the government introduced a few market-oriented reforms, including opening to tourism, allowing foreign investment, legalizing the U.S. dollar, and authorizing self-employment for some 150 occupations. (This policy was later partially reversed, so that while the U.S. dollar is no longer accepted in businesses, it remains legal for Cubans to hold the currency.) These measures resulted in modest economic growth. Liberalized agricultural markets, introduced in October 1994, at which state and private farmers sell above-quota production at free-market prices, broadened legal consumption alternatives and reduced black-market prices.
Government efforts to lower subsidies to unprofitable enterprises and to shrink the money supply caused the semi-official exchange rate for the Cuban peso to move from a peak of 120 to the dollar in the summer of 1994 to 21 to the dollar by year-end 1999. The drop in GDP halted in 1994, when Cuba reported 0.7% growth, followed by increases of 2.5% in 1995 and 7.8% in 1996. Growth slowed again in 1997 and 1998, to 2.5% and 1.2% respectively. One key reason for the slowdown was the failure to recognize that sugar production had become uneconomic. Reflecting on the Special Period, Cuban president Fidel Castro later admitted that many mistakes had been made: "The country had many economists, and it is not my intention to criticize them, but I would like to ask why we hadn't discovered earlier that maintaining our levels of sugar production would be impossible. The Soviet Union collapsed, oil cost $40 a barrel, and sugar prices were at basement levels, so why did we not rationalize the industry?" Living conditions in 1999 remained well below the 1989 level.
Thanks to the continued growth of tourism, growth resumed in 1999 with a 6.2% increase in GDP. Growth then picked up further, with GDP growth of 11.8% in 2005 according to government figures. In 2007 the Cuban economy grew by 7.5%, higher than the Latin American average; cumulative GDP growth since 2004 stood at 42.5%.
However, starting in 1996, the government imposed income taxes on self-employed Cubans.
Cuba ranked third in the region in GDP per capita in 1958, surpassed only by Venezuela and Uruguay. By 2007 it had fallen to 9th, 11th, or 12th place in the region, depending on the estimate. Cuban social indicators suffered less.
Every year the United Nations holds a vote asking member states whether the United States' economic embargo against Cuba is justified and whether it should be lifted. 2016 was the first year that the United States abstained from the vote rather than voting no; "since 1992 the US and Israel have constantly voted against the resolution – occasionally supported by the Marshall Islands, Palau, Uzbekistan, Albania and Romania". In its 2020 report to the United Nations, Cuba stated that the total cost to Cuba of the United States embargo has been $144 billion since its inception.
Post-Fidel Castro reforms
In 2011, "[t]he new economic reforms were introduced, effectively creating a new economic system", which the Brookings Institution dubbed the "New Cuban Economy". Since then, over 400,000 Cubans have signed up to become entrepreneurs. the government listed 181 official jobs no longer under their control—such as taxi driver, construction worker and shopkeeper. Workers must purchase licenses to work for some roles, such as a mule driver, palm-tree trimmer, or well digger. Despite these openings, Cuba maintains nationalized companies for the distribution of all essential amenities (water, power, etc.) and other essential services to ensure a healthy population (education, health care).
Around 2000, half the country's sugar mills closed. Before reforms, imports were double exports, doctors earned £15 per month, and families supplemented incomes with extra jobs. After reforms, more than 150,000 farmers could lease land from the government for surplus crop production. Before the reforms, the only real estate transactions involved homeowners swapping properties; reforms legalized the buying and selling of real estate and created a real estate boom in the country. In 2012 a Havana fast-food burger/pizza restaurant, La Pachanga, started in the owner's home; it served 1,000 meals on a Saturday at £3 each. Tourists can now ride factory steam locomotives through closed sugar mills.
In 2008, Raúl Castro's administration hinted that the purchase of computers, DVD players, and microwaves would become legal; however, monthly wages remain less than 20 U.S. dollars. Mobile phones, which had been restricted to Cubans working for foreign companies and government officials, were legalized in 2008.
In 2010 Fidel Castro, in agreement with Raúl Castro's reformist sentiment, admitted that the Cuban model, based on the old Soviet centralized planning model, was no longer sustainable. The brothers encouraged the development of a cooperative variant of socialism, in which the state plays a less active role in the economy, and the formation of worker-owned co-operatives and self-employment enterprises.
To remedy Cuba's economic structural distortions and inefficiencies, the Sixth Congress approved an expansion of the internal market and access to global markets on 18 April 2011. A comprehensive list of changes is:
expenditure adjustments (education, healthcare, sports, culture)
change in the structure of employment; reducing inflated payrolls and increasing work in the non-state sector
legalizing 201 different personal business licenses
fallow state land in usufruct leased to residents
incentives for non-state employment, as a re-launch of self-employment
proposals for the formation of non-agricultural cooperatives
legalization of the sale and private ownership of homes and cars
greater autonomy for state firms
search for food self-sufficiency, the gradual elimination of universal rationing and change to targeting the poorest population
possibility to rent state-run enterprises (including state restaurants) to self-employed persons
separation of state and business functions
tax-policy update
easier travel for Cubans
strategies for external debt restructuring
On 20 December 2011, a new credit policy allowed Cuban banks to finance entrepreneurs, farmers, and individuals wishing to make major purchases or home improvements. "Cuban banks have long provided loans to farm cooperatives, they have offered credit to new recipients of farmland in usufruct since 2008, and in 2011 they began making loans to individuals for business and other purposes".
The system of rationed food distribution in Cuba was known as the Libreta de Abastecimiento ("Supplies booklet"). Ration books at bodegas still provided rice, oil, sugar, and matches, supplementing the average government wage of £15 per month.
Raul Castro signed Law 313 in September 2013 to set up a special economic zone, the first in the country, in the port city of Mariel.
On 22 October 2013, the government announced its intention to end the dual-currency system. The convertible peso (CUC) was no longer issued from 1 January 2021 and ceased circulation on 30 December 2021.
The achievements of the radical social policy of socialist Cuba, which enabled social advancement for the formerly underprivileged classes, were curbed by the economic crisis and the low wages of recent decades. The socialist leadership is reluctant to tackle this problem because it touches a core aspect of its revolutionary legitimacy. As a result, Cuba's National Bureau of Statistics (ONE) publishes little data on the growing socio-economic divide. A nationwide scientific survey shows that social inequalities have become increasingly visible in everyday life and that the Afro-Cuban population is structurally disadvantaged. The report notes that while 58 percent of white Cubans have incomes of less than $3,000 a year, that proportion reaches 95 percent among Afro-Cubans. Afro-Cubans, moreover, receive a very limited portion of family remittances from the Cuban-American community in South Florida, which is mostly white. Remittances from family members abroad often serve as start-up capital for the emerging private sector. The most lucrative branches of business, such as restaurants and lodgings, are run mainly by white Cubans.
In February 2019, Cuban voters approved a new constitution granting the right to private property and greater access to free markets while also maintaining Cuba's status as a socialist state. In June 2019, the 16th ExpoCaribe trade fair took place in Santiago. Since 2014, the Cuban economy has seen a dramatic uptick in foreign investment. In November 2019, Cuba's state newspaper, Granma, published an article acknowledging that despite the deterioration in relations between the U.S. and Cuban governments, the Cuban government continued to make efforts to attract foreign investment in 2018. In December 2018, 525 foreign direct investment projects were reported in Cuba, a dramatic increase from the 246 projects reported in 2014.
In February 2021, the Cuban Cabinet authorized private initiatives in more than 1800 occupations.
The Cuban economy was negatively affected by the COVID-19 pandemic, as well as by additional sanctions from the United States imposed by the Trump administration. In 2020, the country's economy declined by 11%, the country's worst decline in nearly 30 years. Cubans have faced shortages of basic goods as a result.
International debt negotiations
Raúl Castro's government began a concerted effort to restructure, and to ask forgiveness of, loans and debts owed to creditor countries, many of them in the billions of dollars, long in arrears, and incurred under Fidel Castro in the 1970s and 1980s.
In 2011, China forgave $6 billion in debt owed to it by Cuba.
In 2013, Mexico's Finance Minister Luis Videgaray announced that a loan issued to Cuba by Mexico's foreign trade development bank, Bancomext, more than 15 years earlier was worth $487 million. The governments agreed to "waive" 70% of it, approximately $340.9 million; Cuba would repay the remaining $146.1 million over ten years.
In 2014, before making a diplomatic visit to Cuba, Russian President Vladimir Putin forgave over 90% of the debt owed to Russia by Cuba. The forgiveness totaled $32 billion. A remaining $3.2 billion would be paid over ten years.
In 2015, Cuba entered into negotiations over its $11.1 billion debt to 14 members of the Paris Club. In December 2015, the parties announced an agreement: Paris Club nations agreed to forgive $8.5 billion of the $11.1 billion total debt, mostly by waiving interest, service charges, and penalties accrued over more than two decades of non-payment. The 14 countries party to the agreement were Austria, Australia, Belgium, Canada, Denmark, Finland, France, Italy, Japan, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom. The remaining $2.6 billion would be paid over 18 years, with annual payments due by 31 October of every year. The payments would phase in gradually, increasing from an initial 1.6 percent of the total owed to a final payment of 8.9 percent in 2033. Interest would be forgiven from 2015 to 2020 and thereafter would be just 1.5 percent of the total debt still due. The agreement contained a penalty clause: should Cuba again fail to make payments on schedule (by 31 October of any year), it would be charged 9 percent interest until payment, plus late interest on the portion in arrears. The regime viewed the agreement as a favorable way to resolve the long-standing issue, build business confidence, increase foreign direct investment, and take a preliminary step toward gaining access to credit lines in Europe.
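The arithmetic of the phased schedule can be illustrated with a short calculation. The following Python sketch is illustrative only: it assumes the 1.6 percent and 8.9 percent payment shares apply to the $2.6 billion that remained due after forgiveness, as the text implies, and it applies the 9 percent penalty rate to the $30 million arrear reported for 2019 later in this section; the agreement's actual year-by-year schedule is not reproduced here.

# Illustrative arithmetic for the 2015 Paris Club agreement described above.
# Assumptions: the 1.6% and 8.9% payment shares are shares of the $2.6 billion
# still owed after forgiveness; 9% is the penalty rate on late payments.
total_remaining = 2.6e9          # USD still owed after $8.5 billion was forgiven
first_share, last_share = 0.016, 0.089
late_interest_rate = 0.09

first_payment = total_remaining * first_share    # about $41.6 million
last_payment = total_remaining * last_share      # about $231.4 million, due in 2033

# Hypothetical example: one year of penalty interest on the $30 million
# left unpaid for 2019.
arrears_2019 = 30e6
penalty = arrears_2019 * late_interest_rate      # about $2.7 million

print(f"First scheduled payment: ${first_payment / 1e6:.1f} million")
print(f"Final 2033 payment:      ${last_payment / 1e6:.1f} million")
print(f"One year of 9% penalty on the 2019 arrears: ${penalty / 1e6:.1f} million")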
In 2018, during a diplomatic visit to Cuba, the General Secretary of the Communist Party of Vietnam Nguyễn Phú Trọng wrote off Cuba's official debt to Vietnam. The forgiveness totaled $143.7 million.
In 2019, Cuba once again defaulted on its Paris Club debt. Of the estimated payment due in 2019 of $80 million, Cuba made only a partial payment that left $30 million owed for that year. Cuban Deputy Prime Minister Ricardo Cabrisas wrote a letter to Odile Renaud-Basso, president of the Paris Club, noting that Cuba was aware that "circumstances dictated that we were not able to honour our commitments with certain creditor countries as agreed in the multilateral Minute signed by the parties in December 2015". He maintained that they had "the intention of settling" the payments in arrears by 31 May 2020.
In May 2020, with payments still not made, Deputy PM Cabrisas sent a letter to the fourteen Paris Club countries in the agreement requesting "a moratorium (of payments) for 2019, 2020 and 2021 and a return to paying in 2022".
Sectors
Energy production
As of 2011, 96% of electricity was produced from fossil fuels. Solar panels were introduced in some rural areas to reduce blackouts, brownouts, and the use of kerosene. Citizens were encouraged to replace inefficient lamps with newer models to reduce consumption. A power tariff reduced inefficient use.
As of August 2012, off-shore petroleum exploration of promising formations in the Gulf of Mexico had been unproductive, with two failures reported. Additional exploration is planned.
In 2007, Cuba produced an estimated 16.89 billion kWh of electricity and consumed 13.93 billion kWh, with no exports or imports. According to a 1998 estimate, 89.52% of its electricity was generated from fossil fuels, 0.65% from hydroelectric plants, and 9.83% from other sources. In both 2007 and 2008 estimates, the country produced 62,100 bbl/d of oil and consumed 176,000 bbl/d, with 104,800 bbl/d of imports and 197,300,000 bbl of proved oil reserves. Venezuela is Cuba's primary source of oil.
In 2017, Cuba produced and consumed an estimated 1,189 million m3 of natural gas, with 70.79 billion m3 of proved reserves; the nation did not export or import any natural gas.
Energy sector
The Energy Revolution is a program launched by Cuba in 2006. The program focused on improving the country's socioeconomic conditions and transitioning Cuba into an energy-efficient economy with diverse energy resources. Cuba's energy sector lacks the resources to produce optimal amounts of power. One of the issues the Energy Revolution program faces is that Cuba's power production suffers from a lack of investment and from the ongoing trade sanctions imposed by the United States. The energy sector has nevertheless received a multimillion-dollar investment distributed among a network of power resources. Even so, customers experience rolling blackouts imposed by energy companies to conserve electricity during Cuba's economic crisis. An outdated electricity grid, damaged by hurricanes, caused the energy crisis of 2004 and continued to be a major issue during the Energy Revolution. Cuba responded by providing a variety of energy resources: 6,000 small diesel generators, 416 fuel-oil generators, 893 diesel generators, 9.4 million incandescent bulbs exchanged for energy-saving lamps, 1.33 million fans, 5.5 million electric pressure cookers, 3.4 million electric rice cookers, 0.2 million electric water pumps, 2.04 million domestic refrigerators, and 0.1 million televisions were distributed among the territories. By 2009, the electrical grid had been restored to only 90% of capacity. Alternative energy has become a major priority, as the government has promoted wind and solar power. The crucial challenge the Energy Revolution program faces is developing sustainable energy in a country that is still developing, is subject to economic sanctions, and suffers the detrimental effects of hurricanes.
Agriculture
Cuba produces sugarcane, tobacco, citrus, coffee, rice, potatoes, beans, and livestock. As of 2015, Cuba imported about 70–80% of its food and 80–84% of the food it rations to the public. Raúl Castro ridiculed the bureaucracy that shackled the agriculture sector.
Industry
Industrial production accounted for almost 37% of Cuban GDP or US$6.9 billion and employed 24% of the population, or 2,671,000 people, in 1996. A rally in sugar prices in 2009 stimulated investment and development of sugar processing.
By 2003, Cuba's biotechnology and pharmaceutical industry was gaining in importance. Among the products sold internationally are vaccines against various viral and bacterial pathogens. For example, the drug Heberprot-P was developed as a treatment for diabetic foot ulcers and has had success in many developing countries. Cuba has also done pioneering work on the development of drugs for cancer treatment.
Scientists such as V. Verez-Bencomo were awarded international prizes for their biotechnology and sugar cane contributions.
Biotechnology
Cuba's biotechnology sector developed in response to the limitations on technology transfer, international financing, and international trade resulting from the United States embargo. The Cuban biotechnology sector is entirely state-owned.
Services
Tourism
In the mid-1990s, tourism surpassed sugar, the mainstay of the Cuban economy, as the primary source of foreign exchange. Havana devotes significant resources to building tourist facilities and renovating historic structures. Cuban officials estimate roughly 1.6 million tourists visited Cuba in 1999, yielding about $1.9 billion in gross revenues. In 2000, 1,773,986 foreign visitors arrived in Cuba. Revenue from tourism reached US$1.7 billion. By 2012, some 3 million visitors brought nearly £2 billion yearly.
The growth of tourism has had social and economic repercussions. It led to speculation about the emergence of a two-tier economy and the fostering of a state of "tourist apartheid". This situation was exacerbated by the influx of dollars during the 1990s, potentially creating a dual economy based on the dollar (the currency of tourists) on the one hand and the peso on the other. Scarce imported goods – and even some local manufactures, such as rum and coffee – could be had at dollar-only stores but were hard to find or unavailable at peso prices. As a result, Cubans who earned only in the peso economy, outside the tourist sector, were at a disadvantage. Those with dollar incomes based upon the service industry began to live more comfortably. This widened the gap between Cubans' material living standards, conflicting with the Cuban government's long-term socialist policies.
Retail
Cuba has a small retail sector. A few large shopping centers operated in Havana as of September 2012 but charged US prices. Pre-Revolutionary commercial districts were largely shut down. Most stores are small dollar stores, bodegas, agro-mercados (farmers' markets), and street stands.
Finance
The financial sector remains heavily regulated, and access to credit for entrepreneurial activity is seriously impeded by the shallowness of the financial market.
Foreign investment and trade
The Netherlands receives the largest share of Cuban exports (24%), 70 to 80% of which go through Indiana Finance BV, a company owned by the Van 't Wout family, who have close personal ties with Fidel Castro. This trend can be seen in other colonial Caribbean communities with direct political ties with the global economy. Cuba's primary import partner is Venezuela. The second-largest trade partner is Canada, with a 22% share of the Cuban export market.
Cuba began courting foreign investment in the Special Period. Foreign investors must form joint ventures with the Cuban government. The sole exception to this rule is Venezuelans, who can hold 100% ownership in businesses due to an agreement between Cuba and Venezuela. Cuban officials said in early 1998 that 332 joint ventures had begun. Many of these are loans or contracts for management, supplies, or services normally not considered equity investments in Western economies. Investors are constrained by the U.S.-Cuban Liberty and Democratic Solidarity Act that provides sanctions for those who traffic in property expropriated from U.S. citizens.
Cuba's average tariff rate is 10 percent. As of 2014, the country's planned economy deterred foreign trade and investment. At this point, the state maintained strict capital and exchange controls. In 2017, however, the country reported a record 2 billion in foreign investment. It was also reported that foreign investment in Cuba had increased dramatically since 2014. In September 2019, EU foreign policy chief Federica Mogherini stated during a three-day visit to Cuba that the European Union is committed to helping Cuba develop its economy
Currencies
From 1994 until 2021, Cuba had two official currencies: the national peso (or CUP) and the convertible peso (or CUC, often called "dollar" in the spoken language). In January 2021, however, a long-awaited process of currency unification began, with Cuban citizens being given six months to exchange their remaining CUCs at a rate of one to every 24 CUPs.
In 1994 the possession and use of US dollars were legalized, and by 2004 the US dollar was in widespread use in the country. To capture the hard currency flowing into the island through tourism and remittances – estimated at $500–800 million annually – the government set up state-run "dollar stores" throughout Cuba that sold "luxury" food, household, and clothing items, compared with necessities, which could be bought using national pesos. As such, the standard of living diverged between those with access to dollars and those without. Jobs that could earn dollar salaries or tips from foreign businesses and tourists became highly desirable. Meeting doctors, engineers, scientists, and other professionals working in restaurants or as taxicab drivers was common.
However, in response to stricter economic sanctions by the US and because the authorities were pleased with Cuba's economic recovery, the Cuban government decided in October 2004 to remove US dollars from circulation. In its place, the convertible peso was created, which, although not internationally traded, had a value pegged to the US dollar 1:1. A 10% surcharge was levied for cash conversions from US dollars to the convertible peso, which did not apply to other currencies, thus acting as an encouragement for tourists to bring currencies such as euros, pounds sterling or Canadian dollars into Cuba. An increasing number of tourist zones accept Euros.
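As a rough illustration of the conversion rules just described, the following Python sketch shows why exchanging US dollar cash was less favorable than exchanging other currencies under the 1:1 peg. It is illustrative only; the function names and the flat 10% treatment of the surcharge are assumptions made for clarity.

# Illustrative sketch of the cash-conversion rules described above.
# Assumptions: CUC pegged 1:1 to the US dollar; a flat 10% surcharge on
# US dollar cash conversions only; other currencies converted at their
# dollar value with no surcharge.
def usd_cash_to_cuc(usd: float, surcharge: float = 0.10) -> float:
    """Convert US dollar cash to CUC after the 10% surcharge."""
    return usd * (1 - surcharge)

def other_cash_to_cuc(usd_equivalent_value: float) -> float:
    """Convert non-USD cash (given its US dollar value) to CUC, no surcharge."""
    return usd_equivalent_value

print(usd_cash_to_cuc(100.0))    # 90.0 CUC received for 100 US dollars in cash
print(other_cash_to_cuc(100.0))  # 100.0 CUC received for the euro equivalent of 100 dollars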
Private businesses
Owners of small private restaurants (paladares) originally could seat no more than 12 people and could employ only family members. Set monthly fees must be paid regardless of income earned, and frequent inspections yield stiff fines when any of the many self-employment regulations are violated.
As of 2012, more than 150,000 farmers had signed up to lease land from the government for growing surplus crops. Previously, homeowners had only been allowed to swap properties; once buying and selling were allowed, prices rose.
In cities, "urban agriculture" farms small parcels. Growing organopónicos (organic gardens) in the private sector has been attractive to city-dwelling small producers who sell their products where they produce them, avoiding taxes and enjoying a measure of government help from the Ministry of Agriculture (MINAGRI) in the form of seed houses and advisers.
Wages, development, and pensions
Until June 2019, typical wages ranged from 400 non-convertible Cuban pesos a month, for a factory worker, to 700 per month for a doctor, or around 17–30 US dollars per month. However, the Human Development Index of Cuba still ranks much higher than that of the vast majority of Latin American nations. After Cuba lost Soviet subsidies in 1991, malnutrition resulted in outbreaks of disease. Despite this, the poverty level reported by the government is one of the lowest in the developing world, ranking 6th out of 108 countries, 4th in Latin America and 48th among all countries. According to a 2022 report from the Cuban Human Rights Observatory (OCDH), 72 percent of Cubans live below the poverty line. Of those who live below the poverty line, 21 percent frequently go without breakfast, lunch, or dinner due to a lack of money. Pensions are among the smallest in the Americas, at $9.50/month. In 2009, Raúl Castro increased minimum pensions by 2 dollars, which he said was to recompense those who have "dedicated a great part of their lives to working ... and who remain firm in defense of socialism".
Cuba is known for its system of food distribution, the Libreta de Abastecimiento ("Supplies booklet"). The system establishes the rations each person can buy through it and the frequency of supplies. Despite rumors that it would be ended, the system still exists.
In June 2019, the government announced an increase in public sector wages, especially for teachers and health personnel. The increase was about 300%. In October, the government opened stores where citizens could purchase household supplies and similar goods using international currencies (US dollars, euros, etc.) stored on electronic cards. These funds are provided by remittances from émigrés. Government leaders recognized that the new measure was unpopular but necessary to contain the flight of capital to other countries, such as Panama, where Cuban citizens traveled to import items for resale on the island.
On 1 January 2021, the government launched the "Tarea Ordenamiento" (Ordering Task), previously announced on national television by President Miguel Díaz-Canel and Gen. Raúl Castro, the then-first secretary of the Cuban Communist Party. This was an effort, years in the making, to end the use of the Cuban convertible peso (CUC) and rely solely on the Cuban peso (CUP), ostensibly to increase economic efficiency. In February, the government created new restrictions on the private sector, with prohibitions on 124 activities in areas such as national security, health, and educational services. Wages and pensions were increased again, by between 4 and 9 times, across all sectors; for example, a university instructor's salary went from 1,500 to 5,500 CUP. Additionally, the Cuban central bank maintained the official dollar exchange rate at 24 CUP but was unable to sell dollars to the population due to the drought of foreign currency created by the COVID-19 pandemic.
Public facilities
Bodegas: local shops offering basic products such as rice, sugar, salt, beans, cooking oil, matches, and rum at low prices.
El Coppelia: a government-owned facility offering ice cream, juice, and sweets.
Paladar: a small, privately owned restaurant facility.
La farmacia: low-priced medicine, with the lowest costs anywhere in the world.
ETECSA: the national telephone service provider.
La feria: a weekly market (Sunday-market type) owned by the government.
Cervecería Bucanero: a beverage manufacturer providing both alcoholic and non-alcoholic beverages.
Ciego Montero: the main soft-drink and beverage distributor.
Connection with Venezuela
Cuba and Venezuela have agreements under which Venezuela provides cheap oil in exchange for the assistance of Cuban doctors in the Venezuelan health care system. As of 2015, Cuba had the third-highest number of physicians per capita worldwide (behind Monaco and Qatar). The country sends tens of thousands of doctors to other countries as aid and to obtain favorable trade terms. According to Carmelo Mesa-Lago, a Cuban-born US economist, in nominal terms the Venezuelan subsidy is higher than the subsidy which the Soviet Union gave to Cuba, with the Cuban state receiving cheap oil and the Cuban economy receiving around $6 billion annually. In 2013 Carmelo Mesa-Lago said, "If this help stops, industry is paralysed, transportation is paralysed and you'll see the effects in everything from electricity to sugar mills".
From an economic standpoint, Cuba relies much more on Venezuela than Venezuela does on Cuba. As of 2012, Venezuela accounted for 20.8% of Cuba's GDP, while Cuba accounted for only roughly 4% of Venezuela's. Because of this reliance, the economic crisis in Venezuela, with inflation nearing 800% and GDP shrinking by 19% in 2016, has meant that Cuba no longer receives its expected payments and heavily subsidized oil. Further budget cuts were planned for 2018, marking a third straight year of cuts.
Economic freedom
In 2021, Cuba's economic freedom score from the free-market-oriented Heritage Foundation was 28.1, ranking Cuba's economy 176th (among the "least free") on such measures as trade freedom, fiscal freedom, monetary freedom, and business freedom. Cuba ranked 31st among the 32 countries of South and Central America, with the Heritage Foundation rating Venezuela, which it described as a "client state" of Cuba, as one of the least free.
In February 2021, the government said that it would allow the private sector to operate in most sectors of the economy, with only 124 activities remaining in the public sector, such as national security, health, and educational services. In August 2021, the Cuban government started allowing citizens to create small and medium-sized private companies, which are allowed to employ up to 100 people. As of 2023, 8,000 companies have been registered in Cuba.
Taxes and revenues
As of 2009, Cuba had $47.08 billion in revenues and $50.34 billion in expenditures, with 34.6% of GDP in public debt, an account balance of $513 million, and $4.647 billion in reserves of foreign exchange and gold. Government spending is around 67 percent of GDP, and public debt is around 35 percent of the domestic economy. Despite reforms, the government plays a large role in the economy.
The top individual income tax rate is 50 percent. The top corporate tax rate is 30 percent (35 percent for wholly foreign-owned companies). Other taxes include a tax on property transfers and a sales tax. The overall tax burden is 24.42 percent of GDP.
See also
References
Citations
Sources
External links
Cuba's Economic Struggles from the Dean Peter Krogh Foreign Affairs Digital Archives
The Road not taken: Pre-Revolutionary Cuban Living Standards in Comparative Perspective, Marianne Ward (Loyola College) and John Devereux (Queens College CUNY)
Archibold, Randal. Inequality Becomes More Visible in Cuba as the Economy Shifts (February 2015), The New York Times
Cave, Damien. "Raúl Castro Thanks U.S., but Reaffirms Communist Rule in Cuba". (December 2014), The New York Times. "Mr. Castro prioritized economics. He acknowledged that Cuban state workers needed better salaries and said Cuba would accelerate economic changes in the coming year, including an end to its dual-currency system. But he said the changes needed to be gradual to create a system of 'prosperous and sustainable communism.'"
Centro de Estudios de la Economía Cubana
Cuba |
5592 | https://en.wikipedia.org/wiki/Foreign%20relations%20of%20Cuba | Foreign relations of Cuba | Cuba's foreign policy has been fluid throughout history, depending on world events and other variables, including relations with the United States. Without massive Soviet subsidies and its primary trading partner, Cuba became increasingly isolated in the late 1980s and early 1990s after the fall of the USSR and the end of the Cold War. Cuba began opening up to the rest of the world again in the late 1990s, entering into bilateral co-operation with several South American countries, most notably Venezuela and Bolivia, especially after the election of Hugo Chávez in Venezuela in 1999; Chávez became a staunch ally of Castro's Cuba. The United States maintained a policy of isolating Cuba until December 2014, when Barack Obama announced a new policy of diplomatic and economic engagement. The European Union accuses Cuba of "continuing flagrant violation of human rights and fundamental freedoms". Cuba has developed a growing relationship with the People's Republic of China and Russia. Cuba provided civilian assistance workers – principally medical – to more than 20 countries. More than one million exiles have fled to foreign countries. Cuba's present foreign minister is Bruno Rodríguez Parrilla.
Cuba is currently a lead country on the United Nations Human Rights Council, and is a founding member of the organization known as the Bolivarian Alternative for the Americas, a member of the Community of Latin American and Caribbean States, the Latin American Integration Association, and the United Nations. Cuba is a member of the Non-Aligned Movement and hosted its September 2006 summit. In addition, as a member of the Association of Caribbean States (ACS), Cuba was re-appointed as the chair of the special committee on transportation issues for the Caribbean region. Following a meeting in November 2004, several South American leaders attempted to make Cuba either a full or associate member of the South American trade bloc known as Mercosur.
History
1917
In 1917, Cuba entered World War I on the side of the Allies.
The Cold War
Following the establishment of diplomatic ties to the Soviet Union, and after the Cuban Missile Crisis, Cuba became increasingly dependent on Soviet markets and military and economic aid. Castro was able to build a formidable military force with the help of Soviet equipment and military advisors. The KGB kept in close touch with Havana, and Castro tightened Communist Party control over all levels of government, the media, and the educational system, while developing a Soviet-style internal police force.
Castro's alliance with the Soviet Union caused something of a split between him and Guevara. In 1966, Guevara left for Bolivia in an ill-fated attempt to stir up revolution against the country's government.
On August 23, 1968, Castro made a public gesture to the USSR that caused the Soviet leadership to reaffirm their support for him. Two days after the Warsaw Pact invasion of Czechoslovakia to repress the Prague Spring, Castro took to the airwaves and publicly denounced the Czech rebellion. Castro warned the Cuban people about the Czechoslovak 'counterrevolutionaries', who "were moving Czechoslovakia towards capitalism and into the arms of imperialists". He called the leaders of the rebellion "the agents of West Germany and fascist reactionary rabble."
Relations in Latin America during the Cold War
During the Cold War, Cuba's influence in the Americas was inhibited by the Monroe Doctrine and the dominance of the United States. Despite this Fidel Castro became an influential figurehead for leftist groups in the region, extending support to Marxist Revolutionary movements throughout Latin America, most notably aiding the Sandinistas in overthrowing Somoza in Nicaragua in 1979. In 1971, Fidel Castro took a month-long visit to Chile. The visit, in which Castro participated actively in the internal politics of the country, holding massive rallies and giving public advice to Salvador Allende, was seen by those on the political right as proof to support their view that "The Chilean Way to Socialism" was an effort to put Chile on the same path as Cuba.
Intervention in Cold War conflicts
During the Cold War, Africa was a major target of Cuba's influence. Fidel Castro stated that Africa was chosen in part to represent Cuban solidarity with its own large population of African descent. Exporting Cuba's revolutionary tactics abroad increased its worldwide influence and reputation. Wolf Grabendorff states that "Most African states view Cuban intervention in Africa as help in achieving independence through self-help rather than as a step toward the type of dependence which would result from a similar commitment by the super-powers." Cuban Soldiers were sent to fight in the Simba rebellion in the DRC during the 1960s. Furthermore, by providing military aid Cuba won trading partners for the Soviet bloc and potential converts to Marxism.
Starting in the 1970s, Cuba intervened in 17 African nations, including in three insurgencies. Cuba expanded military programs to Africa and the Middle East, sending military missions to Sierra Leone in 1972, South Yemen in 1973, Equatorial Guinea in 1973, and Somalia in 1974. It sent combat troops to Syria in 1973 to fight against Israel. Cuba was following the general Soviet policy of détente with the West, and secret discussions were opened with the United States about peaceful coexistence. They ended abruptly when Cuba sent combat troops to fight in Angola in 1975.
Intervention in Africa
On November 4, 1975, Castro ordered the deployment of Cuban troops to Angola to aid the Marxist MPLA against UNITA, which was supported by the People's Republic of China, the United States, Israel, and South Africa (see: Cuba in Angola). After the Cubans had fought on their own for two months, Moscow aided the Cuban mission, with the USSR engaging in a massive airlift of Cuban forces into Angola. Both Cuban and South African forces withdrew in the late 1980s, and Namibia was granted independence. The Angolan civil war lasted until 2002. Nelson Mandela is said to have remarked, "Cuban internationalists have done so much for African independence, freedom, and justice." Cuban troops were also sent to Marxist Ethiopia to assist Mengistu Haile Mariam's government in the Ogaden War with Somalia in 1977. Cuba sent troops, along with the Soviet Union, to aid the FRELIMO government in Mozambique against the Rhodesian- and South African-backed RENAMO.
Castro never disclosed the number of casualties in Soviet African wars, but one estimate is that 14,000 Cubans were killed in Cuban military actions abroad.
Intervention in Latin America
In addition, Castro extended support to Marxist Revolutionary movements throughout Latin America, such as aiding the Sandinistas in overthrowing the Somoza government in Nicaragua in 1979.
Leadership of non-aligned movement
In the 1970s, Fidel Castro made a major effort to assume a leadership role in the Non-Aligned Movement, which included over 90 countries. Cuba's intervention in Angola, its other military advisory missions, and its economic and social programs were praised by fellow non-aligned members. The 1976 world conference of the Non-Aligned Movement applauded Cuban internationalism, stating that it "assisted the people of Angola in frustrating the expansionist and colonialist strategy of South Africa's racist regime and its allies." The next non-aligned conference was held in Havana in 1979 and chaired by Castro, who became the de facto spokesman for the Movement. The conference in September 1979 marked the peak of Cuban global influence. The non-aligned nations had believed that Cuba was not aligned with the Soviet Union in the Cold War. However, in December 1979, the Soviet Union invaded Afghanistan, an active member of the Non-Aligned Movement. At the United Nations, non-aligned members voted 56 to 9, with 26 abstaining, to condemn the Soviet invasion. Cuba, however, was deeply in debt, financially and politically, to Moscow, and voted against the resolution. It lost its reputation as non-aligned in the Cold War. Castro, instead of becoming a spokesman for the Movement, became inactive, and in 1983 leadership passed to India, which had abstained on the UN vote. Cuba lost its bid to become a member of the United Nations Security Council. Cuba's ambitions for a role in global leadership had ended.
Social and economic programs
Cuba had social and economic programs in 40 developing countries, made possible by a growing Cuban economy in the 1970s. The largest programs were construction projects, in which 8,000 Cubans provided technical advice, planning, and training of engineers. Educational programs involved 3,500 teachers. In addition, thousands of specialists, technicians, and engineers were sent as advisors to the agricultural, mining, and transportation sectors around the globe. Cuba also hosted 10,000 foreign students, mostly from Africa and Latin America, in health programs and technical schools. Cuba's extensive program of medical support attracted international attention. A 2007 study reported:
Since the early 1960s, 28,422 Cuban health workers have worked in 37 Latin American countries, 31,181 in 33 African countries, and 7,986 in 24 Asian countries. Throughout a period of four decades, Cuba sent 67,000 health workers to structural cooperation programs, usually for at least two years, in 94 countries ... an average of 3,350 health workers working abroad every year between 1960 and 2000.
Post–Cold War relations
In the post–Cold War environment Cuban support for guerrilla warfare in Latin America has largely subsided, though the Cuban government continued to provide political assistance and support for left leaning groups and parties in the developing Western Hemisphere.
When Soviet leader Mikhail Gorbachev visited Cuba in 1989, the ideological relationship between Havana and Moscow was strained by Gorbachev's implementation of economic and political reforms in the USSR. "We are witnessing sad things in other socialist countries, very sad things", lamented Castro in November 1989, in reference to the changes that were sweeping such communist allies as the Soviet Union, East Germany, Hungary, and Poland. The subsequent dissolution of the Soviet Union in 1991 had an immediate and devastating effect on Cuba.
Cuba today works with a growing bloc of Latin American politicians opposed to the "Washington consensus", the American-led doctrine that free trade, open markets, and privatization will lift poor third world countries out of economic stagnation. The Cuban government condemned neoliberalism as a destructive force in the developing world, creating an alliance with Presidents Hugo Chávez of Venezuela and Evo Morales of Bolivia in opposing such policies.
Currently, Cuba has diplomatically friendly relationships with Presidents Nicolás Maduro of Venezuela and Daniel Ortega of Nicaragua, with Maduro as perhaps the country's staunchest ally in the post-Soviet era. Cuba has sent thousands of teachers and medical personnel to Venezuela to assist Maduro's socialist oriented economic programs. Maduro, in turn provides Cuba with lower priced petroleum. Cuba's debt for oil to Venezuela is believed to be on the order of one billion US dollars.
In the wake of the Russian invasion of Ukraine and the ongoing international isolation of Russia, Cuba emerged as one of the few countries that maintained friendly relations with the Kremlin. Cuban president Miguel Díaz-Canel visited Vladimir Putin in Moscow in November 2022, where the two leaders unveiled a monument to Fidel Castro and spoke out against U.S. sanctions against Russia and Cuba.
Diplomatic relations
List of countries with which Cuba maintains diplomatic relations:
Bilateral relations
Africa
Americas
Cuba has supported a number of leftist groups and parties in Latin America and the Caribbean since the 1959 revolution. In the 1960s Cuba established close ties with the emerging Guatemalan social movement led by Luis Augusto Turcios Lima, and supported the establishment of the URNG, a militant organization that has evolved into one of Guatemala's current political parties. In the 1980s Cuba backed both the Sandinistas in Nicaragua and the FMLN in El Salvador, providing military and intelligence training, weapons, guidance, and organizational support.
Asia
Europe
Oceania
Cuba has two embassies in Oceania: one in Wellington, opened in November 2007, and one in Canberra, opened on October 24, 2008. It also has a Consulate General in Sydney. Cuba has had official diplomatic relations with Nauru since 2002 and the Solomon Islands since 2003, and maintains relations with other Pacific countries by providing aid.
In 2008, Cuba was reported to be sending doctors to the Solomon Islands, Vanuatu, Tuvalu, Nauru, and Papua New Guinea, while seventeen medical students from Vanuatu would study in Cuba. It might also provide training for Fijian doctors. Indeed, Fiji's ambassador to the United Nations, Berenado Vunibobo, stated that his country might seek closer relations with Cuba, and in particular medical assistance, following a decline in Fiji's relations with New Zealand.
International organizations and groups
ACS • ALBA • AOSIS • CELAC • CTO • ECLAC • G33 • G77 • IAEA • ICAO • ICRM • IFAD • ILO • IMO • Interpol • IOC • ISO • ITU • LAES • NAM • OAS • OEI • OPANAL • OPCW • PAHO • Rio Group • UN • UNCTAD • UNESCO • UPU • WCO • WHO • WIPO • WMO
Caribbean Community (CARICOM)
Ties between the nations of the Caribbean Community (CARICOM) and Cuba remained cordial over the course of the latter half of the 20th century. Formal diplomatic relations between Cuba and the CARICOM economic giants (Barbados, Jamaica, Guyana, and Trinidad and Tobago) have existed since 1972 and have over time led to an increase in cooperation between the CARICOM Heads of Government and Cuba. At a summit meeting of sixteen Caribbean countries in 1998, Fidel Castro called for regional unity, saying that only strengthened cooperation between Caribbean countries would prevent their domination by rich nations in a global economy. Cuba, for many years regionally isolated, increased grants and scholarships to the Caribbean countries.
To celebrate ties between the Caribbean Community and Cuba, in 2002 the Heads of Government of Cuba and CARICOM designated December 8 as 'CARICOM-Cuba Day'. The day marks the exact date of the formal opening of diplomatic relations between the first four CARICOM states and Cuba.
In December 2005, during the second CARICOM/Cuba summit held in Barbados, heads of CARICOM and Cuba agreed to deepen their ties in the areas of socio-economic and political cooperation in addition to medical care assistance. Since the meeting, Cuba has opened four additional embassies in the Caribbean Community, in Antigua and Barbuda, Dominica, Suriname, and Saint Vincent and the Grenadines. This development makes Cuba the only nation to have embassies in all independent countries of the Caribbean Community. CARICOM and Canadian politicians have jointly maintained that through the international inclusion of Cuba, a more positive political change might be brought about there, as has been witnessed in the People's Republic of China.
Cuban cooperation with the Caribbean was extended by a joint health programme between Cuba and Venezuela named Operación Milagro, set up in 2004. The initiative is part of the Sandino commitment, which sees both countries coming together with the aim of offering free ophthalmology operations to an estimated 4.5 million people in Latin America and the Caribbean over a ten-year period. According to Denzil Douglas, the prime minister of St. Kitts and Nevis, more than 1,300 students from member nations are studying in Cuba while more than 1,000 Cuban doctors, nurses and other technicians are working throughout the region. In 1998 Trinidadian and Tobagonian Prime Minister Patrick Manning had a heart valve replacement surgery in Cuba and returned in 2004 to have a pacemaker implanted.
In December 2008 the CARICOM Heads of Government opened the third Cuba-CARICOM Summit in Cuba. The summit was intended to look at closer integration of the Caribbean Community and Cuba. During the summit the Caribbean Community (CARICOM) bestowed on Fidel Castro its highest honour, The Honorary Order of the Caribbean Community, which is presented in exceptional circumstances to those who have offered their services in an outstanding way and have made significant contributions to the region.
In 2017, Cuba and the Caribbean Community (CARICOM) bloc signed the "CARICOM-Cuba Trade and Economic Cooperation Agreement".
Organization of American States
Cuba was formerly excluded from participation in the Organization of American States under a decision adopted by the Eighth Meeting of Consultation in Punta del Este, Uruguay, on 21 January 1962. The resolution stated that as Cuba had officially identified itself as a Marxist–Leninist government, it was incompatible with "the principles and objectives of the inter-American system." This stance was frequently questioned by some member states. This situation came to an end on 3 June 2009, when foreign ministers assembled in San Pedro Sula, Honduras, for the OAS's 39th General Assembly, passed a vote to lift Cuba's suspension from the OAS. In its resolution (AG/RES 2438), the General Assembly decided that:
Resolution VI, [...] which excluded the Government of Cuba from its participation in the Inter-American system, hereby ceases to have effect
The participation of the Republic of Cuba in the OAS will be the result of a process of dialogue initiated at the request of the Government of Cuba, and in accordance with the practices, purposes, and principles of the OAS.
The reincorporation of Cuba as an active member had arisen regularly as a topic within the inter-American system (e.g., it was intimated by the outgoing ambassador of Mexico in 1998) but most observers did not see it as a serious possibility while the Socialist government remained in power. On 6 May 2005, President Fidel Castro reiterated that the island nation would not "be part of a disgraceful institution that has only humiliated the honor of Latin American nations".
In an editorial published by Granma, Fidel Castro applauded the Assembly's "rebellious" move and said that the date would "be recalled by future generations." However, a Declaration of the Revolutionary Government dated 8 June 2009 stated that while Cuba welcomed the Assembly's gesture, in light of the Organization's historical record "Cuba will not return to the OAS".
Cuba joined the Latin American Integration Association on 26 August 1999, becoming its tenth member (out of 12). The organization was set up in 1980 to encourage trade integration. Its main objective is the establishment of a common market, in pursuit of the economic and social development of the region.
On September 15, 2006, Cuba officially took over leadership of the Non-Aligned Movement during the 14th summit of the organization in Havana.
Cuban intervention abroad: 1959 – Early 1990s
Cuba became a staunch ally of the USSR during the Cold War, modeling its political structure after that of the CPSU. Owing to the fundamental role Internationalism plays in Cuban socialist ideology, Cuba became a major supporter of liberation movements not only in Latin America, but across the globe.
Black Panthers
In the 1960s and 1970s, Cuba openly supported the black nationalist and Marxist-oriented Black Panther Party of the U.S. Many members made their way to Cuba seeking political asylum, and Cuba welcomed them as refugees after they had been convicted in the U.S.
Palestine
Cuba also lent support to Palestinian nationalist groups against Israel, namely the Palestine Liberation Organization (PLO) and the lesser-known Marxist–Leninist Popular Front for the Liberation of Palestine (PFLP). Fidel Castro called Israeli practices "Zionist Fascism". The Palestinians received training from Cuba's General Intelligence Directorate, as well as financial and diplomatic support from the Cuban government. However, in 2010, Castro indicated that he also strongly supported Israel's right to exist.
Irish Republicans
The Irish Republican political party, Sinn Féin has political links to the Cuban government. Fidel Castro expressed support for the Irish Republican cause of a United Ireland.
Humanitarian aid
Since the establishment of the Revolutionary Government of Cuba in 1959, the country has sent more than 52,000 medical workers abroad to work in needy countries, including countries affected by the 2004 Indian Ocean earthquake and the 2005 Kashmir earthquake. There are currently about 20,000 Cuban doctors working in 68 countries across three continents, including a 135-strong medical team in Java, Indonesia.
List of Foreign Ministers of Cuba
See also
Censorship in Cuba
Cocktail Wars
Human rights in Cuba
Intelligence Directorate
List of diplomatic missions in Cuba
List of diplomatic missions of Cuba
Organization of Solidarity with the People of Asia, Africa and Latin America
References
Further reading
Adams, Gordon. "Cuba and Africa: The International Politics of the Liberation Struggle: A Documentary Essay" Latin American Perspectives (1981) 8#1 pp:108-125.
Bain, Mervyn J. "Russia and Cuba: 'doomed' comrades?." Communist and Post-Communist Studies 44.2 (2011): 111–118.
Bain, Mervyn J. Soviet-Cuban Relations, 1985 to 1991: Changing Perceptions in Moscow and Havana (2007)
Bernell, David. "The curious case of Cuba in American foreign policy." Journal of Interamerican Studies and World Affairs 36.2 (1994): 65–104. online
Blue, Sarah. "Cuban Medical Internationalism: Domestic and International Impacts." Journal of Latin American Geography (2010) 9#1.
Domínguez, Jorge I. To Make a World Safe for Revolution: Cuba's Foreign Policy (Harvard UP, 1989) excerpt
Erisman, H. Michael, and John M. Kirk, eds. Redefining Cuban Foreign Policy: The Impact of the "Special Period" (2006)
Falk, Pamela S. "Cuba in Africa." Foreign Affairs 65.5 (1987): 1077–1096. online
Falk, Pamela S. Cuban Foreign Policy: Caribbean Tempest (1986).
Fauriol, Georges, and Eva Loser, eds. Cuba: The International Dimension (1990)
Feinsilver, Julie M. "Fifty Years of Cuba’s Medical Diplomacy: From Idealism to Pragmatism," Cuban Studies 41 (2010), 85–104;
Gleijeses, Piero. "Moscow's Proxy? Cuba and Africa 1975–1988." Journal of Cold War Studies 8.4 (2006): 98–146. online
Gleijeses, Piero. Conflicting Missions: Havana, Washington, and Africa, 1959-1976 (2002) online
Gleijeses, Piero. The Cuban Drumbeat. Castro’s Worldview: Cuban Foreign Policy in a Hostile World (2009)
Harmer, Tanya. "Two, Three, Many Revolutions? Cuba and the Prospects for Revolutionary Change in Latin America, 1967–1975." Journal of Latin American Studies 45.1 (2013): 61–89.
Hatzky, Christine. Cubans in Angola: South-South Cooperation and Transfer of Knowledge, 1976–1991. (U of Wisconsin Press, 2015).
Krull, Catherine. ed. Cuba in a Global Context: International Relations, Internationalism, and Transnationalism (2014) online
Pérez-Stable, Marifeli. "The United States and Cuba since 2000." in Contemporary US-Latin American Relations (Routledge, 2010) pp. 64–83.
Pérez-Stable, Marifeli. The United States and Cuba: Intimate Enemies (2011) recent history online
Smith, Robert F. The United States and Cuba: Business and Diplomacy, 1917-1960 (1960) online
Taylor, Frank F. "Revolution, race, and some aspects of foreign relations in Cuba since 1959." Cuban Studies (1988): 19–41.
External links
Cuban Ministry of Foreign Affairs
Cuban Mission to the United Nations
Text of U.S.–Cuban agreement on military bases
Fidel Castro's 'Reflection' on U.S. Travel Restrictions Miami Herald, April 14, 2009
CWIHP e-Dossier No. 44, with an introduction by Piero Gleijeses (October 2013). The dossier features over 160 Cuban documents pertaining to Havana's policy toward Southern Africa in the final fifteen years of the Cold War.
Representations of other countries in Cuba
Chinese Embassy in Havana
Embassy of India in Havana
The Canadian Embassy in Cuba
Cuban representations to other countries
Cuban embassies around the world
Aspects of Cuba's foreign policy
"Cuba's health diplomacy", British Broadcasting Corporation, February 25, 2010. |
Cyprus
Cyprus, officially the Republic of Cyprus, is an island country located in the eastern Mediterranean Sea, south of the Anatolian Peninsula and east of the Levant. It is geographically in Western Asia, but its cultural ties and geopolitics are overwhelmingly Southeastern European. Cyprus is the third-largest and third-most populous island in the Mediterranean. It is located north of Egypt, east of Greece, south of Turkey, and west of Lebanon and Syria. Its capital and largest city is Nicosia. The northeast portion of the island is de facto governed by the self-declared Turkish Republic of Northern Cyprus.
The earliest known human activity on the island dates to around the 10th millennium BC. Archaeological remains include the well-preserved ruins from the Hellenistic period such as Salamis and Kourion, and Cyprus is home to some of the oldest water wells in the world. Cyprus was settled by Mycenaean Greeks in two waves in the 2nd millennium BC. As a strategic location in the Eastern Mediterranean, it was subsequently occupied by several major powers, including the empires of the Assyrians, Egyptians and Persians, from whom the island was seized in 333 BC by Alexander the Great. Subsequent rule by Ptolemaic Egypt, the Classical and Eastern Roman Empire, Arab caliphates for a short period, the French Lusignan dynasty and the Venetians was followed by over three centuries of Ottoman rule between 1571 and 1878 (de jure until 1914).
Cyprus was placed under the United Kingdom's administration based on the Cyprus Convention in 1878 and was formally annexed by the UK in 1914. The future of the island became a matter of disagreement between the two prominent ethnic communities, Greek Cypriots, who made up 77% of the population in 1960, and Turkish Cypriots, who made up 18% of the population. From the 19th century onwards, the Greek Cypriot population pursued enosis, union with Greece, which became a Greek national policy in the 1950s. The Turkish Cypriot population initially advocated the continuation of the British rule, then demanded the annexation of the island to Turkey, and in the 1950s, together with Turkey, established a policy of taksim, the partition of Cyprus and the creation of a Turkish polity in the north.
Following nationalist violence in the 1950s, Cyprus was granted independence in 1960. The crisis of 1963–64 brought further intercommunal violence between the two communities, displaced more than 25,000 Turkish Cypriots into enclaves and brought the end of Turkish Cypriot representation in the republic. On 15 July 1974, a coup d'état was staged by Greek Cypriot nationalists and elements of the Greek military junta in an attempt at enosis. This action precipitated the Turkish invasion of Cyprus on 20 July, which led to the capture of the present-day territory of Northern Cyprus and the displacement of over 150,000 Greek Cypriots and 50,000 Turkish Cypriots. A separate Turkish Cypriot state in the north was established by unilateral declaration in 1983; the move was widely condemned by the international community, with Turkey alone recognising the new state. These events and the resulting political situation are matters of a continuing dispute.
Cyprus is a major tourist destination in the Mediterranean. With an advanced, high-income economy and a very high Human Development Index, the Republic of Cyprus has been a member of the Commonwealth since 1961 and was a founding member of the Non-Aligned Movement until it joined the European Union on 1 May 2004. On 1 January 2008, the Republic of Cyprus joined the eurozone.
Etymology
The earliest attested reference to Cyprus is the 15th century BC Mycenaean Greek ku-pi-ri-jo, meaning "Cypriot", written in the Linear B syllabic script.
The classical Greek form of the name is Κύπρος (Kýpros).
The etymology of the name is unknown.
Suggestions include:
the Greek word for the Mediterranean cypress tree (Cupressus sempervirens), κυπάρισσος (kypárissos)
the Greek name of the henna tree (Lawsonia alba), κύπρος (kýpros)
an Eteocypriot word for copper. It has been suggested, for example, that it has roots in the Sumerian word for copper (zubar) or for bronze (kubar), from the large deposits of copper ore found on the island.
Through overseas trade, the island has given its name to the Classical Latin word for copper through the phrase aes Cyprium, "metal of Cyprus", later shortened to Cuprum.
The standard demonym relating to Cyprus or its people or culture is Cypriot. The terms Cypriote and Cyprian (later a personal name) are also used, though less frequently.
The state's official name in Greek literally translates to "Cypriot Republic" in English, but this translation is not used officially; "Republic of Cyprus" is used instead.
History
Prehistoric and Ancient Cyprus
The earliest confirmed site of human activity on Cyprus is Aetokremnos, situated on the south coast, indicating that hunter-gatherers were active on the island from around 10,000 BC, with settled village communities dating from 8200 BC. The arrival of the first humans correlates with the extinction of the 75 cm high Cyprus dwarf hippopotamus and 1 metre tall Cyprus dwarf elephant, the only large mammals native to the island. Water wells discovered by archaeologists in western Cyprus are believed to be among the oldest in the world, dated at 9,000 to 10,500 years old.
Remains of an 8-month-old cat were discovered buried with a human body at a separate Neolithic site in Cyprus. The grave is estimated to be 9,500 years old (7500 BC), predating ancient Egyptian civilisation and pushing back the earliest known feline-human association significantly. The remarkably well-preserved Neolithic village of Khirokitia is a UNESCO World Heritage Site, dating to approximately 6800 BC.
During the Late Bronze Age, the island experienced two waves of Greek settlement. The first wave consisted of Mycenaean Greek traders, who started visiting Cyprus around 1400 BC. A major wave of Greek settlement is believed to have taken place following the Late Bronze Age collapse of Mycenaean Greece from 1100 to 1050 BC, with the island's predominantly Greek character dating from this period. The first recorded name of a Cypriot king is Kushmeshusha, as it appears in letters sent to Ugarit in the 13th century BC. Cyprus occupies an important role in Greek mythology, being the birthplace of Aphrodite and Adonis, and home to King Cinyras, Teucer and Pygmalion. Literary evidence suggests an early Phoenician presence at Kition, which was under Tyrian rule at the beginning of the 10th century BC. Phoenician merchants, believed to have come from Tyre, colonised the area and expanded the political influence of Kition. After c. 850 BC, the sanctuaries at the Kathari site were rebuilt and reused by the Phoenicians.
Cyprus is at a strategic location in the Eastern Mediterranean. It was ruled by the Neo-Assyrian Empire for a century starting in 708 BC, before a brief spell under Egyptian rule and eventually Achaemenid rule in 545 BC. The Cypriots, led by Onesilus, king of Salamis, joined their fellow Greeks in the Ionian cities during the unsuccessful Ionian Revolt in 499 BC against the Achaemenids. The revolt was suppressed, but Cyprus managed to maintain a high degree of autonomy and remained inclined towards the Greek world.
Throughout the period of Persian rule there was continuity in the reigns of the Cypriot kings, and their rebellions were crushed by Persian rulers from Asia Minor, an indication that the Cypriots ruled the island through directly regulated relations with the Great King and that no Persian satrap was installed on Cyprus. The Kingdoms of Cyprus enjoyed special privileges and a semi-autonomous status, but they were still considered vassal subjects of the Great King.
The island was conquered by Alexander the Great in 333 BC, and the Cypriot navy helped Alexander during the siege of Tyre (332 BC). A Cypriot fleet was also sent to help Amphoterus. In addition, Alexander had two Cypriot generals, Stasander and Stasanor, both from Soli, who later became satraps in his empire.
Following Alexander's death, the division of his empire, and the subsequent Wars of the Diadochi, Cyprus became part of the Hellenistic empire of Ptolemaic Egypt. It was during this period that the island was fully Hellenized. In 58 BC Cyprus was acquired by the Roman Republic.
Roman Cyprus
Middle Ages
When the Roman Empire was divided into Eastern and Western parts in 286, Cyprus became part of the East Roman Empire (also called the Byzantine Empire), and would remain so for some 900 years. Under Byzantine rule, the Greek orientation that had been prominent since antiquity developed the strong Hellenistic-Christian character that continues to be a hallmark of the Greek Cypriot community.
Beginning in 649, Cyprus endured several attacks and raids launched by the Umayyad Caliphate. Many were quick piratical raids, but others were large-scale attacks in which many Cypriots were slaughtered and great wealth carried off or destroyed. In 688, Emperor Justinian II and Caliph Abd al-Malik agreed to jointly rule Cyprus as a condominium, which would continue for the next 300 years. No Byzantine churches survive from this period; thousands of people were killed, and many cities – such as Salamis – were destroyed and never rebuilt. Byzantine rule was restored in 965, when Emperor Nikephoros II Phokas scored decisive victories on land and sea.
In 1156 Raynald of Châtillon and Thoros II of Armenia brutally sacked Cyprus over a period of three weeks, stealing so much plunder and capturing so many of the leading citizens and their families for ransom, that the island took generations to recover. Several Greek priests were mutilated and sent away to Constantinople.
In 1185 Isaac Komnenos, a member of the Byzantine imperial family, took over Cyprus and declared it independent of the Empire. In 1191, during the Third Crusade, Richard I of England captured the island from Isaac. He used it as a major supply base that was relatively safe from the Saracens. A year later Richard sold the island to the Knights Templar, who, following a bloody revolt, in turn sold it to Guy of Lusignan. His brother and successor Aimery was recognised as King of Cyprus by Henry VI, Holy Roman Emperor.
Following the death in 1473 of James II, the last Lusignan king, the Republic of Venice assumed control of the island, while the late king's Venetian widow, Queen Catherine Cornaro, reigned as a figurehead. Venice formally annexed the Kingdom of Cyprus in 1489, following the abdication of Catherine. The Venetians fortified Nicosia by building the Walls of Nicosia, and used it as an important commercial hub. Throughout Venetian rule, the Ottoman Empire frequently raided Cyprus. In 1539 the Ottomans destroyed Limassol; fearing the worst, the Venetians also fortified Famagusta and Kyrenia.
Although the Lusignan French aristocracy remained the dominant social class in Cyprus throughout the medieval period, the former assumption that Greeks were treated only as serfs on the island is no longer considered by academics to be accurate. It is now accepted that the medieval period saw increasing numbers of Greek Cypriots elevated to the upper classes, a growing Greek middle class, and the Lusignan royal household even marrying Greeks. This included King John II of Cyprus, who married Helena Palaiologina.
Cyprus under the Ottoman Empire
In 1570, a full-scale Ottoman assault with 60,000 troops brought the island under Ottoman control, despite stiff resistance by the inhabitants of Nicosia and Famagusta. The Ottoman forces that captured Cyprus massacred many Greek and Armenian Christian inhabitants. The previous Latin elite were destroyed and the first significant demographic change since antiquity took place with the formation of a Muslim community. Soldiers who fought in the conquest settled on the island, and Turkish peasants and craftsmen were brought to the island from Anatolia. This new community also included banished Anatolian tribes, "undesirable" persons and members of various "troublesome" Muslim sects, as well as a number of new converts on the island.
The Ottomans abolished the feudal system previously in place and applied the millet system to Cyprus, under which non-Muslim peoples were governed by their own religious authorities. In a reversal from the days of Latin rule, the head of the Church of Cyprus was invested as leader of the Greek Cypriot population and acted as mediator between Christian Greek Cypriots and the Ottoman authorities. This status ensured that the Church of Cyprus was in a position to end the constant encroachments of the Roman Catholic Church. Ottoman rule of Cyprus was at times indifferent, at times oppressive, depending on the temperaments of the sultans and local officials, and the island began over 250 years of economic decline.
The ratio of Muslims to Christians fluctuated throughout the period of Ottoman domination. In 1777–78, 47,000 Muslims constituted a majority over the island's 37,000 Christians. By 1872, the population of the island had risen to 144,000, comprising 44,000 Muslims and 100,000 Christians. The Muslim population included numerous crypto-Christians, including the Linobambaki, a crypto-Catholic community that arose due to religious persecution of the Catholic community by the Ottoman authorities; this community would assimilate into the Turkish Cypriot community during British rule.
As soon as the Greek War of Independence broke out in 1821, several Greek Cypriots left for Greece to join the Greek forces. In response, the Ottoman governor of Cyprus arrested and executed 486 prominent Greek Cypriots, including the Archbishop of Cyprus, Kyprianos, and four other bishops. In 1828, modern Greece's first president Ioannis Kapodistrias called for union of Cyprus with Greece, and numerous minor uprisings took place. Reaction to Ottoman misrule led to uprisings by both Greek and Turkish Cypriots, although none were successful. After centuries of neglect by the Ottoman Empire, the poverty of most of the people and the ever-present tax collectors fuelled Greek nationalism, and by the 20th century the idea of union with newly independent Greece was firmly rooted among Greek Cypriots.
Under Ottoman rule, numeracy, school enrolment and literacy rates were all low. They persisted some time after Ottoman rule ended, and then increased rapidly during the twentieth century.
Cyprus under the British Empire
In the aftermath of the Russo-Turkish War (1877–1878) and the Congress of Berlin, Cyprus was leased to the British Empire which de facto took over its administration in 1878 (though, in terms of sovereignty, Cyprus remained a de jure Ottoman territory until 5 November 1914, together with Egypt and Sudan) in exchange for guarantees that Britain would use the island as a base to protect the Ottoman Empire against possible Russian aggression.
The island would serve Britain as a key military base for its colonial routes. By 1906, when the Famagusta harbour was completed, Cyprus was a strategic naval outpost overlooking the Suez Canal, the crucial main route to India which was then Britain's most important overseas possession. Following the outbreak of the First World War and the decision of the Ottoman Empire to join the war on the side of the Central Powers, on 5 November 1914 the British Empire formally annexed Cyprus and declared the Ottoman Khedivate of Egypt and Sudan a Sultanate and British protectorate.
In 1915, Britain offered Cyprus to Greece, ruled by King Constantine I of Greece, on condition that Greece join the war on the side of the British. The offer was declined. In 1923, under the Treaty of Lausanne, the nascent Turkish republic relinquished any claim to Cyprus, and in 1925 it was declared a British crown colony. During the Second World War, many Greek and Turkish Cypriots enlisted in the Cyprus Regiment.
The Greek Cypriot population, meanwhile, had become hopeful that the British administration would lead to enosis. The idea of enosis was historically part of the Megali Idea, a greater political ambition of a Greek state encompassing the territories with large Greek populations in the former Ottoman Empire, including Cyprus and Asia Minor with a capital in Constantinople, and was actively pursued by the Cypriot Orthodox Church, which had its members educated in Greece. These religious officials, together with Greek military officers and professionals, some of whom still pursued the Megali Idea, would later found the guerrilla organisation EOKA (Ethniki Organosis Kyprion Agoniston or National Organisation of Cypriot Fighters). The Greek Cypriots viewed the island as historically Greek and believed that union with Greece was a natural right. In the 1950s, the pursuit of enosis became a part of the Greek national policy.
Initially, the Turkish Cypriots favoured the continuation of British rule. However, they were alarmed by the Greek Cypriot calls for enosis, as they saw the union of Crete with Greece, which led to the exodus of Cretan Turks, as a precedent to be avoided, and they took a pro-partition stance in response to the militant activity of EOKA. The Turkish Cypriots also viewed themselves as a distinct ethnic group of the island and believed that they had a right to self-determination separate from that of the Greek Cypriots. Meanwhile, in the 1950s, Turkish leader Menderes considered Cyprus an "extension of Anatolia", rejected the partition of Cyprus along ethnic lines and favoured the annexation of the whole island to Turkey. Nationalistic slogans centred on the idea that "Cyprus is Turkish" and the ruling party declared Cyprus to be a part of the Turkish homeland that was vital to its security. Once it was realised that annexation was unfeasible because Turkish Cypriots made up only 20% of the islanders, the national policy was changed to favour partition. The slogan "Partition or Death" was frequently used in Turkish Cypriot and Turkish protests starting in the late 1950s and continuing throughout the 1960s. Although after the Zürich and London conferences Turkey seemed to accept the existence of the Cypriot state and to distance itself from its policy of favouring the partition of the island, the goal of the Turkish and Turkish Cypriot leaders remained that of creating an independent Turkish state in the northern part of the island.
In January 1950, the Church of Cyprus organised a referendum, supervised by clerics and without Turkish Cypriot participation, in which 96% of the participating Greek Cypriots voted in favour of enosis. Greeks made up 80.2% of the island's total population at the time (1946 census). Restricted autonomy under a constitution was proposed by the British administration but eventually rejected. In 1955 the EOKA organisation was founded, seeking union with Greece through armed struggle. At the same time the Turkish Resistance Organisation (TMT), calling for taksim, or partition, was established by the Turkish Cypriots as a counterweight. British officials tolerated the creation of TMT: in a letter dated 15 July 1958, the Secretary of State for the Colonies advised the Governor of Cyprus not to act against TMT despite its illegal actions, so as not to harm British relations with the Turkish government.
Independence and inter-communal violence
On 16 August 1960, Cyprus attained independence after the Zürich and London Agreement between the United Kingdom, Greece and Turkey. Cyprus had a total population of 573,566, of whom 442,138 (77.1%) were Greeks, 104,320 (18.2%) Turks, and 27,108 (4.7%) others. The UK retained the two Sovereign Base Areas of Akrotiri and Dhekelia, while government posts and public offices were allocated by ethnic quotas, giving the minority Turkish Cypriots a permanent veto, 30% in parliament and administration, and granting the three mother-states guarantor rights.
However, the division of power as foreseen by the constitution soon resulted in legal impasses and discontent on both sides, and nationalist militants started training again, with the military support of Greece and Turkey respectively. The Greek Cypriot leadership believed that the rights given to Turkish Cypriots under the 1960 constitution were too extensive and designed the Akritas plan, which was aimed at reforming the constitution in favour of Greek Cypriots, persuading the international community about the correctness of the changes and violently subjugating Turkish Cypriots in a few days should they not accept the plan. Tensions were heightened when Cypriot President Archbishop Makarios III called for constitutional changes, which were rejected by Turkey and opposed by Turkish Cypriots.
Intercommunal violence erupted on 21 December 1963, when two Turkish Cypriots were killed at an incident involving the Greek Cypriot police. The violence resulted in the death of 364 Turkish and 174 Greek Cypriots, destruction of 109 Turkish Cypriot or mixed villages and displacement of 25,000–30,000 Turkish Cypriots. The crisis resulted in the end of the Turkish Cypriot involvement in the administration and their claiming that it had lost its legitimacy; the nature of this event is still controversial. In some areas, Greek Cypriots prevented Turkish Cypriots from travelling and entering government buildings, while some Turkish Cypriots willingly withdrew due to the calls of the Turkish Cypriot administration. Turkish Cypriots started living in enclaves. The republic's structure was changed, unilaterally, by Makarios, and Nicosia was divided by the Green Line, with the deployment of UNFICYP troops.
In 1964, Turkey threatened to invade Cyprus in response to the continuing Cypriot intercommunal violence, but this was stopped by a strongly worded telegram from the US President Lyndon B. Johnson on 5 June, warning that the US would not stand beside Turkey in case of a consequential Soviet invasion of Turkish territory. Meanwhile, by 1964, enosis was a Greek policy and would not be abandoned; Makarios and the Greek prime minister Georgios Papandreou agreed that enosis should be the ultimate aim and King Constantine wished Cyprus "a speedy union with the mother country". Greece dispatched 10,000 troops to Cyprus to counter a possible Turkish invasion.
1974 coup d'état, invasion, and division
On 15 July 1974, the Greek military junta under Dimitrios Ioannides carried out a coup d'état in Cyprus, to unite the island with Greece. The coup ousted president Makarios III and replaced him with pro-enosis nationalist Nikos Sampson. In response to the coup, five days later, on 20 July 1974, the Turkish army invaded the island, citing a right to intervene to restore the constitutional order from the 1960 Treaty of Guarantee. This justification has been rejected by the United Nations and the international community.
The Turkish air force began bombing Greek positions in Cyprus, and hundreds of paratroopers were dropped in the area between Nicosia and Kyrenia, where well-armed Turkish Cypriot enclaves had been long-established; while off the Kyrenia coast, Turkish troop ships landed 6,000 men as well as tanks, trucks and armoured vehicles.
Three days later, when a ceasefire had been agreed, Turkey had landed 30,000 troops on the island and captured Kyrenia, the corridor linking Kyrenia to Nicosia, and the Turkish Cypriot quarter of Nicosia itself. The junta in Athens, and then the Sampson regime in Cyprus fell from power. In Nicosia, Glafkos Clerides temporarily assumed the presidency. But after the peace negotiations in Geneva, the Turkish government reinforced their Kyrenia bridgehead and started a second invasion on 14 August. The invasion resulted in Morphou, Karpass, Famagusta and the Mesaoria coming under Turkish control.
International pressure led to a ceasefire, and by then 36% of the island had been taken over by the Turks and 180,000 Greek Cypriots had been evicted from their homes in the north. At the same time, around 50,000 Turkish Cypriots were displaced to the north and settled in the properties of the displaced Greek Cypriots. Among a variety of sanctions against Turkey, in mid-1975 the US Congress imposed an arms embargo on Turkey for using US-supplied equipment during the Turkish invasion of Cyprus in 1974. There were 1,534 Greek Cypriots and 502 Turkish Cypriots missing as a result of the fighting from 1963 to 1974.
The Republic of Cyprus has de jure sovereignty over the entire island, including its territorial waters and exclusive economic zone, with the exception of the Sovereign Base Areas of Akrotiri and Dhekelia, which remain under the UK's control according to the London and Zürich Agreements. However, the Republic of Cyprus is de facto partitioned into two main parts: the area under the effective control of the Republic, located in the south and west and comprising about 59% of the island's area, and the north, administered by the self-declared Turkish Republic of Northern Cyprus, covering about 36% of the island's area. Another nearly 4% of the island's area is covered by the UN buffer zone. The international community considers the northern part of the island to be territory of the Republic of Cyprus occupied by Turkish forces. The occupation is viewed as illegal under international law and amounting to illegal occupation of EU territory since Cyprus became a member of the European Union.
Post-division
After the restoration of constitutional order and the return of Archbishop Makarios III to Cyprus in December 1974, Turkish troops remained, occupying the northeastern portion of the island. In 1983, the Turkish Cypriot parliament, led by the Turkish Cypriot leader Rauf Denktaş, proclaimed the Turkish Republic of Northern Cyprus (TRNC), which is recognised only by Turkey.
The events of the summer of 1974 dominate the politics on the island, as well as Greco-Turkish relations. Turkish settlers have been settled in the north with the encouragement of the Turkish and Turkish Cypriot states. The Republic of Cyprus considers their presence a violation of the Geneva Convention, whilst many Turkish settlers have since severed their ties to Turkey and their second generation considers Cyprus to be their homeland.
The Turkish invasion, the ensuing occupation and the declaration of independence by the TRNC have been condemned by United Nations resolutions, which are reaffirmed by the Security Council every year. Attempts to resolve the Cyprus dispute have continued. In 2004, the Annan Plan, drafted by UN Secretary-General Kofi Annan, was put to a referendum in both Cypriot administrations. 65% of Turkish Cypriots voted in support of the plan, while 74% of Greek Cypriots voted against it, claiming that it disproportionately favoured Turkish Cypriots and gave Turkey unreasonable influence over the nation. In total, 66.7% of the voters rejected the Annan Plan.
On 1 May 2004 Cyprus joined the European Union, together with nine other countries. Cyprus was accepted into the EU as a whole, although the EU legislation is suspended in Northern Cyprus until a final settlement of the Cyprus problem.
Efforts have been made to enhance freedom of movement between the two sides. In April 2003, Northern Cyprus unilaterally eased checkpoint restrictions, permitting Cypriots to cross between the two sides for the first time in 30 years. In March 2008, a wall that had stood for decades at the boundary between the Republic of Cyprus and the UN buffer zone was demolished. The wall had cut across Ledra Street in the heart of Nicosia and was seen as a strong symbol of the island's 32-year division. On 3 April 2008, Ledra Street was reopened in the presence of Greek and Turkish Cypriot officials. The two sides relaunched reunification talks in 2015, but these collapsed in 2017.
The European Union issued a warning in February 2019 that Cyprus, an EU member, was selling EU passports to Russian oligarchs, saying it would allow organised crime syndicates to infiltrate the EU. In 2020, leaked documents revealed a wider range of former and current officials from Afghanistan, China, Dubai, Lebanon, the Russian Federation, Saudi Arabia, Ukraine and Vietnam who bought a Cypriot citizenship prior to a change of the law in July 2019. Cyprus and Turkey have been engaged in a dispute over the extent of their exclusive economic zones, ostensibly sparked by oil and gas exploration in the area.
Geography
Cyprus is the third largest island in the Mediterranean Sea, after the Italian islands of Sicily and Sardinia, in terms of both area and population. It is also the world's 80th largest island by area and the world's 51st largest by population. It measures long from end to end and wide at its widest point, with Turkey to the north. It lies between latitudes 34° and 36° N, and longitudes 32° and 35° E.
Other neighbouring territories include Syria and Lebanon to the east and southeast, Israel to the southeast, the Gaza Strip 427 kilometres (265 mi) to the southeast, Egypt to the south, and Greece to the northwest: the small Dodecanesian island of Kastellorizo (Megisti), Rhodes and the Greek mainland. Cyprus is located at the crossroads of three continents, with sources placing it variously in Europe, Western Asia or the Middle East.
The physical relief of the island is dominated by two mountain ranges, the Troodos Mountains and the smaller Kyrenia Range, and the central plain they encompass, the Mesaoria. The Mesaoria plain is drained by the Pedieos River, the longest on the island. The Troodos Mountains cover most of the southern and western portions of the island and account for roughly half its area. The highest point on Cyprus is Mount Olympus at , located in the centre of the Troodos range. The narrow Kyrenia Range, extending along the northern coastline, occupies substantially less area, and elevations are lower, reaching a maximum of . The island lies within the Anatolian Plate.
Cyprus contains the Cyprus Mediterranean forests ecoregion. It had a 2018 Forest Landscape Integrity Index mean score of 7.06/10, ranking it 59th globally out of 172 countries.
Geopolitically, the island is subdivided into four main segments. The Republic of Cyprus occupies the southern two-thirds of the island (59.74%). The Turkish Republic of Northern Cyprus occupies the northern third (34.85%), and the United Nations-controlled Green Line provides a buffer zone that separates the two and covers 2.67% of the island. Lastly, two bases under British sovereignty are located on the island: Akrotiri and Dhekelia, covering the remaining 2.74%.
Climate
Cyprus has a subtropical climate – Mediterranean and semi-arid type (in the north-eastern part of the island) – Köppen climate classifications Csa and BSh, with very mild winters (on the coast) and warm to hot summers. Snow is possible only in the Troodos Mountains in the central part of the island. Rain occurs mainly in winter, with summer being generally dry.
Cyprus has one of the warmest climates in the Mediterranean part of the European Union. The average annual temperature on the coast is around during the day and at night. Generally, summers last about eight months, beginning in April with average temperatures of during the day and at night, and ending in November with average temperatures of during the day and at night, although in the remaining four months temperatures sometimes exceed .
Sunshine hours on the coast are around 3,200 per year, from an average of 5–6 hours of sunshine per day in December to an average of 12–13 hours in July. This is about double that of cities in the northern half of Europe; for comparison, London receives about 1,540 hours per year. In December, London receives about 50 hours of sunshine, while coastal locations in Cyprus receive about 180 hours (almost as much as in May in London).
Water supply
Cyprus suffers from a chronic shortage of water. The country relies heavily on rain to provide household water, but in the past 30 years average yearly precipitation has decreased. Between 2001 and 2004, exceptionally heavy annual rainfall pushed water reserves up, with supply exceeding demand, allowing total storage in the island's reservoirs to rise to an all-time high by the start of 2005.
However, since then demand has increased annually – a result of local population growth, foreigners moving to Cyprus and the number of visiting tourists – while supply has fallen as a result of more frequent droughts.
Dams remain the principal source of water both for domestic and agricultural use; Cyprus has a total of 107 dams (plus one currently under construction) and reservoirs, with a total water storage capacity of about . Water desalination plants are gradually being constructed to deal with recent years of prolonged drought.
The Government has invested heavily in the creation of water desalination plants which have supplied almost 50 per cent of domestic water since 2001. Efforts have also been made to raise public awareness of the situation and to encourage domestic water users to take more responsibility for the conservation of this increasingly scarce commodity.
Turkey has built a water pipeline under the Mediterranean Sea from Anamur on its southern coast to the northern coast of Cyprus, to supply Northern Cyprus with potable and irrigation water (see Northern Cyprus Water Supply Project).
Flora and fauna
Cyprus is home to a number of endemic species, including the Cypriot mouse, the golden oak and the Cyprus cedar.
Politics
Cyprus is a presidential republic. The head of state and of the government is elected by a process of universal suffrage for a five-year term. Executive power is exercised by the government with legislative power vested in the House of Representatives whilst the Judiciary is independent of both the executive and the legislature.
The 1960 Constitution provided for a presidential system of government with independent executive, legislative and judicial branches as well as a complex system of checks and balances including a weighted power-sharing ratio designed to protect the interests of the Turkish Cypriots. The executive was led by a Greek Cypriot president and a Turkish Cypriot vice-president elected by their respective communities for five-year terms and each possessing a right of veto over certain types of legislation and executive decisions. Legislative power rested on the House of Representatives who were also elected on the basis of separate voters' rolls.
Since 1965, following clashes between the two communities, the Turkish Cypriot seats in the House have remained vacant. In 1974 Cyprus was divided de facto when the Turkish army occupied the northern third of the island. The Turkish Cypriots subsequently declared independence in 1983 as the Turkish Republic of Northern Cyprus, which has been recognised only by Turkey. In 1985 the TRNC adopted a constitution and held its first elections. The United Nations recognises the sovereignty of the Republic of Cyprus over the entire island of Cyprus.
The House of Representatives currently has 56 members elected for a five-year term by proportional representation, and three observer members representing the Armenian, Latin and Maronite minorities. Twenty-four seats are allocated to the Turkish community but have remained vacant since 1964. The political environment is dominated by the communist AKEL, the liberal conservative Democratic Rally, the centrist Democratic Party and the social-democratic EDEK.
In 2008, Dimitris Christofias became the country's first Communist head of state. Due to his involvement in the 2012–13 Cypriot financial crisis, Christofias did not run for re-election in 2013. The Presidential election in 2013 resulted in Democratic Rally candidate Nicos Anastasiades winning 57.48% of the vote. As a result, Anastasiades was sworn in on 28 February 2013. Anastasiades was re-elected with 56% of the vote in the 2018 presidential election. On 28 February 2023, Nikos Christodoulides, the winner of the 2023 presidential election run-off, was sworn in as the eighth president of the Republic of Cyprus.
Administrative divisions
The Republic of Cyprus is divided into six districts: Nicosia, Famagusta, Kyrenia, Larnaca, Limassol and Paphos.
Exclaves and enclaves
Cyprus has four exclaves, all in territory that belongs to the British Sovereign Base Area of Dhekelia. The first two are the villages of Ormidhia and Xylotymvou. The third is the Dhekelia Power Station, which is divided by a British road into two parts. The northern part is the EAC refugee settlement. The southern part, even though located by the sea, is also an exclave because it has no territorial waters of its own, those being UK waters.
The UN buffer zone runs up against Dhekelia and picks up again from its east side off Ayios Nikolaos and is connected to the rest of Dhekelia by a thin land corridor. In that sense the buffer zone turns the Paralimni area on the southeast corner of the island into a de facto, though not de jure, exclave.
Foreign relations
The Republic of Cyprus is a member of the following international groups: Australia Group, CN, CE, CFSP, EBRD, EIB, EU, FAO, IAEA, IBRD, ICAO, ICC, ICCt, ITUC, IDA, IFAD, IFC, IHO, ILO, IMF, IMO, Interpol, IOC, IOM, IPU, ITU, MIGA, NAM, NSG, OPCW, OSCE, PCA, UN, UNCTAD, UNESCO, UNHCR, UNIDO, UPU, WCL, WCO, WFTU, WHO, WIPO, WMO, WToO, WTO.
Armed forces
The Cypriot National Guard is the main military institution of the Republic of Cyprus. It is a combined arms force, with land, air and naval elements. Historically all men were required to spend 24 months serving in the National Guard after their 17th birthday, but in 2016 this period of compulsory service was reduced to 14 months.
Annually, approximately 10,000 persons are trained in recruit centres. Depending on their awarded speciality the conscript recruits are then transferred to speciality training camps or to operational units.
While until 2016 the armed forces were mainly conscript-based, a large professional enlisted institution (ΣΥΟΠ) has since been adopted, which, combined with the reduction of conscript service, produces an approximate 3:1 ratio between conscripts and professional enlisted personnel.
Law, justice and human rights
The Cyprus Police (Greek: Αστυνομία Κύπρου) is the only national police service of the Republic of Cyprus and has been under the Ministry of Justice and Public Order since 1993.
In "Freedom in the World 2011", Freedom House rated Cyprus as "free". In January 2011, the Report of the Office of the United Nations High Commissioner for Human Rights on the question of Human Rights in Cyprus noted that the ongoing division of Cyprus continues to affect human rights throughout the island "including freedom of movement, human rights pertaining to the question of missing persons, discrimination, the right to life, freedom of religion, and economic, social and cultural rights". The constant focus on the division of the island can sometimes mask other human rights issues.
In 2014, Turkey was ordered by the European Court of Human Rights to pay well over $100m in compensation to Cyprus for the invasion; Ankara announced that it would ignore the judgment. In 2014, a group of Cypriot refugees and a European parliamentarian, later joined by the Cypriot government, filed a complaint with the International Court of Justice, accusing Turkey of violating the Geneva Conventions by directly or indirectly transferring its civilian population into occupied territory. Other violations of the Geneva and Hague Conventions, both ratified by Turkey, amount to what archaeologist Sophocles Hadjisavvas called "the organized destruction of Greek and Christian heritage in the north". These violations include looting of cultural treasures, deliberate destruction of churches, neglect of works of art, and altering the names of important historical sites, which was condemned by the International Council on Monuments and Sites. Hadjisavvas has asserted that these actions are motivated by a Turkish policy of erasing the Greek presence in Northern Cyprus within a framework of ethnic cleansing, though some perpetrators have been motivated simply by greed and profit. Art law expert Alessandro Chechi has classified the connection of cultural heritage destruction to ethnic cleansing as the "Greek Cypriot viewpoint", which he reports as having been dismissed by two PACE reports. Chechi asserts joint Greek and Turkish Cypriot responsibility for the destruction of cultural heritage in Cyprus, noting the destruction of Turkish Cypriot heritage at the hands of Greek Cypriot extremists.
Economy
In the early 21st century, Cyprus boasted a prosperous economy that made it the wealthiest of the ten countries that joined the European Union in 2004. However, the Cypriot economy was later damaged by the Eurozone financial and banking crisis. In June 2012, the Cypriot government announced it would need € in foreign aid to support the Cyprus Popular Bank, and this was followed by Fitch downgrading Cyprus's credit rating to junk status. Fitch stated Cyprus would need an additional € to support its banks and the downgrade was mainly due to the exposure of Bank of Cyprus, Cyprus Popular Bank and Hellenic Bank, Cyprus's three largest banks, to the Greek financial crisis.
The 2012–2013 Cypriot financial crisis led to an agreement with the Eurogroup in March 2013 to split the country's second largest bank, the Cyprus Popular Bank (also known as Laiki Bank), into a "bad" bank which would be wound down over time and a "good" bank which would be absorbed by the Bank of Cyprus. In return for a €10 billion bailout from the European Commission, the European Central Bank and the International Monetary Fund, often referred to as the "troika", the Cypriot government was required to impose a significant haircut on uninsured deposits, a large proportion of which were held by wealthy Russians who used Cyprus as a tax haven. Insured deposits of €100,000 or less were not affected.
Cyprus made a remarkable economic recovery in the 2010s, and according to 2023 International Monetary Fund estimates, Cyprus's per capita GDP, at $54,611, is the highest in Southern Europe, though slightly below the European Union average. Cyprus has been sought as a base for several offshore businesses because of its low tax rates. Tourism, financial services and shipping are significant parts of the economy. Robust growth was achieved in the 1980s and 1990s, due to the focus placed by Cypriot governments on meeting the criteria for admission to the European Union. The Cypriot government adopted the euro as the national currency on 1 January 2008, replacing the Cypriot pound.
Cyprus is the last EU member fully isolated from energy interconnections; it is expected to be connected to the European network via the EuroAsia Interconnector, a 2,000 MW high-voltage direct current undersea power cable. The EuroAsia Interconnector will connect the Greek, Cypriot and Israeli power grids. It is a leading Project of Common Interest of the European Union and a priority Electricity Highway Interconnector project.
In recent years significant quantities of offshore natural gas have been discovered in the area known as Aphrodite (at the exploratory drilling block 12) in Cyprus's exclusive economic zone (EEZ), about south of Limassol at 33°5'40″N and 32°59'0″E. However, Turkey's offshore drilling companies have accessed both natural gas and oil resources since 2013. Cyprus demarcated its maritime border with Egypt in 2003, with Lebanon in 2007, and with Israel in 2010. In August 2011, the US-based firm Noble Energy entered into a production-sharing agreement with the Cypriot government regarding the block's commercial development.
Turkey, which does not recognise the border agreements of Cyprus with its neighbours, threatened to mobilise its naval forces if Cyprus proceeded with plans to begin drilling at Block 12. Cyprus's drilling efforts have the support of the US, EU, and UN, and on 19 September 2011 drilling in Block 12 began without any incidents being reported.
Because of the heavy influx of tourists and foreign investors, the property rental market in Cyprus has grown in recent years. In late 2013, the Cyprus Town Planning Department announced a series of incentives to stimulate the property market and increase the number of property developments in the country's town centres. This followed earlier measures to quickly give immigration permits to third country nationals investing in Cyprus property.
Infrastructure
Cyprus is one of only three EU nations in which vehicles drive on the left-hand side of the road, a remnant of British colonisation (the others being Ireland and Malta). A series of motorways runs along the coast from Paphos east to Ayia Napa, with two motorways running inland to Nicosia, one from Limassol and one from Larnaca.
Per capita private car ownership is the 29th-highest in the world. There were approximately 344,000 privately owned vehicles, and a total of 517,000 registered motor vehicles in the Republic of Cyprus in 2006. In 2006, plans were announced to improve and expand bus services and other public transport throughout Cyprus, with the financial backing of the European Union Development Bank. In 2010 the new bus network was implemented.
Cyprus has two international airports in the government-controlled areas, the busier one being in Larnaca and the other in Paphos. The Ercan International Airport is the only active one in the non-government-controlled areas, but all international flights there must have a stopover in Turkey.
The main harbours of the island are Limassol and Larnaca, which service cargo, passenger and cruise ships.
Cyta, the state-owned telecommunications company, manages most telecommunications and Internet connections on the island. However, following deregulation of the sector, a few private telecommunications companies emerged, including epic, Cablenet, OTEnet Telecom, Omega Telecom and PrimeTel. In the non-government-controlled areas of Cyprus, two different companies administer the mobile phone network: Turkcell and KKTC Telsim.
Demographics
According to the CIA World Factbook, in 2001 Greek Cypriots comprised 77%, Turkish Cypriots 18%, and others 5% of the Cypriot population. At the time of the 2011 government census, there were 10,520 people of Russian origin living in Cyprus.
According to the first population census after the declaration of independence, carried out in December 1960 and covering the entire island, Cyprus had a total population of 573,566, of whom 442,138 (77.1%) were Greeks, 104,320 (18.2%) Turkish, and 27,108 (4.7%) others.
Due to the inter-communal ethnic tensions between 1963 and 1974, an island-wide census was regarded as impossible. Nevertheless, the Cypriot government conducted one in 1973, without the Turkish Cypriot populace. According to this census, the Greek Cypriot population was 482,000. One year later, in 1974, the Cypriot government's Department of Statistics and Research estimated the total population of Cyprus at 641,000; of whom 506,000 (78.9%) were Greeks, and 118,000 (18.4%) Turkish. After the military occupation of part of the island in 1974, the government of Cyprus conducted six more censuses: in 1976, 1982, 1992, 2001, 2011 and 2021; these excluded the Turkish population which was resident in non-government-controlled areas of the island.
According to an official 2005 estimate, the number of Cypriot citizens then living in government-controlled areas of the Republic of Cyprus was around 871,036. In addition to this, the Republic of Cyprus is home to 110,200 foreign permanent residents and an estimated 10,000–30,000 undocumented immigrants. According to the Republic of Cyprus's website, the population was 918,100 at the 2021 Census.
According to the 2006 census carried out by Northern Cyprus, there were 256,644 (de jure) people living in Northern Cyprus. 178,031 were citizens of Northern Cyprus, of whom 147,405 were born in Cyprus (112,534 from the north; 32,538 from the south; 371 did not indicate what region of Cyprus they were from); 27,333 born in Turkey; 2,482 born in the UK and 913 born in Bulgaria. Of the 147,405 citizens born in Cyprus, 120,031 say both parents were born in Cyprus; 16,824 say both parents born in Turkey; 10,361 have one parent born in Turkey and one parent born in Cyprus.
In 2010, the International Crisis Group estimated that the total population of the island was 1.1 million, of which there was an estimated 300,000 residents in the north, perhaps half of whom were either born in Turkey or are children of such settlers.
The villages of Rizokarpaso (in Northern Cyprus), Potamia (in Nicosia district) and Pyla (in Larnaca District) are the only settlements remaining with a mixed Greek and Turkish Cypriot population.
Y-Dna haplogroups are found at the following frequencies in Cyprus: J (43.07% including 6.20% J1), E1b1b (20.00%), R1 (12.30% including 9.2% R1b), F (9.20%), I (7.70%), K (4.60%), A (3.10%). J, K, F and E1b1b haplogroups consist of lineages with differential distribution within Middle East, North Africa and Europe.
Outside Cyprus there are significant and thriving Greek Cypriot and Turkish Cypriot diasporas in the United Kingdom, Australia, Canada, the United States, Greece and Turkey.
Religion
The majority of Greek Cypriots identify as Christians, specifically Greek Orthodox, whereas most Turkish Cypriots are adherents of Sunni Islam. The first President of Cyprus, Makarios III, was an archbishop.
Hala Sultan Tekke, situated near the Larnaca Salt Lake, is an object of pilgrimage for Muslims.
According to the 2001 census carried out in the government-controlled areas, 94.8% of the population were Eastern Orthodox, 0.9% Armenians and Maronites, 1.5% Roman Catholics, 1.0% Church of England, and 0.6% Muslims. There is also a Jewish community on Cyprus. The remaining 1.3% adhered to other religious denominations or did not state their religion. As of 2021, it is estimated that there are 13,280 Sikhs in Cyprus (1.1% of population), making it the third largest national proportion of Sikhs in the world. The Greek Orthodox, Armenian Apostolic Church, and both the Maronite and Latin Catholics are constitutionally recognized denominations and exempt from taxes.
Languages
Cyprus has two official languages, Greek and Turkish. Armenian and Cypriot Maronite Arabic are recognised as minority languages. Although without official status, English is widely spoken and it features widely on road signs, public notices, and in advertisements, etc. English was the sole official language during British colonial rule and the lingua franca until 1960, and continued to be used (de facto) in courts of law until 1989 and in legislation until 1996. 80.4% of Cypriots are proficient in the English language as a second language. Russian is widely spoken among the country's minorities, residents and citizens of post-Soviet countries, and Pontic Greeks. Russian, after English and Greek, is the third language used on many signs of shops and restaurants, particularly in Limassol and Paphos. In addition to these languages, 12% speak French and 5% speak German.
The everyday spoken language of Greek Cypriots is Cypriot Greek and that of Turkish Cypriots is Cypriot Turkish. These vernaculars both differ from their standard registers significantly.
Education
Cyprus has a highly developed system of primary and secondary education offering both public and private education. The high quality of instruction can be attributed in part to the fact that nearly 7% of GDP is spent on education, which makes Cyprus one of the top three spenders on education in the EU, along with Denmark and Sweden.
State schools are generally seen as equivalent in quality of education to private-sector institutions. However, the value of a state high-school diploma is limited by the fact that the grades obtained account for only around 25% of the final grade for each topic, with the remaining 75% assigned by the teacher during the semester, in a minimally transparent way. Cypriot universities (like universities in Greece) ignore high school grades almost entirely for admissions purposes. While a high-school diploma is mandatory for university attendance, admissions are decided almost exclusively on the basis of scores at centrally administered university entrance examinations that all university candidates are required to take.
The majority of Cypriots receive their higher education at Greek, British, Turkish, other European and North American universities. Cyprus currently has the highest percentage of citizens of working age who have higher-level education in the EU at 30% which is ahead of Finland's 29.5%. In addition, 47% of its population aged 25–34 have tertiary education, which is the highest in the EU. The body of Cypriot students is highly mobile, with 78.7% studying in a university outside Cyprus.
Culture
Greek Cypriots and Turkish Cypriots share much in common in their culture due to cultural exchanges, but also have differences. Several traditional foods (such as souvla and halloumi) and beverages are similar, as are expressions and ways of life. Hospitality and buying or offering food and drinks for guests are common among both. In both communities, music, dance and art are integral parts of social life, and many artistic, verbal and nonverbal expressions, traditional dances such as the tsifteteli, similarities in dance costumes and the importance placed on social activities are shared between the communities. However, the two communities have distinct religions and religious cultures, with Greek Cypriots traditionally being Greek Orthodox and Turkish Cypriots traditionally being Sunni Muslim, which has partly hindered cultural exchange. Greek Cypriots have influences from Greece and Christianity, while Turkish Cypriots have influences from Turkey and Islam.
The Limassol Carnival Festival is an annual carnival held in Limassol, Cyprus. The event, which is very popular in Cyprus, was introduced in the 20th century.
Arts
The art history of Cyprus can be said to stretch back up to 10,000 years, following the discovery of a series of Chalcolithic period carved figures in the villages of Khoirokoitia and Lempa. The island is home to numerous examples of high-quality religious icon painting from the Middle Ages, as well as many painted churches. Cypriot architecture was heavily influenced by the French Gothic and Italian Renaissance styles introduced to the island during the era of Latin domination (1191–1571).
A well known traditional art that dates at least from the 14th century is the Lefkara lace, which originates from the village of Lefkara. Lefkara lace is recognised as an intangible cultural heritage (ICH) by UNESCO, and it is characterised by distinct design patterns, and its intricate, time-consuming production process. Another local form of art that originated from Lefkara is the production of Cypriot Filigree (locally known as Trifourenio), a type of jewellery that is made with twisted threads of silver.
In modern times Cypriot art history begins with the painter Vassilis Vryonides (1883–1958) who studied at the Academy of Fine Arts in Venice. Arguably the two founding fathers of modern Cypriot art were Adamantios Diamantis (1900–1994) who studied at London's Royal College of Art and
Christophoros Savva (1924–1968), who also studied in London, at Saint Martin's School of Art. In 1960, Savva founded, together with Welsh artist Glyn Hughes (1931–2014), Apophasis [Decision], the first independent cultural centre of the newly established Republic of Cyprus. In 1968, Savva was among the artists representing Cyprus in its inaugural Pavilion at the 34th Venice Biennale. In many ways these two artists set the template for subsequent Cypriot art, and both their artistic styles and the patterns of their education remain influential to this day. In particular, the majority of Cypriot artists still train in England, while others train at art schools in Greece and at local art institutions such as the Cyprus College of Art, University of Nicosia and the Frederick Institute of Technology.
One of the features of Cypriot art is a tendency towards figurative painting although conceptual art is being rigorously promoted by a number of art "institutions" and most notably the Nicosia Municipal Art Centre. Municipal art galleries exist in all the main towns and there is a large and lively commercial art scene.
Other notable Greek Cypriot artists include Helene Black, Kalopedis family, Panayiotis Kalorkoti, Nicos Nicolaides, Stass Paraskos, Arestís Stasí, Telemachos Kanthos, Konstantia Sofokleous and Chris Achilleos, and Turkish Cypriot artists include İsmet Güney, Ruzen Atakan and Mutlu Çerkez.
Music
The traditional folk music of Cyprus has several common elements with Greek, Turkish, and Arabic music, all of which have descended from Byzantine music, including Greek Cypriot and Turkish Cypriot dances such as the sousta, syrtos, zeibekikos, tatsia, and karsilamas, as well as the Middle Eastern-inspired tsifteteli and arapies. There is also a form of musical poetry known as chattista, which is often performed at traditional feasts and celebrations. The instruments commonly associated with Cyprus folk music are the violin ("fkiolin"), lute ("laouto"), Cyprus flute ("pithkiavlin"), oud ("outi"), kanonaki and percussion (including the "tamboutsia"). Composers associated with traditional Cypriot music include Solon Michaelides, Marios Tokas, Evagoras Karageorgis and Savvas Salides. Notable musicians also include the acclaimed pianist Cyprien Katsaris, composer Andreas G. Orphanides, and composer and artistic director of the European Capital of Culture initiative Marios Joannou Elia.
Popular music in Cyprus is generally influenced by the Greek Laïka scene; artists who play in this genre include international platinum star Anna Vissi, Evridiki, and Sarbel. Hip hop and R&B have been supported by the emergence of Cypriot rap and the urban music scene at Ayia Napa, while in recent years the reggae scene has been growing, especially through the participation of many Cypriot artists in the annual Reggae Sunjam festival. Cypriot rock music and Éntekhno rock are also notable, and are often associated with artists such as Michalis Hatzigiannis and Alkinoos Ioannidis. Metal also has a small following in Cyprus, represented by bands such as Armageddon (rev.16:16), Blynd, Winter's Verge, Methysos and Quadraphonic.
Literature
Literary production of the antiquity includes the Cypria, an epic poem, probably composed in the late 7th century BC and attributed to Stasinus. The Cypria is one of the first specimens of Greek and European poetry. The Cypriot Zeno of Citium was the founder of the Stoic school of philosophy.
Epic poetry, notably the "acritic songs", flourished during the Middle Ages. Two chronicles, one written by Leontios Machairas and the other by Georgios Boustronios, cover the entire Middle Ages until the end of Frankish rule (4th century–1489). Poèmes d'amour written in medieval Greek Cypriot date back to the 16th century. Some of them are actual translations of poems written by Petrarch, Bembo, Ariosto and G. Sannazzaro. Many Cypriot scholars fled Cyprus at troubled times, such as Ioannis Kigalas (c. 1622–1687), who migrated from Cyprus to Italy in the 17th century; several of his works have survived in the books of other scholars.
Hasan Hilmi Efendi, a Turkish Cypriot poet, was rewarded by the Ottoman sultan Mahmud II and said to be the "sultan of the poems".
Modern Greek Cypriot literary figures include the poet and writer Costas Montis, poet Kyriakos Charalambides, poet Michalis Pasiardis, writer Nicos Nicolaides, Stylianos Atteshlis, Altheides, Loukis Akritas and Demetris Th. Gotsis. Dimitris Lipertis, Vasilis Michaelides and Pavlos Liasides are folk poets who wrote poems mainly in the Cypriot-Greek dialect. Among leading Turkish Cypriot writers are Osman Türkay, twice nominated for the Nobel Prize in Literature, Özker Yaşın, Neriman Cahit, Urkiye Mine Balman, Mehmet Yaşın and Neşe Yaşın.
There is an increasingly strong presence of both temporary and permanent émigré Cypriot writers in world literature, as well as writings by second- and third-generation Cypriot writers born or raised abroad, often writing in English. These include writers such as Michael Paraskos and Stephanos Stephanides.
Cyprus also appears in foreign literature: most of William Shakespeare's play Othello is set on the island. British writer Lawrence Durrell lived in Cyprus from 1952 until 1956, while working for the British colonial government on the island, and wrote the book Bitter Lemons about his time in Cyprus, which won the second Duff Cooper Prize in 1957.
Mass media
In the 2015 Freedom of the Press report of Freedom House, the Republic of Cyprus and Northern Cyprus were ranked "free". The Republic of Cyprus scored 25/100 in press freedom, 5/30 in Legal Environment, 11/40 in Political Environment, and 9/30 in Economic Environment (lower scores are better). Reporters Without Borders ranked the Republic of Cyprus 24th out of 180 countries in the 2015 World Press Freedom Index, with a score of 15.62.
The law provides for freedom of speech and press, and the government generally respects these rights in practice. An independent press, an effective judiciary, and a functioning democratic political system combine to ensure freedom of speech and of the press. The law prohibits arbitrary interference with privacy, family, home, or correspondence, and the government generally respects these prohibitions in practice.
Local television companies in Cyprus include the state-owned Cyprus Broadcasting Corporation, which runs two television channels. In addition, on the Greek Cypriot side of the island there are the private channels ANT1 Cyprus, Plus TV, Mega Channel, Sigma TV, Nimonia TV (NTV) and New Extra. In Northern Cyprus, the local channels are BRT, the Turkish Cypriot equivalent to the Cyprus Broadcasting Corporation, and a number of private channels. The majority of local arts and cultural programming is produced by the Cyprus Broadcasting Corporation and BRT, with local arts documentaries, review programmes and filmed drama series.
Cinema
The internationally best-known Cypriot director to have worked abroad is Michael Cacoyannis.
In the late 1960s and early 1970s, George Filis produced and directed Gregoris Afxentiou, Etsi Prodothike i Kypros, and The Mega Document. In 1994, Cypriot film production received a boost with the establishment of the Cinema Advisory Committee. In 2000, the annual amount set aside for filmmaking in the national budget was CYP£500,000 (about €850,000). In addition to government grants, Cypriot co-productions are eligible for funding from the Council of Europe's Eurimages Fund, which finances European film co-productions. To date, four feature films on which a Cypriot was an executive producer have received funding from Eurimages. The first was I Sphagi tou Kokora (1996), followed by Hellados (unreleased), To Tama (1999), and O Dromos gia tin Ithaki (2000).
Cuisine
During the medieval period, under the French Lusignan monarchs of Cyprus, an elaborate form of courtly cuisine developed, fusing French, Byzantine and Middle Eastern forms. The Lusignan kings were known for importing Syrian cooks to Cyprus, and it has been suggested that one of the key routes by which Middle Eastern recipes such as blancmange reached France and other Western European countries was via the Lusignan Kingdom of Cyprus. These recipes became known in the West as vyands de Chypre, or foods of Cyprus, and the food historian William Woys Weaver has identified over one hundred of them in English, French, Italian and German recipe books of the Middle Ages. One that became particularly popular across Europe in the medieval and early modern periods was a stew made with chicken or fish called malmonia, which in English became mawmeny.
Another example of a Cypriot food ingredient entering the Western European canon is the cauliflower, still popular and used in a variety of ways on the island today, which was associated with Cyprus from the early Middle Ages. Writing in the 12th and 13th centuries the Arab botanists Ibn al-'Awwam and Ibn al-Baitar claimed the vegetable had its origins in Cyprus, and this association with the island was echoed in Western Europe, where cauliflowers were originally known as Cyprus cabbage or Cyprus colewart. There was also a long and extensive trade in cauliflower seeds from Cyprus, until well into the sixteenth century.
Although much of the Lusignan food culture was lost after the fall of Cyprus to the Ottomans in 1571, a number of dishes that would have been familiar to the Lusignans survive today, including various forms of tahini and houmous, zalatina, skordalia and pickled wild song birds called ambelopoulia. Ambelopoulia, which is today highly controversial, and illegal, was exported in vast quantities from Cyprus during the Lusignan and Venetian periods, particularly to Italy and France. In 1533 the English traveller to Cyprus, John Locke, claimed to have seen the pickled wild birds packed into large jars, of which 1200 jars were exported from Cyprus annually.
Also familiar to the Lusignans would have been Halloumi cheese, which some food writers today claim originated in Cyprus during the Byzantine period although the name of the cheese itself is thought by academics to be of Arabic origin. There is no surviving written documentary evidence of the cheese being associated with Cyprus before the year 1554, when the Italian historian Florio Bustron wrote of a sheep-milk cheese from Cyprus he called calumi. Halloumi (Hellim) is commonly served sliced, grilled, fried and sometimes fresh, as an appetiser or meze dish.
Seafood and fish dishes include squid, octopus, red mullet, and sea bass. Cucumber and tomato are used widely in salads. Common vegetable preparations include potatoes in olive oil and parsley, pickled cauliflower and beets, asparagus and taro. Other traditional delicacies are meat marinated in dried coriander seeds and wine, and eventually dried and smoked, such as lountza (smoked pork loin), charcoal-grilled lamb, souvlaki (pork and chicken cooked over charcoal), and sheftalia (minced meat wrapped in mesentery). Pourgouri (bulgur, cracked wheat) is the traditional source of carbohydrate other than bread, and is used to make the delicacy koubes.
Fresh vegetables and fruits are common ingredients. Frequently used vegetables include courgettes, green peppers, okra, green beans, artichokes, carrots, tomatoes, cucumbers, lettuce and grape leaves, and pulses such as beans, broad beans, peas, black-eyed beans, chick-peas and lentils. The most common fruits and nuts are pears, apples, grapes, oranges, mandarines, nectarines, medlar, blackberries, cherry, strawberries, figs, watermelon, melon, avocado, lemon, pistachio, almond, chestnut, walnut, and hazelnut.
Cyprus is also well known for its desserts, including lokum (also known as Turkish delight) and Soutzoukos. The island holds a protected geographical indication (PGI) for the lokum produced in the village of Geroskipou.
Sports
Sport governing bodies include the Cyprus Football Association, Cyprus Basketball Federation, Cyprus Volleyball Federation, Cyprus Automobile Association, Cyprus Badminton Federation, Cyprus Cricket Association, Cyprus Rugby Federation and the Cyprus Pool Association.
Notable sports teams in the Cyprus leagues include APOEL FC, Anorthosis Famagusta FC, AC Omonia, AEL Limassol FC, Apollon Limassol FC, Nea Salamis Famagusta FC, Olympiakos Nicosia, AEK Larnaca FC, Aris Limassol FC, AEL Limassol B.C., Keravnos B.C. and Apollon Limassol B.C. Stadiums or sports venues include the GSP Stadium (the largest in the Republic of Cyprus-controlled areas), Tsirion Stadium (second largest), Neo GSZ Stadium, Antonis Papadopoulos Stadium, Ammochostos Stadium, Makario Stadium and Alphamega Stadium.
In the 2008–09 season, Anorthosis Famagusta FC was the first Cypriot team to qualify for the UEFA Champions League group stage. The following season, APOEL FC qualified for the UEFA Champions League group stage, and reached the last 8 of the 2011–12 UEFA Champions League after finishing top of its group and beating French club Olympique Lyonnais in the Round of 16.
The Cyprus national rugby union team known as The Moufflons currently holds the record for most consecutive international wins, which is especially notable as the Cyprus Rugby Federation was only formed in 2006.
Footballer Sotiris Kaiafas won the European Golden Shoe in the 1975–76 season; Cyprus is the smallest country by population to have one of its players win the award. Tennis player Marcos Baghdatis was ranked 8th in the world, was a finalist at the Australian Open, and reached the Wimbledon semi-final, all in 2006. High jumper Kyriakos Ioannou achieved a jump of 2.35m at the 11th IAAF World Championships in Athletics in Osaka, Japan, in 2007, winning the bronze medal. He has been ranked third in the world. In motorsports, Tio Ellinas is a successful race car driver, currently racing in the GP3 Series for Marussia Manor Motorsport. There is also mixed martial artist Costas Philippou, who competed in UFC's middleweight division from 2011 until 2015. Costas holds a 6–4 record in UFC bouts.
Also notable for a Mediterranean island, the siblings Christopher and Sophia Papamichalopoulou qualified for the 2010 Winter Olympics in Vancouver, British Columbia, Canada. They were the only athletes who managed to qualify and thus represented Cyprus at the 2010 Winter Olympics.
The country's first ever Olympic medal, a silver medal, was won by the sailor Pavlos Kontides, at the 2012 Summer Olympics in the Men's Laser class.
See also
Ancient regions of Anatolia
Index of Cyprus-related articles
Outline of Cyprus
List of notable Cypriots
Notes
References
Further reading
Clark, Tommy. A Brief History of Cyprus (2020) excerpt
Sacopoulo, Marina (1966). Chypre d'aujourd'hui. Paris: G.-P. Maisonneuve et Larose. 406 p., ill. with b&w photos. and fold. maps.
External links
General Information
Cyprus. The World Factbook. Central Intelligence Agency.
Timeline of Cyprus by BBC
Cyprus from UCB Libraries GovPubs
Cyprus information from the United States Department of State includes Background Notes, Country Study and major reports
Cyprus profile from the BBC News
The UN in Cyprus
Government
Cyprus High Commission Trade Centre – London
Republic of Cyprus – English Language
Constitution of the Republic of Cyprus
Press and Information Office – Ministry of Interior
Cyprus Statistical Service
Tourism
Read about Cyprus on visitcyprus.com – the official travel portal for Cyprus
Cyprus informational portal and open platform for contribution of Cyprus-related content – www.Cyprus.com
Cuisine
Gastronomical map of Cyprus
Archaeology
Cypriot Pottery, Bryn Mawr College Art and Artifact Collections
The Cesnola collection of Cypriot art : stone sculpture, a fully digitised text from The Metropolitan Museum of Art libraries
The Mosaics of Khirbat al-Ma
Official publications
The British government's Foreign Affairs Committee report on Cyprus.
Legal Issues arising from certain population transfers and displacements on the territory of the Republic of Cyprus in the period since 20 July 1974
Address to Cypriots by President Papadopoulos (FULL TEXT)
Annan Plan
Embassy of Greece, USA – Cyprus: Geographical and Historical Background
Countries in Europe
Republics in the Commonwealth of Nations
Member states of the European Union
Eastern Mediterranean
Islands of Europe
International islands
Island countries
Mediterranean islands
Islands of Asia
Middle Eastern countries
West Asian countries
Member states of the Union for the Mediterranean
Member states of the United Nations
Countries in Asia
Member states of the Commonwealth of Nations
States and territories established in 1960
Countries and territories where Greek is an official language
Countries and territories where Turkish is an official language
Economy of Cyprus
The economy of Cyprus is a high-income economy as classified by the World Bank, and was included by the International Monetary Fund in its list of advanced economies in 2001. Cyprus adopted the euro as its official currency on 1 January 2008, replacing the Cypriot pound at an irrevocable fixed exchange rate of CYP 0.585274 per €1.
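To make the fixed conversion rate concrete, here is a minimal Python sketch of the arithmetic it implies (illustrative only; the constant is the rate quoted above, while the function names are chosen for this example):

```python
# Irrevocable conversion rate fixed when Cyprus adopted the euro on 1 January 2008.
CYP_PER_EUR = 0.585274  # Cypriot pounds per one euro


def cyp_to_eur(amount_cyp: float) -> float:
    """Convert an amount in Cypriot pounds (CYP) to euros (EUR)."""
    return amount_cyp / CYP_PER_EUR


def eur_to_cyp(amount_eur: float) -> float:
    """Convert an amount in euros (EUR) to Cypriot pounds (CYP)."""
    return amount_eur * CYP_PER_EUR


if __name__ == "__main__":
    # CYP 1,000 converts to roughly EUR 1,708.60 at the fixed rate.
    print(f"CYP 1000 = EUR {cyp_to_eur(1000):.2f}")
    print(f"EUR 1000 = CYP {eur_to_cyp(1000):.2f}")
```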
The 2012–2013 Cypriot financial crisis, part of the wider European debt crisis, has dominated the country's economic affairs in recent times. In March 2013, the Cypriot government reached an agreement with its eurozone partners to split the country's second biggest bank, the Cyprus Popular Bank (also known as Laiki Bank), into a "bad" bank which would be wound down over time and a "good" bank which would be absorbed by the larger Bank of Cyprus. In return for a €10 billion bailout from the European Commission, the European Central Bank and the International Monetary Fund, the Cypriot government would be required to impose a significant haircut on uninsured deposits. Insured deposits of €100,000 or less would not be affected. After a three-and-a-half-year recession, Cyprus returned to growth in the first quarter of 2015. Cyprus successfully concluded its three-year financial assistance programme at the end of March 2016, having borrowed a total of €6.3 billion from the European Stability Mechanism and €1 billion from the IMF. The remaining €2.7 billion of the ESM bailout was never dispensed, due to the Cypriot government's better than expected finances over the course of the programme.
Economy in the government-controlled area
Cyprus has an open, free-market, service-based economy with some light manufacturing. Internationally, Cyprus promotes its geographical location as a "bridge" between East and West, along with its educated English-speaking population, moderate local costs, good airline connections, and telecommunications.
Since gaining independence from the United Kingdom in 1960, Cyprus has had a record of successful economic performance, reflected in strong growth, full employment conditions and relative stability. The underdeveloped agrarian economy inherited from colonial rule has been transformed into a modern economy, with dynamic services, industrial and agricultural sectors and an advanced physical and social infrastructure. The Cypriots are among the most prosperous people in the Mediterranean region, with GDP per capita in 2023 approaching $35,000 in nominal terms and $54,000 on the basis of purchasing power parity.
Their standard of living is reflected in the country's "very high" Human Development Index, and Cyprus is ranked 23rd in the world in terms of the Quality-of-life Index.
However, after more than three decades of unbroken growth, the Cypriot economy contracted in 2009. This reflected the exposure of Cyprus to the Great Recession and European debt crisis. Furthermore, Cyprus was dealt a severe blow by the Evangelos Florakis Naval Base explosion in July 2011, with the cost to the economy estimated at €1–3 billion, or up to 17% of GDP.
The economic achievements of Cyprus during the preceding decades have been significant, bearing in mind the severe economic and social dislocation created by the Turkish invasion of 1974 and the continuing occupation of the northern part of the island by Turkey. The Turkish invasion inflicted a serious blow to the Cyprus economy and in particular to agriculture, tourism, mining and quarrying: 70 percent of the island's wealth-producing resources were lost, the tourist industry lost 65 percent of its hotels and tourist accommodation, the industrial sector lost 46 percent, and mining and quarrying lost 56 percent of production. The loss of the port of Famagusta, which handled 83 percent of the general cargo, and the closure of Nicosia International Airport, in the buffer zone, were additional setbacks.
The success of Cyprus in the economic sphere has been attributed, inter alia, to the adoption of a market-oriented economic system, the pursuance of sound macroeconomic policies by the government as well as the existence of a dynamic and flexible entrepreneurship and a highly educated labor force. Moreover, the economy benefited from the close cooperation between the public and private sectors.
In the past 30 years, the economy has shifted from agriculture to light manufacturing and services. The services sector, including tourism, contributes almost 80% to GDP and employs more than 70% of the labor force. Industry and construction account for approximately one-fifth of GDP and labor, while agriculture is responsible for 2.1% of GDP and 8.5% of the labor force. Potatoes and citrus are the principal export crops. After robust growth rates in the 1980s (average annual growth was 6.1%), economic performance in the 1990s was mixed: real GDP growth was 9.7% in 1992, 1.7% in 1993, 6.0% in 1994, 6.0% in 1995, 1.9% in 1996 and 2.3% in 1997. This pattern underlined the economy's vulnerability to swings in tourist arrivals (i.e., to economic and political conditions in Cyprus, Western Europe, and the Middle East) and the need to diversify the economy. Declining competitiveness in tourism and especially in manufacturing are expected to act as a drag on growth until structural changes are effected. Overvaluation of the Cypriot pound prior to the adoption of the euro in 2008 had kept inflation in check.
Trade is vital to the Cypriot economy — the island is not self-sufficient in food and until the recent offshore gas discoveries had few known natural resources – and the trade deficit continues to grow. Cyprus must import fuels, most raw materials, heavy machinery, and transportation equipment. More than 50% of its trade is with the rest of the European Union, especially Greece and the United Kingdom, while the Middle East receives 20% of exports. In 1991, Cyprus introduced a value-added tax (VAT), which is at 19% as of 13 January 2014. Cyprus ratified the new world trade agreement (General Agreement on Tariffs and Trade, GATT) in 1995 and began implementing it fully on 1 January 1996. EU accession negotiations started on 31 March 1998, and concluded when Cyprus joined the organization as a full member in 2004.
Investment climate
The Cyprus legal system is founded on English law, and is therefore familiar to most international financiers. Cyprus's legislation was aligned with EU norms in the period leading up to EU accession in 2004. Restrictions on foreign direct investment were removed, permitting 100% foreign ownership in many cases. Foreign portfolio investment in the Cyprus Stock Exchange was also liberalized. In 2002 a modern, business-friendly tax system was put in place with a 12.5% corporate tax rate, one of the lowest in the EU. Cyprus has concluded treaties on double taxation with more than 40 countries, and, as a member of the Eurozone, has no exchange restrictions. Non-residents and foreign investors may freely repatriate proceeds from investments in Cyprus.
Role as a financial hub
In the years following the dissolution of the Soviet Union, Cyprus gained great popularity as a portal for investment from the West into Russia and Eastern Europe, becoming the most common tax haven for companies of that origin. More recently, there have been increasing investment flows from the West through Cyprus into Asia, particularly China and India, South America and the Middle East. In addition, businesses from outside the EU use Cyprus as their entry point for investment into Europe. The business services sector remains the fastest-growing sector of the economy, and has overtaken all other sectors in importance. CIPA has been fundamental to this trend.
Agriculture
Cyprus produced in 2018:
106 thousand tons of potato;
37 thousand tons of tangerine;
23 thousand tons of grape;
20 thousand tons of orange;
19 thousand tons of grapefruit;
19 thousand tons of olive;
18 thousand tons of wheat;
18 thousand tons of barley;
15 thousand tons of tomato;
13 thousand tons of watermelon;
10 thousand tons of melon;
as well as smaller quantities of other agricultural products.
Oil and gas
Surveys suggest more than 100 trillion cubic feet (2.831 trillion cubic metres) of reserves lie untapped in the eastern Mediterranean basin between Cyprus and Israel – almost equal to the world's total annual consumption of natural gas. In 2011, Noble Energy estimated that a pipeline to Leviathan gas field could be in operation as soon as 2014 or 2015. In January 2012, Noble Energy announced a natural gas field discovery. It attracted Shell, Delek and Avner as partners. Several production sharing contracts for exploration were signed with international companies, including Eni, KOGAS, TotalEnergies, ExxonMobil and QatarEnergy. It is necessary to develop infrastructure for landing the gas in Cyprus and for liquefaction for export.
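As a quick sanity check on the reserve figure quoted above, the short Python sketch below reproduces the conversion from cubic feet to cubic metres (illustrative arithmetic only, using the standard cubic-foot-to-cubic-metre factor):

```python
# Standard volume conversion: one cubic foot is approximately 0.0283168 cubic metres.
CUBIC_METRES_PER_CUBIC_FOOT = 0.0283168


def cubic_feet_to_cubic_metres(volume_cubic_feet: float) -> float:
    """Convert a gas volume from cubic feet to cubic metres."""
    return volume_cubic_feet * CUBIC_METRES_PER_CUBIC_FOOT


if __name__ == "__main__":
    reserves_cubic_feet = 100e12  # ~100 trillion cubic feet estimated for the basin
    reserves_cubic_metres = cubic_feet_to_cubic_metres(reserves_cubic_feet)
    # Prints roughly 2.83 trillion cubic metres, matching the figure in the text.
    print(f"{reserves_cubic_metres / 1e12:.2f} trillion cubic metres")
```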
Role as a shipping hub
Cyprus constitutes one of the largest ship management centers in the world; around 50 ship management companies and marine-related foreign enterprises are conducting their international activities in the country while the majority of the largest ship management companies in the world have established fully fledged offices on the island. Its geographical position at the crossroads of three continents and its proximity to the Suez Canal has promoted merchant shipping as an important industry for the island nation. Cyprus has the tenth-largest registered fleet in the world, with 1,030 vessels accounting for 31,706,000 dwt as of 1 January 2013.
Tourism
Tourism is an important factor of the island state's economy, culture, and overall brand development. With over 2 million tourist arrivals per year, it is the 40th most popular destination in the world. However, per capita of the local population, it ranks 17th. The industry has been honored with various international awards, including the Sustainable Destinations Global Top 100, VISION on Sustainable Tourism, Totem Tourism and Green Destination titles bestowed on Limassol and Paphos in December 2014. The island's beaches have been awarded 57 Blue Flags. Cyprus became a full member of the World Tourism Organization when it was created in 1975. According to the World Economic Forum's 2013 Travel and Tourism Competitiveness Index, Cyprus' tourism industry ranks 29th in the world in terms of overall competitiveness, while in terms of tourism infrastructure it ranks 1st in the world. The Cyprus Tourism Organization has the status of a semi-governmental organisation charged with overseeing industry practices and promoting the island worldwide.
Trade
In fiscal year 2008, the aggregate value of goods and services exported by Cyprus was in the region of $1.53 billion. It primarily exported goods and services such as citrus fruits, cement, potatoes, clothing and pharmaceuticals. Over the same period, the total value of goods and services imported by Cyprus was about $8.689 billion. Prominent goods and services imported by Cyprus in 2008 were consumer goods, machinery, petroleum and other lubricants, transport equipment and intermediate goods.
Cypriot trade partners
Traditionally, Greece has been a major export and import partner of Cyprus. In fiscal 2007, it accounted for 21.1 percent of total exports of Cyprus and was the source of 17.7 percent of goods and services imported by Cyprus. Other important trading partners include the United Kingdom and Italy.
Eurozone crisis
In 2012, Cyprus became affected by the Eurozone financial and banking crisis. In June 2012, the Cypriot government announced it would need foreign aid to support the Cyprus Popular Bank, and this was followed by Fitch downgrading Cyprus's credit rating to junk status. Fitch said Cyprus would need additional support for its banks, and attributed the downgrade mainly to the exposure of Bank of Cyprus, Cyprus Popular Bank and Hellenic Bank (Cyprus's three largest banks) to the Greek financial crisis.
In June 2012, the Cypriot finance minister, Vassos Shiarly, stated that officials from the European Central Bank, the European Commission and the IMF were to carry out an in-depth investigation into Cyprus' economy and banking sector to assess the level of funding it required. The Ministry of Finance rejected the possibility that Cyprus would be forced to undergo the sweeping austerity measures that had caused turbulence in Greece, but admitted that there would be "some negative repercussion".
In November 2012, international lenders negotiating a bailout with the Cypriot government agreed on a key capital ratio for banks and a system for the sector's supervision. Both commercial banks and cooperatives would be overseen by the Central Bank and the Ministry of Finance. They also set a core Tier 1 ratio – a measure of financial strength – of 9% by the end of 2013 for banks, which could then rise to 10% in 2014.
In 2014, Harris Georgiades pointed out that exiting the Memorandum with the European troika required a return to the markets. This, he said, required "timely, effective and full implementation of the program." The Finance Minister stressed the need to implement the Memorandum of Understanding without an additional loan.
In 2015, Cyprus was praised by the President of the European Commission for adopting the austerity measures and not hesitating to follow a tough reform program.
In 2016, Moody's Investors Service changed its outlook on the Cypriot banking system to positive from stable, reflecting the view that the recovery will restore banks to profitability and improve asset quality. The quick economic recovery was driven by tourism, business services and increased consumer spending. Creditor confidence was also strengthened, allowing Bank of Cyprus to reduce its Emergency Liquidity Assistance to €2.0 billion (from €9.4 billion in 2013). Within the same period, Bank of Cyprus chairman Josef Ackermann urged the European Union to pledge financial support for a permanent solution to the Cyprus dispute.
Economy of Northern Cyprus
The economy of Turkish-occupied northern Cyprus is about one-fifth the size of the economy of the government-controlled area, while GDP per capita is around half. Because the de facto administration is recognized only by Turkey, it has had much difficulty arranging foreign financing, and foreign firms have hesitated to invest there. The economy mainly revolves around the agricultural sector and government service, which together employ about half of the work force.
The tourism sector also contributes substantially to the economy. Moreover, the small economy has suffered setbacks because the Turkish lira is legal tender. To compensate for the economy's weakness, Turkey has been known to provide significant financial aid. In both parts of the island, water shortage is a growing problem, and several desalination plants are planned.
The economic disparity between the two communities is pronounced. Although the economy operates on a free-market basis, the lack of private and government investment, shortages of skilled labor and experienced managers, and inflation and the devaluation of the Turkish lira continue to plague the economy.
Trade with Turkey
Turkey is by far the main trading partner of Northern Cyprus, supplying 55% of imports and absorbing 48% of exports. In a landmark case, the European Court of Justice (ECJ) ruled on 5 July 1994 against the British practice of importing produce from Northern Cyprus based on certificates of origin and phytosanitary certificates granted by the de facto authorities. The ECJ decided that only goods bearing certificates of origin from the internationally recognized Republic of Cyprus could be imported by EU member states. The decision resulted in a considerable decrease of Turkish Cypriot exports to the EU: from $36.4 million (or 66.7% of total Turkish Cypriot exports) in 1993 to $24.7 million (or 35% of total exports) in 1996. Even so, the EU continues to be the second-largest trading partner of Northern Cyprus, with a 24.7% share of total imports and 35% share of total exports.
The most important exports of Northern Cyprus are citrus and dairy products. These are followed by rakı, scrap and clothing.
Assistance from Turkey is the mainstay of the Turkish Cypriot economy. Under the latest economic protocol (signed 3 January 1997), Turkey has undertaken to provide loans totalling $250 million for the purpose of implementing projects included in the protocol related to public finance, tourism, banking, and privatization. Fluctuation in the Turkish lira, which suffered from hyperinflation every year until its replacement by the Turkish new lira in 2005, exerted downward pressure on the Turkish Cypriot standard of living for many years.
The de facto authorities have instituted a free market in foreign exchange and permit residents to hold foreign-currency denominated bank accounts. This encourages transfers from Turkish Cypriots living abroad.
Happiness
Economic factors such as GDP and national income strongly correlate with the happiness of a nation's citizens. In a study published in 2005, citizens from a sample of countries were asked to rate how happy or unhappy they were as a whole on a scale of 1 to 7 (Ranking: 1. Completely happy, 2. Very happy, 3. Fairly happy, 4. Neither happy nor unhappy, 5. Fairly unhappy, 6. Very unhappy, 7. Completely unhappy). Cyprus had a score of 5.29. On the question of how satisfied citizens were with their main job, Cyprus scored 5.36 on a scale of 1 to 7 (Ranking: 1. Completely satisfied, 2. Very satisfied, 3. Fairly satisfied, 4. Neither satisfied nor dissatisfied, 5. Fairly dissatisfied, 6. Very dissatisfied, 7. Completely dissatisfied). In another ranking of happiness, Northern Cyprus ranks 58th and Cyprus ranks 61st, according to the 2018 World Happiness Report. The report rates 156 countries based on variables including income, healthy life expectancy, social support, freedom, trust, and generosity.
Economic factors play a significant role in the general life satisfaction of Cypriot citizens, especially for women, who participate in the labor force at a lower rate, work in lower ranks, and work in more public and service sector jobs than men. Women of different skill-sets and "differing economic objectives and constraints" participate in the tourism industry. Women take part in this industry through jobs such as hotel work in order to serve or bring pride to their family, not necessarily for their own satisfaction. In this study, women with income higher than the mean household income reported higher levels of satisfaction with their lives, while those with lower income reported the opposite. When asked who they compared themselves with (those with lower, the same, or higher economic status), results showed that those who compared themselves with people of higher economic status had the lowest level of life satisfaction. While the correlation between income and happiness is positive, it is weak; there is a stronger correlation between comparison and happiness. This indicates that not only income level but also income level in relation to that of others affects life satisfaction.
Classified as a Mediterranean welfare regime, Cyprus has a weak public welfare system. This means there is a strong reliance on the family, instead of the state, for both familial and economic support. Another finding is that being a full-time housewife has a stronger negative effect on happiness for women of Northern Cyprus than being unemployed, showing how the combination of gender and the economic factor of participating in the labor force affects life satisfaction. Economic factors also negatively correlate with the happiness levels of those who live in the capital city: citizens living in the capital express lower levels of happiness. As found in this study, citizens of Cyprus who live in its capital, Nicosia, are significantly less happy than others, whether or not socio-economic variables are controlled for. Another finding was that young people in the capital are unhappier than young people in the rest of Cyprus, whereas older people are not.
See also
Cypriot pound
Economy of Europe
References
Cyprus. The World Factbook. Central Intelligence Agency.
External links
Cyprus
Cyprus
Cretaceous
The Cretaceous is a geological period that lasted from about 145 to 66 million years ago (Mya). It is the third and final period of the Mesozoic Era, as well as the longest. At around 79 million years, it is the longest geological period of the entire Phanerozoic. The name is derived from the Latin creta, "chalk", which is abundant in the latter half of the period. It is usually abbreviated K, for its German translation Kreide.
The Cretaceous was a period with a relatively warm climate, resulting in high eustatic sea levels that created numerous shallow inland seas. These oceans and seas were populated with now-extinct marine reptiles, ammonites, and rudists, while dinosaurs continued to dominate on land. The world was ice-free, and forests extended to the poles. During this time, new groups of mammals and birds appeared. During the Early Cretaceous, flowering plants appeared and began to rapidly diversify, becoming the dominant group of plants across the Earth by the end of the Cretaceous, coincident with the decline and extinction of previously widespread gymnosperm groups.
The Cretaceous (along with the Mesozoic) ended with the Cretaceous–Paleogene extinction event, a large mass extinction in which many groups, including non-avian dinosaurs, pterosaurs, and large marine reptiles, died out. The end of the Cretaceous is defined by the abrupt Cretaceous–Paleogene boundary (K–Pg boundary), a geologic signature associated with the mass extinction that lies between the Mesozoic and Cenozoic Eras.
Etymology and history
The Cretaceous as a separate period was first defined by Belgian geologist Jean d'Omalius d'Halloy in 1822 as the Terrain Crétacé, using strata in the Paris Basin and named for the extensive beds of chalk (calcium carbonate deposited by the shells of marine invertebrates, principally coccoliths), found in the upper Cretaceous of Western Europe. The name Cretaceous was derived from the Latin creta, meaning chalk. The twofold division of the Cretaceous was implemented by Conybeare and Phillips in 1822. Alcide d'Orbigny in 1840 divided the French Cretaceous into five étages (stages): the Neocomian, Aptian, Albian, Turonian, and Senonian, later adding the Urgonian between Neocomian and Aptian and the Cenomanian between the Albian and Turonian.
Geology
Subdivisions
The Cretaceous is divided into Early and Late Cretaceous epochs, or Lower and Upper Cretaceous series. In older literature, the Cretaceous is sometimes divided into three series: Neocomian (lower/early), Gallic (middle) and Senonian (upper/late). A subdivision into 12 stages, all originating from European stratigraphy, is now used worldwide. In many parts of the world, alternative local subdivisions are still in use.
From youngest to oldest, the subdivisions of the Cretaceous period are: the Maastrichtian, Campanian, Santonian, Coniacian, Turonian and Cenomanian stages of the Upper Cretaceous, and the Albian, Aptian, Barremian, Hauterivian, Valanginian and Berriasian stages of the Lower Cretaceous.
Boundaries
The lower boundary of the Cretaceous is currently undefined, and the Jurassic–Cretaceous boundary is the only system boundary to lack a defined Global Boundary Stratotype Section and Point (GSSP). Placing a GSSP for this boundary has been difficult because of the strong regionality of most biostratigraphic markers, and the lack of any chemostratigraphic events, such as isotope excursions (large sudden changes in ratios of isotopes), that could be used to define or correlate a boundary. Calpionellids, an enigmatic group of planktonic protists with urn-shaped calcitic tests briefly abundant during the latest Jurassic to earliest Cretaceous, have been suggested as the most promising candidates for fixing the Jurassic–Cretaceous boundary. In particular, the first appearance of Calpionella alpina, coinciding with the base of the eponymous Alpina subzone, has been proposed as the definition of the base of the Cretaceous. The working definition for the boundary has often been placed as the first appearance of the ammonite Strambergella jacobi, formerly placed in the genus Berriasella, but its use as a stratigraphic indicator has been questioned, as its first appearance does not correlate with that of C. alpina. The boundary is officially considered by the International Commission on Stratigraphy to be approximately 145 million years ago, but other estimates have been proposed based on U-Pb geochronology, ranging as young as 140 million years ago.
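For readers unfamiliar with how such U-Pb ages are derived, the Python sketch below illustrates the standard decay-age equation for the ²³⁸U–²⁰⁶Pb system, t = ln(1 + ²⁰⁶Pb/²³⁸U) / λ. It is a simplified illustration assuming a closed system with no initial lead, not the workflow of any particular study:

```python
import math

# Decay constant of uranium-238 in 1/years (half-life of about 4.468 billion years).
LAMBDA_U238 = 1.55125e-10


def u_pb_age(pb206_u238_ratio: float) -> float:
    """Age in years implied by a measured 206Pb/238U atomic ratio, assuming a
    closed system with no initial lead (a deliberate simplification)."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238


def expected_ratio(age_years: float) -> float:
    """Inverse relation: the 206Pb/238U ratio a mineral of a given age would show."""
    return math.exp(LAMBDA_U238 * age_years) - 1.0


if __name__ == "__main__":
    # A zircon crystallised at the nominal boundary age of ~145 Ma would carry a
    # 206Pb/238U ratio of roughly 0.0227; inverting that ratio recovers the age.
    ratio_at_boundary = expected_ratio(145e6)
    print(f"Expected 206Pb/238U ratio at 145 Ma: {ratio_at_boundary:.4f}")
    print(f"Recovered age: {u_pb_age(ratio_at_boundary) / 1e6:.1f} Ma")
```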
The upper boundary of the Cretaceous is sharply defined, being placed at an iridium-rich layer found worldwide that is believed to be associated with the Chicxulub impact crater, with its boundaries circumscribing parts of the Yucatán Peninsula and extending into the Gulf of Mexico. This layer has been dated at 66.043 Mya.
At the end of the Cretaceous, the impact of a large body with the Earth may have been the punctuation mark at the end of a progressive decline in biodiversity during the Maastrichtian age. The result was the extinction of three-quarters of Earth's plant and animal species. The impact created the sharp break known as the K–Pg boundary (formerly known as the K–T boundary). Earth's biodiversity required substantial time to recover from this event, despite the probable existence of an abundance of vacant ecological niches.
Despite the severity of the K–Pg extinction event, there were significant variations in the rate of extinction between and within different clades. Species that depended on photosynthesis declined or became extinct as atmospheric particles blocked solar energy. As is the case today, photosynthesizing organisms, such as phytoplankton and land plants, formed the primary part of the food chain in the late Cretaceous, and all else that depended on them suffered as well. Herbivorous animals, which depended on plants and plankton as their food, died out as their food sources became scarce; consequently, the top predators, such as Tyrannosaurus rex, also perished. Yet only three major groups of tetrapods disappeared completely: the nonavian dinosaurs, the plesiosaurs and the pterosaurs. The other Cretaceous groups that did not survive into the Cenozoic – the ichthyosaurs, the last remaining temnospondyls (Koolasuchus), and the nonmammalian cynodonts – were already extinct millions of years before the event occurred.
Coccolithophorids and molluscs, including ammonites, rudists, freshwater snails, and mussels, as well as organisms whose food chain included these shell builders, became extinct or suffered heavy losses. For example, ammonites are thought to have been the principal food of mosasaurs, a group of giant marine lizards related to snakes that became extinct at the boundary.
Omnivores, insectivores, and carrion-eaters survived the extinction event, perhaps because of the increased availability of their food sources. At the end of the Cretaceous, there seem to have been no purely herbivorous or carnivorous mammals. Mammals and birds that survived the extinction fed on insects, larvae, worms, and snails, which in turn fed on dead plant and animal matter. Scientists theorise that these organisms survived the collapse of plant-based food chains because they fed on detritus.
In stream communities, few groups of animals became extinct. Stream communities rely less on food from living plants and more on detritus that washes in from land. This particular ecological niche buffered them from extinction. Similar, but more complex patterns have been found in the oceans. Extinction was more severe among animals living in the water column than among animals living on or in the seafloor. Animals in the water column are almost entirely dependent on primary production from living phytoplankton, while animals living on or in the ocean floor feed on detritus or can switch to detritus feeding.
The largest air-breathing survivors of the event, crocodilians and champsosaurs, were semiaquatic and had access to detritus. Modern crocodilians can live as scavengers and can survive for months without food and go into hibernation when conditions are unfavorable, and their young are small, grow slowly, and feed largely on invertebrates and dead organisms or fragments of organisms for their first few years. These characteristics have been linked to crocodilian survival at the end of the Cretaceous.
Geologic formations
The high sea level and warm climate of the Cretaceous meant large areas of the continents were covered by warm, shallow seas, providing habitat for many marine organisms. The Cretaceous was named for the extensive chalk deposits of this age in Europe, but in many parts of the world, the deposits from the Cretaceous are of marine limestone, a rock type that is formed under warm, shallow marine conditions. Due to the high sea level, there was extensive space for such sedimentation. Because of the relatively young age and great thickness of the system, Cretaceous rocks are evident in many areas worldwide.
Chalk is a rock type characteristic for (but not restricted to) the Cretaceous. It consists of coccoliths, microscopically small calcite skeletons of coccolithophores, a type of algae that prospered in the Cretaceous seas.
Stagnation of deep sea currents in middle Cretaceous times caused anoxic conditions in the sea water leaving the deposited organic matter undecomposed. Half of the world's petroleum reserves were laid down at this time in the anoxic conditions of what would become the Persian Gulf and the Gulf of Mexico. In many places around the world, dark anoxic shales were formed during this interval, such as the Mancos Shale of western North America. These shales are an important source rock for oil and gas, for example in the subsurface of the North Sea.
Europe
In northwestern Europe, chalk deposits from the Upper Cretaceous are characteristic of the Chalk Group, which forms the white cliffs of Dover on the south coast of England and similar cliffs on the French coast of Normandy. The group is found in England, northern France, the Low Countries, northern Germany, Denmark and in the subsurface of the southern part of the North Sea. Chalk is not easily consolidated and the Chalk Group still consists of loose sediments in many places. The group also has other limestones and arenites. Among the fossils it contains are sea urchins, belemnites, ammonites and sea reptiles such as Mosasaurus.
In southern Europe, the Cretaceous is usually a marine system consisting of competent limestone beds or incompetent marls. Because the Alpine mountain chains did not yet exist in the Cretaceous, these deposits formed on the southern edge of the European continental shelf, at the margin of the Tethys Ocean.
North America
During the Cretaceous, the present North American continent was isolated from the other continents. In the Jurassic, the North Atlantic had already opened, leaving a proto-ocean between Europe and North America. From north to south across the continent, the Western Interior Seaway started forming. This inland sea separated the elevated areas of Laramidia in the west and Appalachia in the east. Three dinosaur clades found in Laramidia (troodontids, therizinosaurids and oviraptorosaurs) are absent from Appalachia from the Coniacian through the Maastrichtian.
Paleogeography
During the Cretaceous, the late-Paleozoic-to-early-Mesozoic supercontinent of Pangaea completed its tectonic breakup into the present-day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin mountain building (orogenies) that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies.
Gondwana had begun to break up during the Jurassic Period, but its fragmentation accelerated during the Cretaceous and was largely complete by the end of the period. South America, Antarctica, and Australia rifted away from Africa (though India and Madagascar remained attached to each other until around 80 million years ago); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide. To the north of Africa the Tethys Sea continued to narrow. During most of the Late Cretaceous, North America was divided in two by the Western Interior Seaway, a large interior sea separating Laramidia to the west from Appalachia to the east; the seaway then receded late in the period, leaving thick marine deposits sandwiched between coal beds. Bivalve palaeobiogeography also indicates that Africa was split in half by a shallow sea during the Coniacian and Santonian, connecting the Tethys with the South Atlantic by way of the central Sahara and Central Africa, which were then underwater. Yet another shallow seaway ran between what is now Norway and Greenland, connecting the Tethys to the Arctic Ocean and enabling biotic exchange between the two oceans. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged.
The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous rock record especially fine. Famous formations from North America include the rich marine fossils of Kansas's Smoky Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous exposures occur in Europe (e.g., the Weald) and China (the Yixian Formation). In the area that is now India, massive lava beds called the Deccan Traps were erupted in the very late Cretaceous and early Paleocene.
Climate
Palynological evidence indicates the Cretaceous climate had three broad phases: a Berriasian–Barremian warm-dry phase, an Aptian–Santonian warm-wet phase, and a Campanian–Maastrichtian cool-dry phase. As in the Cenozoic, the 400,000-year eccentricity cycle was the dominant orbital cycle governing carbon flux between different reservoirs and influencing global climate. The location of the Intertropical Convergence Zone (ITCZ) was roughly the same as in the present.
The cooling trend of the last epoch of the Jurassic, the Tithonian, continued into the Berriasian, the first age of the Cretaceous. The North Atlantic seaway opened and enabled the flow of cool water from the Boreal Ocean into the Tethys. There is evidence that snowfalls were common in the higher latitudes during this age, and the tropics became wetter than during the Triassic and Jurassic. Glaciation was restricted to high-latitude mountains, though seasonal snow may have existed farther from the poles. After the end of the first age, however, temperatures began to increase again, with a number of thermal excursions, such as the middle Valanginian Weissert Thermal Excursion (WTX), which was caused by the Paraná-Etendeka Large Igneous Province's activity. It was followed by the middle Hauterivian Faraoni Thermal Excursion (FTX) and the early Barremian Hauptblatterton Thermal Event (HTE). The HTE marked the ultimate end of the Tithonian-early Barremian Cool Interval (TEBCI). The TEBCI was followed by the Barremian-Aptian Warm Interval (BAWI). This hot climatic interval coincides with Manihiki and Ontong Java Plateau volcanism and with the Selli Event. Early Aptian tropical sea surface temperatures (SSTs) were 27–32 °C, based on TEX86 measurements from the equatorial Pacific. During the Aptian, Milankovitch cycles governed the occurrence of anoxic events by modulating the intensity of the hydrological cycle and terrestrial runoff. The BAWI itself was followed by the Aptian-Albian Cold Snap (AACS) that began about 118 Ma. A short, relatively minor ice age may have occurred during this so-called "cold snap", as evidenced by glacial dropstones in the western parts of the Tethys Ocean and the expansion of calcareous nannofossils that dwelt in cold water into lower latitudes. The AACS is associated with an arid period in the Iberian Peninsula.
Temperatures increased drastically after the end of the AACS, which ended around 111 Ma with the Paquier/Urbino Thermal Maximum, giving way to the Mid-Cretaceous Hothouse (MKH), which lasted from the early Albian until the early Campanian. Faster rates of seafloor spreading and entry of carbon dioxide into the atmosphere are believed to have initiated this period of extreme warmth. The MKH was punctuated by multiple thermal maxima of extreme warmth. The Leenhardt Thermal Event (LTE) occurred around 110 Ma, followed shortly by the l’Arboudeyesse Thermal Event (ATE) a million years later. Following these two hyperthermals was the Amadeus Thermal Maximum around 106 Ma, during the middle Albian. Then, around a million years after that, occurred the Petite Verol Thermal Event (PVTE). Afterwards, around 102.5 Ma, the Event 6 Thermal Event (EV6) took place; this event was itself followed by the Breistroffer Thermal Maximum around 101 Ma, during the latest Albian. Approximately 94 Ma, the Cenomanian-Turonian Thermal Maximum occurred, with this hyperthermal being the most extreme hothouse interval of the Cretaceous. Temperatures cooled down slightly over the next few million years, but then another thermal maximum, the Coniacian Thermal Maximum, happened, with this thermal event being dated to around 87 Ma. Atmospheric CO2 levels may have varied by thousands of ppm throughout the MKH. Mean annual temperatures at the poles during the MKH exceeded 14 °C. Such hot temperatures during the MKH resulted in a very gentle temperature gradient from the equator to the poles; the latitudinal temperature gradient during the Cenomanian-Turonian Thermal Maximum was 0.54 °C per ° latitude for the Southern Hemisphere and 0.49 °C per ° latitude for the Northern Hemisphere, in contrast to present-day values of 1.07 and 0.69 °C per ° latitude for the Southern and Northern hemispheres, respectively. This meant weaker global winds, which drive the ocean currents, and resulted in less upwelling and more stagnant oceans than today. This is evidenced by widespread black shale deposition and frequent anoxic events. Tropical SSTs during the late Albian most likely averaged around 30 °C. Despite this high SST, seawater was not hypersaline at this time, as this would have required significantly higher temperatures still. Tropical SSTs during the Cenomanian-Turonian Thermal Maximum were at least 30 °C, though one study estimated them as high as between 33 and 42 °C. An intermediate estimate of ~33-34 °C has also been given. Meanwhile, deep ocean temperatures were substantially warmer than today's; one study estimated that deep ocean temperatures were between 12 and 20 °C during the MKH. The poles were so warm that ectothermic reptiles were able to inhabit them.
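To put the quoted latitudinal gradients in perspective, the Python sketch below performs a naive linear extrapolation over 90° of latitude (real meridional temperature profiles are not linear, so this is purely illustrative) to compare the implied equator-to-pole temperature differences:

```python
# Latitudinal sea-surface temperature gradients (degrees C per degree of latitude)
# quoted in the text for the Southern (SH) and Northern (NH) Hemispheres.
GRADIENTS = {
    "Cenomanian-Turonian Thermal Maximum": {"SH": 0.54, "NH": 0.49},
    "Present day": {"SH": 1.07, "NH": 0.69},
}


def equator_to_pole_difference(gradient_per_degree: float) -> float:
    """Equator-to-pole temperature difference implied by extending a uniform
    gradient over 90 degrees of latitude (a deliberate simplification)."""
    return gradient_per_degree * 90.0


if __name__ == "__main__":
    for interval, hemispheres in GRADIENTS.items():
        for hemisphere, gradient in hemispheres.items():
            difference = equator_to_pole_difference(gradient)
            print(f"{interval} ({hemisphere}): ~{difference:.0f} °C equator-to-pole")
```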
Beginning in the Santonian, near the end of the MKH, the global climate began to cool, a trend that continued across the Campanian. This cooling, driven by falling levels of atmospheric carbon dioxide, ended the MKH and marked the transition into a cooler climatic interval, known formally as the Late Cretaceous-Early Palaeogene Cool Interval (LKEPCI). Tropical SSTs declined from around 35 °C in the early Campanian to around 28 °C in the Maastrichtian. Deep ocean temperatures declined to 9 to 12 °C, though the shallow temperature gradient between tropical and polar seas remained. Regional conditions in the Western Interior Seaway changed little between the MKH and the LKEPCI. Two upticks in global temperatures are known to have occurred during the Maastrichtian, bucking the trend of overall cooler temperatures during the LKEPCI. Between 70 and 69 Ma and 66–65 Ma, isotopic ratios indicate elevated atmospheric CO2 pressures, with levels of 1000–1400 ppmV, and correspondingly elevated mean annual temperatures in west Texas. Atmospheric CO2 and temperature relations indicate that a doubling of pCO2 was accompanied by a ~0.6 °C increase in temperature. The latter warming interval, occurring at the very end of the Cretaceous, was triggered by the activity of the Deccan Traps. The LKEPCI lasted into the Late Palaeocene, when it gave way to another supergreenhouse interval.
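The quoted pCO2–temperature relation can be written in the usual logarithmic form (a minimal sketch; the sensitivity of ~0.6 °C per doubling is taken directly from the estimate above, and the reference concentration is an arbitrary illustrative choice):

\[
\Delta T \approx S \,\log_{2}\!\left(\frac{p\mathrm{CO}_{2}}{p\mathrm{CO}_{2,\mathrm{ref}}}\right),
\qquad S \approx 0.6\ ^{\circ}\mathrm{C}\ \text{per doubling}
\]

Under this relation a rise from, say, 700 to 1400 ppmV would correspond to only about 0.6 °C of additional warming.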
The production of large quantities of magma, variously attributed to mantle plumes or to extensional tectonics, further pushed sea levels up, so that large areas of the continental crust were covered with shallow seas. The Tethys Sea connecting the tropical oceans east to west also helped to warm the global climate. Warm-adapted plant fossils are known from localities as far north as Alaska and Greenland, while dinosaur fossils have been found within 15 degrees of the Cretaceous south pole. Antarctic marine glaciation during the Turonian Age was once suggested on the basis of isotopic evidence, but this has subsequently been attributed to inconsistent isotopic proxies, with evidence of polar rainforests at 82° S during this time interval. Rafting of stones by ice into marine environments occurred during much of the Cretaceous, but evidence of deposition directly from glaciers is limited to the Early Cretaceous of the Eromanga Basin in southern Australia.
Flora
Flowering plants (angiosperms) make up around 90% of living plant species today. Prior to the rise of angiosperms, during the Jurassic and the Early Cretaceous, the higher flora was dominated by gymnosperm groups, including cycads, conifers, ginkgophytes, gnetophytes and close relatives, as well as the extinct Bennettitales. Other groups of plants included pteridosperms or "seed ferns", a collective term that refers to disparate groups of extinct seed plants with fern-like foliage, including groups such as Corystospermaceae and Caytoniales. The exact origins of angiosperms are uncertain, although molecular evidence suggests that they are not closely related to any living group of gymnosperms.
The earliest widely accepted evidence of flowering plants consists of monosulcate (single-grooved) pollen grains from the late Valanginian (~134 million years ago) found in Israel and Italy, initially at low abundance. Molecular clock estimates conflict with fossil estimates, suggesting the diversification of crown-group angiosperms during the Upper Triassic or Jurassic, but such estimates are difficult to reconcile with the heavily sampled pollen record and the distinctive tricolpate to tricolporoidate (triple-grooved) pollen of eudicot angiosperms. Among the oldest records of angiosperm macrofossils are Montsechia from the Barremian-aged Las Hoyas beds of Spain and Archaefructus from the Barremian-Aptian boundary Yixian Formation in China. Tricolpate pollen distinctive of eudicots first appears in the Late Barremian, while the earliest remains of monocots are known from the Aptian. Flowering plants underwent a rapid radiation beginning during the middle Cretaceous, becoming the dominant group of land plants by the end of the period, coincident with the decline of previously dominant groups such as conifers. The oldest known fossils of grasses are from the Albian, with the family having diversified into modern groups by the end of the Cretaceous. The oldest known large angiosperm trees are from the Turonian (c. 90 Mya) of New Jersey, represented by a preserved trunk of substantial diameter and estimated height.
Ferns in the order Polypodiales, which make up 80% of living fern species, also began to diversify during the Cretaceous.
Terrestrial fauna
On land, mammals were generally small but a significant component of the fauna, with cimolodont multituberculates outnumbering dinosaurs in some sites. Neither true marsupials nor placentals existed until the very end of the period, but a variety of non-marsupial metatherians and non-placental eutherians had already begun to diversify greatly, ranging from carnivores (Deltatheroida) to aquatic foragers (Stagodontidae) and herbivores (Schowalteria, Zhelestidae). Various "archaic" groups like eutriconodonts were common in the Early Cretaceous, but by the Late Cretaceous northern mammalian faunas were dominated by multituberculates and therians, with dryolestoids dominating South America.
The apex predators were archosaurian reptiles, especially dinosaurs, which were at their most diverse stage. Birds, including the ancestors of modern groups, also diversified; they inhabited every continent and were even found in cold polar latitudes. Pterosaurs were common in the early and middle Cretaceous, but as the Cretaceous proceeded they declined for poorly understood reasons (once thought to be due to competition with early birds, though the timing of the avian adaptive radiation is now understood to be inconsistent with pterosaur decline). By the end of the period only three highly specialized families remained: Pteranodontidae, Nyctosauridae, and Azhdarchidae.
The Liaoning lagerstätte (Yixian Formation) in China is an important site, full of the preserved remains of numerous types of small dinosaurs, birds, and mammals, that provides a glimpse of life in the Early Cretaceous. The coelurosaur dinosaurs found there represent types of the group Maniraptora, which includes modern birds and their closest non-avian relatives such as dromaeosaurs, oviraptorosaurs, therizinosaurs, and troodontids, along with other avialans. Fossils of these dinosaurs from the Liaoning lagerstätte are notable for the presence of hair-like feathers.
Insects diversified during the Cretaceous, and the oldest known ants, termites, and some lepidopterans (the group including butterflies and moths) appeared. Aphids, grasshoppers, and gall wasps also appeared.
Rhynchocephalians
Rhynchocephalians (a group today represented only by the tuatara) disappeared from North America and Europe after the Early Cretaceous, and were absent from North Africa and northern South America by the early Late Cretaceous. The cause of the decline of Rhynchocephalia remains unclear, but it has often been suggested to be due to competition with advanced lizards and mammals. They appear to have remained diverse in high-latitude southern South America during the Late Cretaceous, where lizards remained rare, with rhynchocephalian remains outnumbering those of terrestrial lizards 200:1.
Choristodera
Choristoderes, a group of freshwater aquatic reptiles that first appeared during the preceding Jurassic, underwent a major evolutionary radiation in Asia during the Early Cretaceous, which represents the high point of choristoderan diversity; it included long-necked forms such as Hyphalosaurus and the first records of the gharial-like Neochoristodera, which appear to have evolved in the regional absence of aquatic neosuchian crocodyliforms. During the Late Cretaceous the neochoristodere Champsosaurus was widely distributed across western North America. The extreme climatic warmth of the Arctic also allowed choristoderes to colonise it during the Late Cretaceous.
Marine fauna
In the seas, rays, modern sharks, and teleosts became common. Marine reptiles included ichthyosaurs in the early and mid-Cretaceous (becoming extinct during the late Cretaceous Cenomanian-Turonian anoxic event), plesiosaurs throughout the entire period, and mosasaurs appearing in the Late Cretaceous. Sea turtles in the form of Cheloniidae and Panchelonioidea lived during the period and survived the extinction event; Panchelonioidea is today represented by a single species, the leatherback sea turtle. The Hesperornithiformes were flightless, marine diving birds that swam like grebes.
Baculites, an ammonite genus with a straight shell, flourished in the seas along with reef-building rudist clams. Predatory gastropods with drilling habits were widespread. Globotruncanid Foraminifera and echinoderms such as sea urchins and starfish (sea stars) thrived. Ostracods were abundant in Cretaceous marine settings; ostracod species characterised by high male sexual investment had the highest rates of extinction and turnover. Thylacocephala, a class of crustaceans, went extinct in the Late Cretaceous. The first radiation of the diatoms (generally siliceous shelled, rather than calcareous) in the oceans occurred during the Cretaceous; freshwater diatoms did not appear until the Miocene. The Cretaceous was also an important interval in the evolution of bioerosion, the production of borings and scrapings in rocks, hardgrounds and shells.
See also
Mesozoic Era
Cretaceous-Paleogene extinction
Chalk Group
Cretaceous Thermal Maximum
List of fossil sites (with link directory)
South Polar region of the Cretaceous
References
Citations
Bibliography
External links
UCMP Berkeley Cretaceous page
Cretaceous Microfossils: 180+ images of Foraminifera
Cretaceous (chronostratigraphy scale)
Geological periods |
5617 | https://en.wikipedia.org/wiki/Creutzfeldt%E2%80%93Jakob%20disease | Creutzfeldt–Jakob disease | Creutzfeldt–Jakob disease (CJD), also known as subacute spongiform encephalopathy or neurocognitive disorder due to prion disease, is a fatal degenerative brain disorder. Early symptoms include memory problems, behavioral changes, poor coordination, and visual disturbances. Later symptoms include dementia, involuntary movements, blindness, weakness, and coma. About 70% of people die within a year of diagnosis. The name Creutzfeldt–Jakob disease was introduced by Walther Spielmeyer in 1922, after the German neurologists Hans Gerhard Creutzfeldt and Alfons Maria Jakob.
CJD is caused by a type of abnormal protein known as a prion. Infectious prions are misfolded proteins that can cause normally folded proteins to also become misfolded. About 85% of cases of CJD occur for unknown reasons, while about 7.5% of cases are inherited in an autosomal dominant manner. Exposure to brain or spinal tissue from an infected person may also result in spread. There is no evidence that sporadic CJD can spread among people via normal contact or blood transfusions, although this is possible in variant Creutzfeldt–Jakob disease. Diagnosis involves ruling out other potential causes. An electroencephalogram, spinal tap, or magnetic resonance imaging may support the diagnosis.
There is no specific treatment for CJD. Opioids may be used to help with pain, while clonazepam or sodium valproate may help with involuntary movements. CJD affects about one person per million people per year. Onset is typically around 60 years of age. The condition was first described in 1920. It is classified as a type of transmissible spongiform encephalopathy. Inherited CJD accounts for about 10% of prion disease cases. Sporadic CJD is different from bovine spongiform encephalopathy (mad cow disease) and variant Creutzfeldt–Jakob disease (vCJD).
Signs and symptoms
The first symptom of CJD is usually rapidly progressive dementia, leading to memory loss, personality changes, and hallucinations. Myoclonus (jerky movements) typically occurs in 90% of cases, but may be absent at initial onset. Other frequently occurring features include anxiety, depression, paranoia, obsessive-compulsive symptoms, and psychosis. This is accompanied by physical problems such as speech impairment, balance and coordination dysfunction (ataxia), changes in gait, and rigid posture. In most people with CJD, these symptoms are accompanied by involuntary movements. The duration of the disease varies greatly, but sporadic (non-inherited) CJD can be fatal within months or even weeks. Most affected people die six months after initial symptoms appear, often of pneumonia due to impaired coughing reflexes. About 15% of people with CJD survive for two or more years.
The symptoms of CJD are caused by the progressive death of the brain's nerve cells, which is associated with the build-up of abnormal prion proteins in the brain. When brain tissue from a person with CJD is examined under a microscope, many tiny holes can be seen where the nerve cells have died, so that affected parts of the brain come to resemble a sponge.
Cause
CJD is a type of transmissible spongiform encephalopathy (TSE), which are caused by prions. Prions are misfolded proteins that occur in the neurons of the central nervous system (CNS). They are thought to affect signaling processes, damaging neurons and resulting in degeneration that causes the spongiform appearance in the affected brain.
The CJD prion is dangerous because it promotes refolding of native prion protein into the diseased state. The number of misfolded protein molecules increases exponentially, and the process leads to a large quantity of insoluble protein in affected cells. This mass of misfolded proteins disrupts neuronal cell function and causes cell death. Mutations in the gene for the prion protein can cause a misfolding of the predominantly alpha-helical regions into beta pleated sheets. This change in conformation renders the protein resistant to enzymatic digestion. Once the prion is transmitted, the defective proteins invade the brain and induce other prion protein molecules to misfold in a self-sustaining feedback loop. These neurodegenerative diseases are commonly called prion diseases.
People can also develop CJD because they carry a mutation of the gene that codes for the prion protein (PRNP). This occurs in only 5–10% of all CJD cases. In sporadic cases, the misfolding of the prion protein is a process that is hypothesized to occur as a result of the effects of aging on cellular machinery, explaining why the disease often appears later in life. An EU study determined that "87% of cases were sporadic, 8% genetic, 5% iatrogenic and less than 1% variant."
Transmission
The defective protein can be transmitted by contaminated harvested human brain products, corneal grafts, dural grafts, electrode implants, and human growth hormone.
It can be familial (fCJD); or it may appear without clear risk factors (sporadic form: sCJD). In the familial form, a mutation has occurred in the gene for PrP, PRNP, in that family. All types of CJD are transmissible irrespective of how they occur in the person.
It is thought that humans can contract the variant form of the disease by eating food from animals infected with bovine spongiform encephalopathy (BSE), the bovine form of TSE also known as mad cow disease. However, exposure to BSE has also been suggested to cause sCJD in some cases.
Cannibalism has also been implicated as a transmission mechanism for abnormal prions, causing the disease known as kuru, once found primarily among women and children of the Fore people in Papua New Guinea, who previously engaged in funerary cannibalism. While the men of the tribe ate the muscle tissue of the deceased, women and children consumed other parts, such as the brain, and were more likely than men to contract kuru from infected tissue.
Prions, the infectious agent of CJD, may not be inactivated by means of routine surgical instrument sterilization procedures. The World Health Organization and the US Centers for Disease Control and Prevention recommend that instrumentation used in such cases be immediately destroyed after use; short of destruction, it is recommended that heat and chemical decontamination be used in combination to process instruments that come in contact with high-infectivity tissues. Thermal depolymerization also destroys prions in infected organic and inorganic matter, since the process chemically attacks protein at the molecular level, although more effective and practical methods involve destruction by combinations of detergents and enzymes similar to biological washing powders.
Diagnosis
Testing for CJD has historically been problematic, due to the nonspecific nature of early symptoms and the difficulty of safely obtaining brain tissue for confirmation. The diagnosis may initially be suspected in a person with rapidly progressing dementia, particularly when they are also found with the characteristic medical signs and symptoms such as involuntary muscle jerking, difficulty with coordination/balance and walking, and visual disturbances. Further testing can support the diagnosis and may include:
Electroencephalography – may have characteristic generalized periodic sharp wave pattern. Periodic sharp wave complexes develop in half of the people with sporadic CJD, particularly in the later stages.
Cerebrospinal fluid (CSF) analysis for elevated levels of 14-3-3 protein can be supportive of a diagnosis of sCJD. However, a positive result should not be regarded as sufficient for the diagnosis. The Real-Time Quaking-Induced Conversion (RT-QuIC) assay has a diagnostic sensitivity of more than 80% and a specificity approaching 100% when used to detect PrPSc in CSF samples from people with CJD. It is therefore suggested as a high-value diagnostic method for the disease.
MRI of the brain – often shows high signal intensity in the caudate nucleus and putamen bilaterally on T2-weighted images.
In recent years, studies have shown that the tumour marker neuron-specific enolase (NSE) is often elevated in CJD cases; however, its diagnostic utility is seen primarily when combined with a test for the 14-3-3 protein. Screening tests to identify infected asymptomatic individuals, such as blood donors, are not yet available, though methods have been proposed and evaluated.
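The gap between a supportive test and a screening test can be illustrated with Bayes' theorem (an illustrative calculation, not from the cited literature: the 99.9% specificity is a hypothetical stand-in for "approaching 100%", 80% is the quoted sensitivity, and the disease frequency of roughly one per million discussed under Epidemiology is used as a rough stand-in for prevalence in an unselected population):

\[
\mathrm{PPV} = \frac{\mathrm{sens}\times\mathrm{prev}}{\mathrm{sens}\times\mathrm{prev} + (1-\mathrm{spec})(1-\mathrm{prev})}
\approx \frac{0.8\times 10^{-6}}{0.8\times 10^{-6} + 0.001\times(1-10^{-6})} \approx 0.0008
\]

In an unselected population, almost every positive result would be a false positive, which is why such assays are used to support the diagnosis in people who already show suggestive symptoms rather than to screen asymptomatic individuals.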
Imaging
Imaging of the brain may be performed during medical evaluation, both to rule out other causes and to obtain supportive evidence for diagnosis. Imaging findings are variable in their appearance, and also variable in sensitivity and specificity. While imaging plays a lesser role in diagnosis of CJD, characteristic findings on brain MRI in some cases may precede onset of clinical manifestations.
Brain MRI is the most useful imaging modality for changes related to CJD. Of the MRI sequences, diffusion-weighted imaging (DWI) sequences are the most sensitive. Characteristic findings are as follows:
Focal or diffuse diffusion restriction involving the cerebral cortex and/or basal ganglia. In about 24% of cases DWI shows only cortical hyperintensity; in 68%, cortical and subcortical abnormalities; and in 5%, only subcortical anomalies. The most iconic and striking cortical abnormality has been called "cortical ribboning" or the "cortical ribbon sign" because of hyperintensities resembling ribbons appearing in the cortex on MRI. Involvement of the thalamus can be found in sCJD and is even more pronounced and consistent in vCJD.
Varying degree of symmetric T2 hyperintense signal changes in the basal ganglia (i.e., caudate and putamen), and to a lesser extent globus pallidus and occipital cortex.
Cerebellar atrophy
Brain FDG PET-CT tends to be markedly abnormal, and is increasingly used in the investigation of dementias.
Patients with CJD will normally have hypometabolism on FDG PET.
Histopathology
Testing of tissue remains the most definitive way of confirming the diagnosis of CJD, although it must be recognized that even biopsy is not always conclusive.
In one-third of people with sporadic CJD, deposits of "prion protein (scrapie)", PrPSc, can be found in the skeletal muscle and/or the spleen. Diagnosis of vCJD can be supported by biopsy of the tonsils, which harbor significant amounts of PrPSc; however, biopsy of brain tissue is the definitive diagnostic test for all other forms of prion disease. Due to its invasiveness, biopsy is not done if clinical suspicion is sufficiently high or low. A negative biopsy does not rule out CJD, since the disease may predominate in a part of the brain that was not sampled.
The classic histologic appearance is spongiform change in the gray matter: the presence of many round vacuoles from one to 50 micrometers in the neuropil, in all six cortical layers in the cerebral cortex or with diffuse involvement of the cerebellar molecular layer. These vacuoles appear glassy or eosinophilic and may coalesce. Neuronal loss and gliosis are also seen. Plaques of amyloid-like material can be seen in the neocortex in some cases of CJD.
However, extra-neuronal vacuolization can also be seen in other disease states. Diffuse cortical vacuolization occurs in Alzheimer's disease, and superficial cortical vacuolization occurs in ischemia and frontotemporal dementia. These vacuoles appear clear and punched-out. Larger vacuoles encircling neurons, vessels, and glia are a possible processing artifact.
Classification
Types of CJD include:
Sporadic (sCJD), caused by the spontaneous misfolding of prion-protein in an individual. This accounts for 85% of cases of CJD.
Familial (fCJD), caused by an inherited mutation in the prion-protein gene. This accounts for the majority of the other 15% of cases of CJD.
Acquired CJD, caused by contamination with tissue from an infected person, usually as the result of a medical procedure (iatrogenic CJD). Medical procedures that are associated with the spread of this form of CJD include blood transfusion from the infected person, use of human-derived pituitary growth hormones, gonadotropin hormone therapy, and corneal and meningeal transplants. Variant Creutzfeldt–Jakob disease (vCJD) is a type of acquired CJD potentially acquired from bovine spongiform encephalopathy or caused by consuming food contaminated with prions.
Treatment
As of 2023, there is no cure or effective treatment for CJD. Some symptoms, such as twitching, can be managed, but otherwise treatment is palliative care. Psychiatric symptoms like anxiety and depression can be treated with sedatives and antidepressants. Myoclonic jerks can be managed with clonazepam or sodium valproate. Opiates can help with pain. Seizures are very uncommon but can nevertheless be treated with antiepileptic drugs.
Prognosis
The condition is universally fatal. As of 1981, no one was known to have lived longer than 2.5 years after the onset of CJD symptoms. In 2011, Jonathan Simms, a Northern Irish man who lived 10 years after his diagnosis, was reported to be one of the world's longest survivors of variant Creutzfeldt–Jakob disease (vCJD).
Epidemiology
CDC monitors the occurrence of CJD in the United States through periodic reviews of national mortality data. According to the CDC:
CJD occurs worldwide at a rate of about 1 case per million population per year.
On the basis of mortality surveillance from 1979 to 1994, the annual incidence of CJD remained stable at approximately 1 case per million people in the United States.
In the United States, CJD deaths among people younger than 30 years of age are extremely rare (fewer than five deaths per billion per year).
The disease is found most frequently in people 55–65 years of age, but cases can occur in people older than 90 years and younger than 55 years of age.
In more than 85% of cases, the duration of CJD is less than one year (median: four months) after the onset of symptoms.
Further information from the CDC:
Risk of developing CJD increases with age.
CJD incidence was 3.5 cases per million among those over 50 years of age between 1979 and 2017.
Approximately 85% of CJD cases are sporadic and 10-15% of CJD cases are due to inherited mutations of the prion protein gene.
CJD deaths and age-adjusted death rate in the United States indicate an increasing trend in the number of deaths between 1979 and 2017.
Although the reasons are not fully understood, available information suggests that CJD rates in African American and other nonwhite groups are lower than in whites. While the mean age of onset is approximately 67 years, cases of sCJD have been reported in people as young as 17 years and older than 80 years. Mental capabilities deteriorate rapidly, and the average time from onset of symptoms to death is 7 to 9 months.
According to a 2020 systematic review on the international epidemiology of CJD:
Surveillance studies from 2005 and later show the estimated global incidence is 1–2 cases per million population per year.
Sporadic CJD (sCJD) incidence increased from the years 1990–2018 in the UK.
Probable or definite sCJD deaths also increased from the years 1996–2018 in twelve additional countries.
CJD incidence is greatest in those over the age of 55 years old, with an average age of 67 years old.
The intensity of CJD surveillance increases the number of reported cases, often in countries where CJD epidemics have occurred in the past and where surveillance resources are greatest. An increase in surveillance and reporting of CJD is most likely in response to BSE and vCJD. Possible factors contributing to an increase of CJD incidence are an aging population, population increase, clinician awareness, and more accurate diagnostic methods. Since CJD symptoms are similar to other neurological conditions, it is also possible that CJD is mistaken for stroke, acute nephropathy, general dementia, and hyperparathyroidism.
History
The disease was first described by the German neurologist Hans Gerhard Creutzfeldt in 1920 and shortly afterward by Alfons Maria Jakob, giving it the name Creutzfeldt–Jakob. Some of the clinical findings described in their first papers do not match current criteria for Creutzfeldt–Jakob disease, and it has been speculated that at least two of the people in the initial studies had a different ailment. An early description of familial CJD stems from the German psychiatrist and neurologist Friedrich Meggendorfer (1880–1953). A study published in 1997 counted more than 100 cases worldwide of transmissible CJD, and new cases continued to appear at the time.
The first report of suspected iatrogenic CJD was published in 1974. Animal experiments showed that the corneas of infected animals could transmit CJD, and that the causative agent spreads along visual pathways. A second case of CJD associated with a corneal transplant was reported without details. In 1977, CJD transmission caused by silver electrodes previously used in the brain of a person with CJD was first reported. Transmission occurred despite decontamination of the electrodes with ethanol and formaldehyde. Retrospective studies identified four other cases likely of similar cause. The rate of transmission from a single contaminated instrument is unknown, although it is not 100%. In some cases, the exposure occurred weeks after the instruments were used on a person with CJD. In the 1980s, Lyodura, a dura mater transplant product, was shown to transmit CJD from donor to recipient. This led to the product being banned in Canada, but it continued to be used in other countries such as Japan until 1993.
A review article published in 1979 indicated that 25 dura mater cases had occurred by that date in Australia, Canada, Germany, Italy, Japan, New Zealand, Spain, the United Kingdom, and the United States.
By 1985, a series of case reports in the United States showed that when injected, cadaver-extracted pituitary human growth hormone could transmit CJD to humans.
In 1992, it was recognized that human gonadotropin administered by injection could also transmit CJD from person to person.
Stanley B. Prusiner of the University of California, San Francisco (UCSF) was awarded the Nobel Prize in Physiology or Medicine in 1997 "for his discovery of Prions—a new biological principle of infection".
Yale University neuropathologist Laura Manuelidis has challenged the prion protein (PrP) explanation for the disease. In January 2007, she and her colleagues reported that they had found a virus-like particle in naturally and experimentally infected animals. "The high infectivity of comparable, isolated virus-like particles that show no intrinsic PrP by antibody labeling, combined with their loss of infectivity when nucleic acid–protein complexes are disrupted, make it likely that these 25-nm particles are the causal TSE virions".
Australia
Australia has documented 10 cases of healthcare-acquired CJD (iatrogenic CJD, or ICJD). Five of the deaths occurred after the patients, who were being treated either for infertility or for short stature, received contaminated pituitary extract hormone; no new cases of this kind have been noted since 1991. The other five deaths occurred due to dura mater grafting procedures performed during brain surgery, in which the covering of the brain is repaired. There have been no other ICJD deaths documented in Australia due to transmission during healthcare procedures.
New Zealand
A case was reported in 1989 in a 25-year-old man from New Zealand who had also received a dura mater transplant. Five New Zealanders were confirmed to have died of the sporadic form of Creutzfeldt–Jakob disease (CJD) in 2012.
United States
In 1988, there was a confirmed death from CJD of a person from Manchester, New Hampshire; Massachusetts General Hospital believed the person had acquired the disease from a surgical instrument at a podiatrist's office. In 2007, Michael Homer, former Vice President of Netscape, was diagnosed with the disease after experiencing persistent memory problems. In September 2013, another person in Manchester was posthumously determined to have died of the disease. The person had undergone brain surgery at Catholic Medical Center three months before his death, and a surgical probe used in the procedure was subsequently reused in other operations. Public health officials identified thirteen people at three hospitals who may have been exposed to the disease through the contaminated probe, but said the risk of anyone contracting CJD was "extremely low". In January 2015, former speaker of the Utah House of Representatives Rebecca D. Lockhart died of the disease within a few weeks of diagnosis. John Carroll, former editor of The Baltimore Sun and Los Angeles Times, died of CJD in Kentucky in June 2015, after having been diagnosed in January. American actress Barbara Tarbuck (General Hospital, American Horror Story) died of the disease on December 26, 2016. José Baselga, a clinical oncologist who had headed the AstraZeneca oncology division, died of CJD in Cerdanya on March 21, 2021.
Research
Diagnosis
In 2010, a team from New York described detection of PrPSc in sheep's blood, even when initially present at only one part in one hundred billion (10⁻¹¹) in sheep's brain tissue. The method combines amplification with a novel technology called surround optical fiber immunoassay (SOFIA) and some specific antibodies against PrPSc. The technique allowed improved detection and testing time for PrPSc.
In 2014, a human study showed a nasal brushing method that can accurately detect PrP in the olfactory epithelial cells of people with CJD.
Treatment
Pentosan polysulphate (PPS) was thought to slow the progression of the disease, and may have contributed to the longer-than-expected survival of the seven people studied. The CJD Therapy Advisory Group to the UK Health Departments advises that data are not sufficient to support claims that pentosan polysulphate is an effective treatment and suggests that further research in animal models is appropriate. A 2007 review of the treatment of 26 people with PPS found no proof of efficacy because of the lack of accepted objective criteria, though it was unclear to the authors whether this was caused by PPS itself. In 2012 it was argued that the lack of significant benefit was likely because the drug was administered very late in the disease in many patients.
Use of RNA interference to slow the progression of scrapie has been studied in mice. The RNA blocks production of the protein that the CJD process transforms into prions.
Both amphotericin B and doxorubicin have been investigated as treatments for CJD, but as yet there is no strong evidence that either drug is effective in stopping the disease. Further studies have been undertaken with other drugs, but none have proven effective. However, anticonvulsants and anxiolytic agents, such as valproate or a benzodiazepine, may be administered to relieve associated symptoms.
Quinacrine, a medicine originally created for malaria, has been evaluated as a treatment for CJD. Its efficacy was assessed in a rigorous clinical trial in the UK; the results, published in Lancet Neurology, concluded that quinacrine had no measurable effect on the clinical course of CJD.
Astemizole, a medication approved for human use, has been found to have anti-prion activity and may lead to a treatment for Creutzfeldt–Jakob disease.
A monoclonal antibody (code name PRN100) targeting the prion protein (PrP) was given to six people with Creutzfeldt–Jakob disease in an early-stage clinical trial conducted from 2018 to 2022. The treatment appeared to be well tolerated and was able to access the brain, where it might have helped to clear PrPC. While the treated patients still showed progressive neurological decline, and none of them survived longer than expected from the normal course of the disease, the scientists at University College London who conducted the study see these early-stage results as encouraging and suggest conducting a larger study, ideally intervening at the earliest possible stage.
See also
Chronic traumatic encephalopathy
Chronic wasting disease
Kuru
References
External links
Transmissible spongiform encephalopathies
Neurodegenerative disorders
Dementia
Rare infectious diseases
Wikipedia medicine articles ready to translate
Wikipedia neurology articles ready to translate
Rare diseases
1920 in biology |
5622 | https://en.wikipedia.org/wiki/C.%20Northcote%20Parkinson | C. Northcote Parkinson | Cyril Northcote Parkinson (30 July 1909 – 9 March 1993) was a British naval historian and author of some 60 books, the most famous of which was his best-seller Parkinson's Law (1957), in which Parkinson advanced the eponymous law stating that "work expands so as to fill the time available for its completion", an insight which led him to be regarded as an important scholar in public administration and management.
Early life and education
The youngest son of William Edward Parkinson (1871–1927), an art master at North East County School and from 1913 principal of York School of Arts and Crafts, and his wife, Rose Emily Mary Curnow (born 1877), Parkinson attended St. Peter's School, York, where in 1929 he won an Exhibition to study history at Emmanuel College, Cambridge. He received a BA degree in 1932. As an undergraduate, Parkinson developed an interest in naval history, which he pursued when the Pellew family gave him access to family papers at the recently established National Maritime Museum. The papers formed the basis of his first book, Edward Pellew, Viscount Exmouth, Admiral of the Red. In 1934, then a graduate student at King's College London, he wrote his PhD thesis on Trade and War in the Eastern Seas, 1803–1810, which was awarded the Julian Corbett Prize in Naval History for 1935.
Academic and military career
While a graduate student in 1934, Parkinson was commissioned into the Territorial Army in the 22nd London Regiment (The Queen's), was promoted to lieutenant the same year, and commanded an infantry company at the jubilee of King George V in 1935. In the same year, Emmanuel College, Cambridge elected him a research fellow. While at Cambridge, he commanded an infantry unit of the Cambridge University Officers' Training Corps. He was promoted to captain in 1937.
He became senior history master at Blundell's School in Tiverton, Devon in 1938 (and a captain in the school's OTC), then instructor at the Royal Naval College, Dartmouth in 1939. In 1940, he joined the Queen's Royal Regiment as a captain and undertook a range of staff and military teaching positions in Britain. In 1943 he married Ethelwyn Edith Graves (born 1915), a nurse tutor at Middlesex Hospital, with whom he had two children.
Demobilized as a major in 1945, he was a lecturer in history at the University of Liverpool from 1946 to 1949. In 1950, he was appointed Raffles Professor of History at the new University of Malaya in Singapore. While there, he initiated an important series of historical monographs on the history of Malaya, publishing the first in 1960. A movement developed in the mid-1950s to establish two campuses, one in Kuala Lumpur and one in Singapore. Parkinson attempted to persuade the authorities to avoid dividing the university by maintaining it in Johor Bahru to serve both Singapore and Malaya. His efforts were unsuccessful and the two campuses were established in 1959. The Singapore campus later became the University of Singapore.
Parkinson divorced in 1952 and he married the writer and journalist Ann Fry (1921–1983), with whom he had two sons and a daughter. In 1958, while still in Singapore, he published his most famous work, Parkinson's Law, which expanded upon a humorous article that he had published in the Economist magazine in November 1955, satirising government bureaucracies. The 120-page book of short studies, published in the United States and then in Britain, was illustrated by Osbert Lancaster and became an instant best seller. It explained the inevitability of bureaucratic expansion, arguing that 'work expands to fill the time available for its completion'. Typical of his satire and cynical humour, it included a discourse on Parkinson's Law of Triviality (debates about expenses for a nuclear plant, a bicycle shed, and refreshments), a note on why driving on the left side of the road (see road transport) is natural, and suggested that the Royal Navy would eventually have more admirals than ships. After serving as visiting professor at Harvard University in 1958, the University of Illinois and the University of California, Berkeley in 1959–60, he resigned his post in Singapore to become an independent writer.
To avoid high taxation in Britain, he moved to the Channel Islands and settled at St Martin's, Guernsey, where he purchased Les Caches Hall. In Guernsey he was a very active member of the community and was committed to the feudal heritage of the island, financing a historical re-enactment of the Chevauche de Saint Michel (Cavalcade) by the Court of Seigneurs and writing a newspaper article about it. Having acquired the manorial rights of the Fief d'Anneville, he sat as Seigneur d'Anneville in the Royal Court of Chief Pleas, the island's oldest court and its first historical self-governing body, attendance at which is considered very important in Guernsey; as a feudal member he was, in effect, the equivalent of a temporal lord on the island. Anneville is in some ways considered the oldest fief of the island, its possessor regarded as "the first in rank after the clergy", and Parkinson took a strong interest in the fief and its historical possessions. In 1968 he purchased and restored Anneville Manor, the historic manor house of the Seigneurie (or fief) d'Anneville, and in 1971 he restored the Chapel of Thomas d'Anneville belonging to the same fief. His writings from this period included a series of historical novels featuring a fictional naval officer from Guernsey, Richard Delancey, set during the Napoleonic era. In the novels Delancey is himself Seigneur of the Fief d'Anneville; Parkinson, who liked to describe himself by the same title and eventually moved to Anneville Manor (le manoir d'Anneville), appears in some ways to have made Delancey a mirror image of himself.
In 1969 he was invited to deliver the MacMillan Memorial Lecture to the Institution of Engineers and Shipbuilders in Scotland. He chose the subject "The Status of the Engineer".
Parkinson and his 'law'
Parkinson's law, which provides insight into a primary barrier to efficient time management, states that, "work expands so as to fill the time available for its completion". This articulates a situation and an unexplained force that many have come to take for granted and accept. "In exactly the same way nobody bothered and nobody cared, before Newton's day, why an apple should drop to the ground when it might so easily fly up after leaving the tree," wrote Straits Times editor-in-chief, Allington Kennard who continued, "There is less gravity in Professor Parkinson's Law, but hardly less truth."
Parkinson first published his law in a humorous satirical article in The Economist on 19 November 1955, meant as a critique on the efficiency of public administration and civil service bureaucracy, and the continually rising headcount, and related cost, attached to these. That article noted that, "Politicians and taxpayers have assumed (with occasional phases of doubt) that a rising total in the number of civil servants must reflect a growing volume of work to be done." The law examined two sub-laws, The Law of Multiplication of Subordinates, and The Law of Multiplication of Work, and provided 'scientific proof' of the validity of these, including mathematical formulae.
Two years later, the law was revisited when Parkinson's new books, Parkinson's Law And Other Studies in Administration and Parkinson's Law: Or The Pursuit of Progress were published in 1957.
In Singapore, where he was teaching at the time, the book's publication began a series of talks in which he addressed diverse audiences in person, in print, and over the airwaves on 'Parkinson's Law'. For example, on 16 October 1957, at 10 a.m., he spoke on the subject at the International Women's Club programme talk held at the Y.W.C.A. at Raffles Quay. The advent of his new book, as well as an interview during his debut talk, was covered shortly afterwards in an editorial in The Straits Times entitled "A professor's cocktail party secret: They arrive half an hour late and rotate." Time, which also wrote about the book, noted that its theme was "a delightfully unprofessional diagnosis of the widespread 20th century malady — galloping orgmanship." Orgmanship, according to Parkinson, was "the tendency of all administrative departments to increase the number of subordinate staff, irrespective of the amount of work (if any) to be done", as noted by The Straits Times. Parkinson, it was reported, also wanted to trace the illegibility of signatures, attempting to fix the point in a successful executive career at which the handwriting becomes meaningless, even to the executive himself.
Straits Times editor-in-chief Allington Kennard's editorial, "Twice the staff for half the work", in mid-April 1958, touched on further aspects or sub-laws, like Parkinson's Law of Triviality, and also other interesting, if dangerous areas like, "the problem of the retirement age, how not to pay Singapore income tax when a millionaire, the point of vanishing interest in high finance, how to get rid of the company chairman," etc. The author supported Parkinson's Law of Triviality — which states that, "The time spent on any item of an agenda is in inverse proportion to the sum involved," with a local example where it took the Singapore City Council "six hours to pick a new man for the gasworks and two and a half minutes to approve a $100 million budget." It is possible that the book, humorous though it is, may have touched a raw nerve among the administration at that time. As J. D. Scott, in his review of Parkinson's book two weeks later, notes, "Of course, Parkinson's Law, like all satire, is serious — it wouldn't be so comic if it weren't — and because it is serious there will be some annoyance and even dismay under the smiles."
His celebrity did not remain local. Parkinson travelled to England, arriving aboard the P&O Canton in early June 1958, as reported by Reuters, and made the front page of The Straits Times on 9 June. Reporting from London on Saturday 14 June 1958, Hall Romney wrote, "Prof. C. N. Parkinson of the University of Malaya, whose book, Parkinson's Law has sold more than 80,000 copies, has had a good deal of publicity since he arrived in England in the Canton." Romney noted that "a television interview was arranged, a profile of him appeared in a highbrow Sunday newspaper, columnists gave him almost as much space as they gave to Leslie Charteris, and he was honoured by the Institute of Directors, whose reception was attended by many of the most notable men in the commercial life of London." Satire was then answered with some candour when another Reuters release, republished in The Straits Times under the title "Parkinson's Law at work in the UK", quoted: "A PARLIAMENTARY committee, whose Job is to see that British Government departments do not waste the taxpayer's money, said yesterday it was alarmed at the rate of staff increases in certain sections of the War Office. Admiralty and Air Ministry..." In March 1959, further publicity occurred when the Royal Navy in Singapore took umbrage at a remark Parkinson had made shortly before in Manchester, during a talk about his new book on the wastage of public money. Parkinson is reported to have said, "Britain spent about $500 million building a naval base there [Singapore] and the only fleet which has used it is the Japanese." A navy spokesman, attempting to counter that statement, said that the Royal Navy's Singapore base had only been completed in 1939 and, while it was confirmed that the Japanese had indeed used it during the Second World War, it had been used extensively by the Royal Navy's Far East fleet after the war. Emeritus Professor of Japanese Studies at the University of Oxford, Richard Storry, writing in the Oxford Mail of 16 May 1962, noted, "The fall of Singapore is still viewed with anger and shame in Britain."
On Thursday 10 September 1959, at 10 p.m., Radio Singapore listeners got to experience his book, Parkinson's Law, set to music by Nesta Pain. The serialised programme continued until the end of February 1960. Parkinson and Parkinson's law continued to find their way into Singapore newspapers through the decades.
University of Malaya
Singapore was introduced to him almost immediately upon his arrival, through newspaper coverage and a number of public appearances. Parkinson started teaching at the University of Malaya in Singapore at the beginning of April 1950.
Public lectures
The first lecture of the Raffles Professor of History was a public lecture given at the Oei Tiong Ham Hall, on 19 May. Parkinson, who was speaking on "The Task of the Historian," began by noting the new Raffles history chair was aptly named because it was Sir Stamford Raffles who had tried to found the university in 1823 and because Raffles himself was a historian. There was a large audience, including Professor Alexander Oppenheim, the university's Dean of the Faculty of Arts.
The text of his lecture was then reproduced and published over two issues of The Straits Times a few days later.
On 17 April 1953, he addressed the public on "The Historical Aspect of the Coronation," at the Singapore YMCA Hall.
Sponsored by the Malayan Historical Society, Parkinson gave a talk on the "Modern history of Taiping" at the residence of the District Officer, Larut and Matang on 12 August 1953.
Sponsored by the Singapore branch of the Malayan Historical Society, on 5 February 1954 Parkinson gave a public lecture on "Singapore in the sixties" [1860s] at St. Andrew's Cathedral War Memorial Hall.
Sponsored by the Seremban branch of the Historical Society of Malaya, Parkinson spoke on tin mining at the King George V School, Seremban. He said that in the past Chinese labourers had been imported from China at $32 a head to work the tin fields of Malaya. He said that mining developed steadily after British protection had been established, and that tin from Negri Sembilan in the 1870s came from Sungei Ujong and Rembau and was worked with capital from Malacca. He noted that Chinese miners working side by side with Europeans did better with their primitive methods and made great profits when they took over mines that Europeans had abandoned.
Arranged by the Indian University Graduates Association of Singapore, Parkinson gave a talk on "Indian Political Thought," at the USIS theatrette on 16 February 1955.
On 10 March 1955, he spoke on "What I think about Colonialism" at the British Council Hall, Stamford Road, Singapore, at 6.30 p.m. In his lecture, he argued that nationalism, which was generally believed to be good, and colonialism, which was seen as the reverse, were not necessarily opposite ideas but the same thing seen from different angles. He thought the gifts from Britain that Malaya and Singapore should value most and retain when they became self-governing included debate, literature (not comics), armed forces' tradition (not a police state), the arts, tolerance and humour (not puritanism), and public spirit.
Public exhibitions
On 18 August 1950, Parkinson opened a week-long exhibition on the "History of English Handwriting," at the British Council centre, Stamford Road, Singapore.
On 21 March 1952, he opened an exhibition of photographs from The Times of London which had been shown widely in different parts of the world. The exhibition comprised a selection of photographs spanning 1921 to 1951. 140 photographs were on display for a month at the British Council Hall, Singapore, showing scenes ranging from the German surrender to the opening of the Festival of Britain by the late King.
He opened an exhibition of photographs taken by students of the University of Malaya during their tour of India, at the University Arts Theatre in Cluny Road, Singapore, 10 October 1953.
Victor Purcell
Towards the end of August, Professor of Far Eastern History at Cambridge University, Dr. Victor Purcell, who was also a former Acting Secretary of Chinese Affairs in Singapore, addressed the Kuala Lumpur Rotary Club. The Straits Times, quoting Purcell, noted, "Professor C. N. Parkinson had been appointed to the Chair of History at the University of Malaya and 'we can confidently anticipate that under his direction academic research into Malaya's history will assume a creative aspect which it has not possessed before.'"
Johore Transfer Committee
In October, Parkinson was appointed, by the Senate of the University of Malaya, to head a special committee of experts to consult on technical details regarding the transfer of the University to Johore. Along with him were Professor R. E. Holttum (Botany), and Acting Professors C. G. Webb (Physics) and D. W. Fryer (Geography).
Library and Museum
In November, Parkinson was appointed a member of the Committee for the management of Raffles Library and Museum, replacing Professor G. G. Hough who had resigned.
In March 1952, Parkinson proposed a central public library for Singapore as a memorial to King George VI, commemorating that monarch's reign. He is reported to have said, "Perhaps the day has gone by for public monuments except in a useful form. And if that be so, might not some enterprise of local importance be graced with the late King's name? One plan he could certainly have warmly approved would be that of building a Central Public Library." Parkinson noted that the Raffles Library was growing in usefulness and would, in a short time, outgrow the building that then housed it. He said that, given the educational work that was producing a large literate population demanding books in English, Malay and Chinese, what was surely needed was a genuinely public library, air-conditioned to preserve the books and of a design to make those books readily accessible. He suggested that the building, equipment and maintenance of the public library ought to be the responsibility of the Municipality rather than the Government.
T. P. F. McNeice, the then President of the Singapore City Council, as well as leading educationists of the time, thought the suggestion "an excellent, first-class suggestion to meet a definite and urgent need." McNeice also agreed that the project ought to be the responsibility of the City Council. Also in favour of the idea were the Director of Education, A. W. Frisby, who thought that there ought to be branches of the library fed by the central library; Raffles Institution Principal P. F. Howitt; Canon R. K. S. Adams (Principal of St. Andrews School); and Homer Cheng, the President of the Chinese Y.M.C.A. The Principal of the Anglo-Chinese School, H. H. Peterson, suggested the authorities also consider a mobile school library.
While Parkinson had originally suggested that this be a Municipal and not a Government undertaking, something changed. A public meeting convened at the British Council Hall on 15 May by the Friends of Singapore, of which Parkinson was President, decided that Singapore's memorial to King George VI would take the form of a public library, possibly with mobile units and sub-libraries in the out-of-town districts. Parkinson, addressing the assembly, noted that the Raffles Library was not a free library, did not have vernacular sections, and occupied a building that could not be air-conditioned. McNeice, the Municipal President, then proposed that a resolution be sent to the Government stating that the meeting considered the most appropriate memorial to the late King to be a library (or libraries), and urging the Government to set up a committee with sufficient non-Government representation to consider the matter.
The Government got involved, and a Government spokesperson spoke to the Straits Times about this on 16 May, saying that the Singapore Government welcomed proposals from the public on the form in which a memorial to King George ought to take, whether a public library, as suggested by Parkinson, or some other form.
In the middle of 1952, the Singapore Government began setting up a committee to consider the suggestions made on the form Singapore's memorial to King George VI ought to take. G. G. Thomson, the Government's Public Relations Secretary informed the Straits Times that the committee would have official and non-Government representation and added that, apart from Parkinson's suggestion of a free public library, a polytechnic had also been suggested.
W. L. Blythe, the Colonial Secretary, making it clear where his vote lay, pointed out that Singapore at that time already had a library, the Raffles Library. News coverage shows that yet another committee had been formed, this time to consider what would be necessary to establish an institution along the lines of the London Polytechnic. Blythe stated that the arguments he had heard in favour of a polytechnic were very strong.
Director of Raffles Library and Museum, W. M. F. Tweedie was in favour of the King George VI free public library but up to the end of November, nothing had been heard of any developments towards that end. Tweedie suggested the ground beside the British Council as being suitable for such a library, and, if the public library was built, he would suggest for all the books at the Raffles Library to be moved to the new site, so that the space thus vacated could be used for a public art gallery.
Shortly afterwards the Government, which under the suggestion made by Parkinson and accepted by City Council President T. P. F. McNeice was not supposed to have been involved in the first place, approved the proposal to set up a polytechnic as a memorial to King George VI.
Singapore thus continued with its subscription library, without the free public library envisioned by Parkinson. His call did not go unheeded, however. The following year, in August 1953, the Lee Foundation pledged a dollar-for-dollar match up to $375,000 towards the establishment of a national library, provided that it was a free public library, open to men and women of every race, class, creed, and colour.
It was not until November 1960, however, that Parkinson's vision was realised, when the new library, free and open to all, was completed and opened to the public.
Film Censorship Consultative Committee
That same month he was also appointed, by the Singapore Government, Chairman of a committee set up to study film censorship in the Colony and suggest changes, if necessary.
The committee's terms of reference were to enquire into the existing procedure and legislation relating to cinematograph film censorship and to make recommendations, including legislative changes, with a view to improving the system. It was also asked to consider whether the Official Film Censor should continue to be the controller of the British film quota, and to examine the memorandum submitted to the Governor by the film trade earlier that year.
Investigating, archiving and writing Malaya's past
At the beginning of December 1950, Parkinson made an appeal at the Singapore Rotary Club for old log books, diaries, newspaper files, ledgers and maps accumulated over the years. He asked that these be passed to the Raffles Library or the University of Malaya library, instead of being thrown away, as they might aid research and help those studying the history of the country to set down an account of what had happened in Malaya since 1867. "The time will come when school-children will be taught the history of their own land rather than of Henry VIII or the capture of Quebec," he said. Parkinson told his audience that there was a large volume of documentary evidence about Malaya written in Portuguese and Dutch. He said that the arrival of the Pluto in Singapore, one of the first vessels to pass through the Suez Canal when it opened in 1869, might be described as the moment when British Malaya was born. "I would urge you not to scrap old correspondence just because it clutters up the office. Send it to a library where it may some day be of great value," he said.
In September 1951 the magazine British Malaya published a letter from Parkinson calling for the formation of one central Archives Office where all the historical records of Malaya and Singapore could be properly preserved, pointing out that it would be of inestimable value to administrators, historians, economists, social science investigators and students. In his letter Parkinson, who was still abroad attending the Anglo-American Conference of Historians in London, said that the formation of an Archives Office was already under discussion and was urgent, given a climate in which documents were liable to damage by insects and mildew. He said that many private documents relating to Malaya were kept in the U.K., where they were not appreciated because names like Maxwell, Braddell and Swettenham might mean nothing there. "The establishment of a Malayan Archives Office would do much to encourage the transfer of these documents," he wrote.
On 22 May 1953, Parkinson convened a meeting at the British Council, Stamford Road, Singapore, to form the Singapore branch of the Malayan Historical Society.
Speaking at the inaugural meeting of the society's Singapore branch, Parkinson, addressing the more than 100 people attending, said the aims of the branch would be to assist in the recording of history, folklore, tradition and customs of Malaya and its people and to encourage the preservation of objects of historical and cultural interest. Of Malayan history, he said, it "has mostly still to be written. Nor can it even be taught in the schools until that writing has been done."
Parkinson had been urging the Singapore and Federation Governments to set up a national archives since 1950. In June 1953 he urged the speedy establishment of a national archives where, "in air-conditioned rooms, on steel shelves, with proper skilled supervision and proper precaution against fire and theft, the records of Malayan history might be preserved indefinitely and at small expense." He noted that cockroaches had nibbled away at many vital documents and records, shrouding many years of Malaya's past in mystery, aided by moths and silverfish and abetted by negligent officials.
A start had by then already been made: an air-conditioned room at the Federal Museum had been set aside for storing important historical documents and preserving them from cockroaches and decay, the work of Peter Williams-Hunt, the Federation Director of Museums and Adviser on Aborigine Affairs, who had died that month. Parkinson noted, however, that the problems of supervising archives and collecting old documents had still to be solved.
In January 1955 Parkinson formed the University of Malaya's Archaeological Society and became its first President. At its founding the society had a membership of 53 and was reported to be the largest of its kind in Southeast Asia at the time. One contemporary press report announced: "Drive to discover the secrets of S.E. Asia. Hundreds of amateurs will delve into mysteries of the past."
In April 1956 it was reported that 'For the first time, a long-needed Standard History of Malaya is to be published for students.' According to the news report, a large-scale project was in progress: a ten-volume series, the result of ten years of research by University of Malaya staff, detailing events from the Portuguese occupation of 1511 to the present day. The first volume, written by Parkinson, covered the years 1867 to 1877 and was to be published within three months. It was estimated that the last volume would be released after 1960. The report noted that Parkinson and his wife had by then already released two history books for junior students, entitled "The Heroes" and "Malayan Fables."
Three months passed and the book remained unpublished. It was not until 1960 that British Intervention in Malaya (1867-1877), that first volume, finally found its way onto bookshelves and into libraries. By that time, the press reported, the series had expanded into a twelve-volume set.
Malayan history syllabus
In January 1951 Parkinson was interviewed by the New Zealand film producer and director Wynona "Noni" Hope Wright. He described his reorganisation of the Department of History over the previous term to accommodate a new syllabus. The interview took place in Parkinson's sitting room beneath a frieze depicting Malaya's history, painted by Parkinson himself. Departing from the usual syllabus, Parkinson had decided to leave out European history almost entirely in order to give greater focus to Southeast Asia, particularly Malaya. The course, designed as an experiment, took in the study of world history up to 1497 in the first year, the impact of the different European nations on Southeast Asia in the second year, and the study of Southeast Asia, particularly Malaya, after the establishment of British influence in the Straits Settlements in the third year. Students who completed the course and chose to specialise in history would then have been brought to a point where they could profitably undertake original research into the history of modern Malaya, i.e. the 19th and 20th centuries, an area where, according to Parkinson, little had been done, with hardly any serious research attempted for the period after 'the transfer' of 1867. Parkinson hoped that lecturing on this syllabus would ultimately produce a full-scale history of Malaya, which would involve uncovering documentation in Portuguese and Dutch sources from the period when those two countries still had a foothold in Malaya. He said that while the development of the Straits Settlements under the East India Company was well documented - the bulk of these records being archived at the Raffles Museum - local records after 1867 were not as plentiful, and it would be necessary to reconstruct them from microfilm copies of documents kept in the United Kingdom. The task facing the staff of the History Department was made formidable by their unfamiliarity with the Dutch and Portuguese languages. "I have no doubt that the history of Malaya must finally be written by Malayans, but we can at least do very much to prepare the way," Parkinson told Wright. "Scholars trained at this University in the spirit and technique of historical research, a study divorced from all racial and religious animosities, a study concerned only with finding the truth and explaining it in a lucid and attractive literary form, should be able to make a unique contribution to the mutual understanding of East and West," he said. "History apart, nothing seems to be of more vital importance in our time than the promotion of this understanding. In no field at the present time does the perpetuation of distrust and mutual incomprehension seem more dangerous. If we can, from this university, send forth graduates who can combine learning and ways of thought of the Far East and of the West, they may play a great part in overcoming the barriers of prejudice, insularity and ignorance," he concluded.
Radio Malaya programmes
In March 1951 Parkinson wrote a historical feature, "The China Fleet," for Radio Malaya, offering what was said to be a true account, in dramatic form, of an incident in the annals of the East India Company that had a marked influence on Malaya and other parts of Southeast Asia in the early part of the nineteenth century.
On 28 January 1952, at 9.40 p.m., he talked about the founding of Singapore.
Special Constabulary
In the middle of April 1951, Parkinson was sworn in as a special constable by ASP Watson of the Singapore Special Constabulary at the Oei Tiong Ham Hall, together with other members of staff and students, who were then placed under Parkinson's supervision. The members of this special constabulary, the University Corps, were informed of their duties and powers of arrest, issued with batons, and charged with the defence of the University in the event of trouble. P. Sherwood, Lecturer in Economics, was appointed Parkinson's assistant. These measures were taken to ensure that rioters would be dispersed and ejected if they trespassed onto University grounds. Parkinson signed a notice observing that some of the rioters who had taken part in the December disorders came from an area near the University buildings in Bukit Timah.
These precautions were taken in advance of the Maria Hertogh appeal on Monday 16 April. The case was postponed a number of times and was finally heard at the end of July.
Anglo-American Conference of Historians
Parkinson departed Singapore on Monday 18 June 1951 for London, where he represented the University of Malaya at the Fifth Anglo-American Conference of Historians from 9 to 14 July. He was to return in October at the start of the new academic year.
Resignation
In October 1958, while still on sabbatical in America – together with his wife and two young children, he had set off for America in May 1958 for study and travel and was due to return to work in April 1959 – Parkinson, through a letter sent from New York, resigned his position at the University of Malaya. K. G. Tregonning was, at that time, Acting Head of the History Department.
Parkinson was not the only one to resign while on leave. Professor E. H. G. Dobby of the Geography Department had also submitted his resignation while away on sabbatical. After deliberation, the University Council had decided, before the university's new constitution came into force on 15 January, that no legal action would be taken against Dobby, the majority of the council feeling there was no case against him since his resignation had occurred before new regulations governing sabbatical leave benefits were introduced. In Parkinson's case, however, the council determined that the resignation had been submitted after the regulations came into effect, and it decided to write to him asking that he report back to work by a certain date, failing which the council would be free to take whatever action it thought appropriate.
In July 1959, K. G. Tregonning, acting head of the History Department and a History Lecturer at the University of Malaya since 1952, was appointed to fill the Raffles History Chair left vacant by Parkinson's resignation. The press did not report whether the matter between Parkinson and the university was ever resolved.
Later life and death
After the death of his second wife in 1984, Parkinson married Iris Hilda Waters (d. 1994) in 1985 and moved to the Isle of Man. After two years there, they moved to Canterbury, Kent, where he died in March 1993, at the age of 83. He was buried in Canterbury, and the law named after him is quoted as his epitaph.
Published works
Richard Delancey series of naval novels (the number in parentheses indicates each novel's place in the series' internal chronology)
The Devil to Pay (1973)(2)
The Fireship (1975)(3)
Touch and Go (1977)(4)
Dead Reckoning (1978)(6)
So Near, So Far (1981)(5)
The Guernseyman (1982)(1)
Other nautical fiction
Manhunt (1990)
Other fiction
Ponies Plot (1965)
Biographies of fictional characters
The Life and Times of Horatio Hornblower (1970)
Jeeves: A Gentleman's Personal Gentleman (1979)
Naval history
Edward Pellew, Viscount Exmouth (1934)
The Trade Winds, Trade in the French Wars 1793–1815 (1948)
Samuel Walters, Lieut. RN (1949)
War in the Eastern Seas, 1793–1815 (1954)
Trade in the Eastern Seas (1955)
British Intervention in Malaya, 1867–1877 (1960)
Britannia Rules (1977)
Portsmouth Point, The Navy in Fiction, 1793–1815 (1948)
Other non-fiction
The Rise of the Port of Liverpool (1952)
Parkinson's law (1957)
The Evolution of Political Thought (1958)
The Law and the Profits (1960)
In-Laws and Outlaws (1962)
East and West (1963)
Parkinsanities (1965)
Left Luggage (1967)
Mrs. Parkinson's Law: and Other Studies in Domestic Science (1968)
The Law of Delay (1970)
The fur-lined mousetrap (1972)
The Defenders, Script for a "Son et Lumière" in Guernsey (1975)
Gunpowder, Treason and Plot (1978)
The Law, or Still in Pursuit (1979)
Audio recordings
Discusses Political Science with Julian H. Franklin (10 LPs) (1959)
Explains "Parkinson's Law" (1960)
References
Sources consulted
C. Northcote Parkinson on the Fantastic Fiction website
Turnbull, C. M. (2004) "Parkinson, Cyril Northcote (1909–1993)", in Oxford Dictionary of National Biography
Bibliography
Bibliography of C. Northcote Parkinson
External links
Parkinson's law and other texts analysed on BibNum (click "A télécharger", and find the English version)
C. Northcote Parkinson, Parkinson's Law - extract (1958)
English non-fiction writers
English satirists
English historical novelists
1909 births
1993 deaths
Military personnel from County Durham
People from Barnard Castle
Alumni of Emmanuel College, Cambridge
Academic staff of the National University of Singapore
Alumni of King's College London
London Regiment officers
Officers' Training Corps officers
Queen's Royal Regiment officers
Fellows of Emmanuel College, Cambridge
Academics of the University of Liverpool
Nautical historical novelists
People educated at St Peter's School, York
20th-century English novelists
20th-century English historians
English male novelists |
5623 | https://en.wikipedia.org/wiki/Canal | Canal | Canals or artificial waterways are waterways or engineered channels built for drainage management (e.g. flood control and irrigation) or for conveying water transport vehicles (e.g. water taxis). They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers.
In most cases, a canal has a series of dams and locks that create reservoirs of slow-moving water. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river, shares part of that river's discharges and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water while staying in its valley.
A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal.
Many canals have been built at elevations above the surrounding valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals.
The term was once used to describe linear features seen on the surface of Mars, Martian canals, an optical illusion.
Types of artificial waterways
A navigation is a series of channels that run roughly parallel to the valley and stream bed of an unimproved river. A navigation always shares the drainage basin of the river. A vessel uses the calm parts of the river itself as well as improvements, traversing the same changes in height.
A true canal is a channel that cuts across a drainage divide, making a navigable channel connecting two different drainage basins.
Structures used in artificial waterways
Both navigations and canals use engineered structures to improve navigation:
weirs and dams to raise river water levels to usable depths;
looping descents to create a longer and gentler channel around a stretch of rapids or falls;
locks to allow ships and barges to ascend/descend.
Since they cut across drainage divides, canals are more difficult to construct and often need additional improvements, like viaducts and aqueducts to bridge waters over streams and roads, and ways to keep water in the channel.
Types of canals
There are two broad types of canal:
Waterways: canals and navigations used for carrying vessels transporting goods and people. These can be subdivided into two kinds:
Those connecting existing lakes, rivers, other canals or seas and oceans.
Those connected in a city network: such as the Canal Grande and others of Venice; the grachten of Amsterdam or Utrecht, and the waterways of Bangkok.
Aqueducts: water supply canals used for the conveyance and delivery of potable water for municipal use, for hydroelectric power generation, and for agricultural irrigation.
Importance
Historically, canals were of immense importance to commerce and to the development, growth and vitality of civilizations. In 1855 the Lehigh Canal carried over 1.2 million tons of anthracite coal; by the 1930s the company that had built and operated it for over a century had shut it down. The few canals still in operation today are a fraction of the number that once fueled and enabled economic growth, and indeed were practically a prerequisite to further urbanization and industrialization, since the movement of bulk raw materials such as coal and ores is difficult and only marginally affordable without water transport. Such raw materials fueled the industrial developments and new metallurgy that resulted from the spiral of increasing mechanization during the 17th–20th centuries, leading to new research disciplines, new industries and economies of scale, and raising the standard of living of every industrialized society.
The surviving canals
Most ship canals today primarily serve bulk cargo and large ship transportation, whereas the once critical smaller inland waterways conceived and engineered as boat and barge canals have largely been supplanted: filled in, abandoned and left to deteriorate, or kept in service and staffed by state employees who maintain dams and locks for flood control or pleasure boating. Their replacement was gradual, beginning in the United States in the mid-1850s, where canal shipping was first augmented by, and then replaced by, railways that were much faster, far less geographically constrained, and generally cheaper to maintain.
By the early 1880s, canals that had little ability to compete economically with rail transport had largely disappeared. In the following decades oil increasingly displaced coal as the heating fuel of choice, and growth in coal shipments leveled off. Later, after World War I, when motor trucks came into their own, the last small U.S. barge canals saw a steady decline in cargo ton-miles, as did many railways: as road networks improved, the flexibility and hill-climbing capability of lorries allowed them to take over an increasing share of cargo hauling, with the added freedom to make deliveries well away from rail lines or from canals, which could not operate in the winter.
The longest extant canal today, the Grand Canal in China, remains in heavy use, especially the portion south of the Yellow River. It stretches 1,794 kilometres (1,115 miles) from Beijing to Hangzhou.
Construction
Canals are built in one of three ways, or a combination of the three, depending on available water and available path:
Human made streams
A canal can be created where no stream presently exists. Either the body of the canal is dug, or the sides of the canal are created by building dykes or levees from piled earth, stone, concrete or other building materials. The finished shape of the canal as seen in cross section is known as the canal prism. The water for the canal must be provided from an external source, such as streams or reservoirs. Where the new waterway must change elevation, engineering works such as locks, lifts or elevators are constructed to raise and lower vessels. Examples include canals that connect valleys across higher ground, such as the Canal du Midi, the Canal de Briare and the Panama Canal.
A canal can be constructed by dredging a channel in the bottom of an existing lake. When the channel is complete, the lake is drained and the channel becomes a new canal, serving both drainage of the surrounding polder and providing transport there. Examples include the . One can also build two parallel dikes in an existing lake, forming the new canal in between, and then drain the remaining parts of the lake. The eastern and central parts of the North Sea Canal were constructed in this way. In both cases pumping stations are required to keep the land surrounding the canal dry, either pumping water from the canal into surrounding waters, or pumping it from the land into the canal.
Canalization and navigations
A stream can be canalized to make its navigable path more predictable and easier to maneuver. Canalization modifies the stream to carry traffic more safely by controlling its flow through dredging, damming and modification of its path. This frequently includes the incorporation of locks and spillways that make the river a navigation. Examples include the Lehigh Canal in Northeastern Pennsylvania's Coal Region, the Basse Saône, the Canal de Mines de Fer de la Moselle, and the Aisne canal. Riparian zone restoration may be required.
Lateral canals
When a stream is too difficult to modify with canalization, a second stream can be created next to, or at least near, the existing stream. This is called a lateral canal, and it may meander in a large horseshoe bend or a series of curves some distance from the source stream's bed, lengthening the effective channel in order to lower the ratio of rise over run (the slope or pitch). The existing stream usually acts as the water source, and the landscape around its banks provides a path for the new body. Examples include the Chesapeake and Ohio Canal, Canal latéral à la Loire, Garonne Lateral Canal, Welland Canal and Juliana Canal.
Smaller transportation canals can carry barges or narrowboats, while ship canals allow seagoing ships to travel to an inland port (e.g., Manchester Ship Canal), or from one sea or ocean to another (e.g., Caledonian Canal, Panama Canal).
Features
At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling.
Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered connecting either two pieces of canal at a different level or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used.
Prior to the development of the pound lock in 984 AD in China by Chhaio Wei-Yo and later in Europe in the 15th century, either flash locks consisting of a single gate were used or ramps, sometimes equipped with rollers, were used to change the level. Flash locks were only practical where there was plenty of water available.
Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes where a caisson is hauled up a steep railway.
To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee.
Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals.
Some canals attempted to keep changes in level down to a minimum. These canals known as contour canals would take longer, winding routes, along which the land was a uniform altitude. Other, generally later, canals took more direct routes requiring the use of various methods to deal with the change in level.
Canals have various features to tackle the problem of water supply. In cases, like the Suez Canal, the canal is open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal.
Where large amounts of goods are loaded or unloaded such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods.
When a section of the canal needs to be sealed off so it can be drained for maintenance, stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank. On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance or to prevent a major loss of water due to a canal breach.
Canal falls
A canal fall, or canal drop, is a vertical drop in the canal bed. These are built when the natural ground slope is steeper than the desired canal gradient. They are constructed so the falling water's kinetic energy is dissipated in order to prevent it from scouring the bed and sides of the canal.
A canal fall is constructed by cut and fill. It may be combined with a regulator, bridge, or other structure to save costs.
There are various types of canal falls, based on their shape. One type is the ogee fall, where the drop follows an s-shaped curve to create a smooth transition and reduce turbulence. However, this smooth transition does not dissipate the water's kinetic energy, which leads to heavy scouring. As a result, the canal needs to be reinforced with concrete or masonry to protect it from eroding.
Another type of canal fall is the vertical fall, which is "simple and economical". These feature a "cistern", or depressed area just downstream from the fall, to "cushion" the water by providing a deep pool for its kinetic energy to be diffused in. Vertical falls work for drops of up to 1.5 m in height, and for discharge of up to 15 cubic meters per second.
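As a minimal illustration of the rule of thumb just quoted (vertical falls suit drops of up to 1.5 m and discharges of up to 15 cubic metres per second), the hypothetical helper below simply checks a proposed fall against those two limits. The function and parameter names are illustrative and not drawn from any real hydraulic engineering library.

```python
# Minimal sketch: encodes the quoted limits for a simple vertical canal fall.
# The thresholds come from the text above; names are hypothetical.

def vertical_fall_suitable(drop_m: float, discharge_m3s: float) -> bool:
    """Return True if a plain vertical fall is within the quoted limits."""
    MAX_DROP_M = 1.5           # maximum drop height in metres
    MAX_DISCHARGE_M3S = 15.0   # maximum discharge in cubic metres per second
    return drop_m <= MAX_DROP_M and discharge_m3s <= MAX_DISCHARGE_M3S

print(vertical_fall_suitable(1.2, 10.0))  # True: within both limits
print(vertical_fall_suitable(2.5, 10.0))  # False: drop is too high
```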
History
The transport capacity of pack animals and carts is limited. A mule can carry at most about an eighth of a ton over a journey measured in days and weeks, though considerably more over shorter distances and periods with appropriate rest. Carts, moreover, need roads. Transport over water is much more efficient and cost-effective for large cargoes.
Ancient canals
The oldest known canals were irrigation canals, built in Mesopotamia circa 4000 BC, in what is now Iraq. The Indus Valley civilization of ancient India (circa 3000 BC) had developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC, the first time such a planned civil engineering project had been undertaken in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan.
In ancient China, large canals for river transport were established as far back as the Spring and Autumn Period (8th–5th centuries BC), the longest of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. At 1,794 kilometres (1,115 miles) long, it was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal having existed since at least 486 BC. Even in its narrowest urban sections it is rarely less than wide.
In the 5th century BC, Achaemenid king Xerxes I of Persia ordered the construction of the Xerxes Canal through the base of Mount Athos peninsula, Chalkidiki, northern Greece. It was constructed as part of his preparations for the Second Persian invasion of Greece, a part of the Greco-Persian Wars. It is one of the few monuments left by the Persian Empire in Europe.
Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC.
There was little experience moving bulk loads by carts, while a pack-horse would [i.e. 'could'] carry only an eighth of a ton. On a soft road a horse might be able to draw 5/8ths of a ton. But if the load were carried by a barge on a waterway, then up to 30 tons could be drawn by the same horse.— technology historian Ronald W. Clark referring to transport realities before the industrial revolution and the Canal age.
Hohokam was a society in the North American Southwest in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major archaeological dig in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural Period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals.
The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water.
Middle Ages
In the Middle Ages, water transport was several times cheaper and faster than transport overland. Overland transport by animal-drawn conveyances was used around settled areas, but unimproved roads required pack animal trains, usually of mules, to carry any degree of mass; and while a mule could carry an eighth of a ton, it also needed teamsters to tend it, and one man could tend perhaps only five mules, so overland bulk transport was also expensive, since the men expected compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests and marshy or muddy quagmires as often as over unimproved but dry footing. In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina, built at the end of the 8th century under the personal supervision of Charlemagne.
In Britain, the Glastonbury Canal is believed to be the first post-Roman canal; it was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about . Its initial purpose is believed to have been the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century. More lasting and of greater economic impact were canals like the Naviglio Grande, built between 1127 and 1257 to connect Milan with the river Ticino. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe. Later, canals were built in the Netherlands and Flanders to drain the polders and assist the transportation of goods and people.
Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single, or flash, locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared, in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which is presumed to have been introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks.
To break out of the limitations caused by river valleys, the first summit level canals were developed with the Grand Canal of China in 581–617 AD whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398.
Africa
In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad I between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate conquest of the city but his progress was halted when he went to war with the Mossi Kingdoms.
Early modern period
In the early modern period (roughly 1500–1800), the first summit level canal to use pound locks in Europe was the Briare Canal connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683) connecting the Atlantic to the Mediterranean. The latter included a staircase of 8 locks at Béziers, a tunnel, and three major aqueducts.
Canal building progressed steadily in Germany in the 17th and 18th centuries, with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first canal built in the early modern period appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566.
The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook, which runs between Dedham, Massachusetts, and the Boston neighbourhood of Hyde Park, connecting the higher waters of the Charles River with the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills.
In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718.
Industrial Revolution
See also: History of the British canal system
See also: History of turnpikes and canals in the United States
The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities.
By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be.
The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741.
The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals.
In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal.
The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard.
The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater Canal reduced the price of coal in Manchester by nearly two-thirds within a year of its opening. The Bridgewater was also a huge financial success, earning back what had been spent on its construction within a few years.
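As a rough back-of-the-envelope check on these ratios, the sketch below uses only figures quoted in this article: about 30 tons per horse by barge (this paragraph), and roughly five-eighths of a ton per horse with a cart on a soft road or an eighth of a ton per pack-horse (the Clark quotation above). The resulting multipliers come out larger than the conservative "more than ten times" stated here, which presumably assumes better road conditions; the variable names are illustrative only.

```python
# Rough arithmetic using figures quoted in this article (illustrative only).
BARGE_TONS_PER_HORSE = 30.0        # one horse towing a barge (this paragraph)
CART_TONS_PER_HORSE = 5.0 / 8.0    # cart on a soft road (Clark quotation)
PACK_TONS_PER_HORSE = 1.0 / 8.0    # pack-horse load (Clark quotation)

print(BARGE_TONS_PER_HORSE / CART_TONS_PER_HORSE)  # 48.0 times a soft-road cart
print(BARGE_TONS_PER_HORSE / PACK_TONS_PER_HORSE)  # 240.0 times a pack-horse
```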
This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal.
The new canal system was both cause and effect of the rapid industrialization of The Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals.
For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods.
In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length.
Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other.
Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four were completed when the War of 1812 broke out: the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802), also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, so the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft (169 m). With its easy connections to most of the U.S. Midwest and to New York City, the Erie Canal quickly paid back all its invested capital (US$7 million) and began turning a profit. By cutting transportation costs in half or more it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. Assured of a market for their farm products, settlers moved into the U.S. Midwest far more quickly, and the profits generated by the Erie Canal project started a canal-building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution, and it was there that Samuel Slater built his first textile mill.
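To make the scale of the Erie Canal's lockage concrete, the quick calculation below divides the total elevation differential quoted above (about 565 ft, or 169 m) by the 36 locks. It yields only the average lift per lock; individual locks naturally varied, and the figures are those given in this article rather than an engineering survey.

```python
# Average lift per lock on the Erie Canal, using the figures quoted above.
TOTAL_RISE_FT = 565   # approximate total elevation differential in feet
TOTAL_RISE_M = 169    # the same differential in metres
LOCKS = 36            # number of locks on the original canal

print(round(TOTAL_RISE_FT / LOCKS, 1))  # ~15.7 ft average lift per lock
print(round(TOTAL_RISE_M / LOCKS, 1))   # ~4.7 m average lift per lock
```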
Power canals
See also: Power canal
A power canal refers to a canal used for hydraulic power generation rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be "The Cradle of the American Industrial Revolution," has a network of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts, Holyoke, Massachusetts, Manchester, New Hampshire, and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company.
19th century
Competition from railways from the 1830s, and from roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew to over 4,000 miles, with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way.
In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, with its 36 locks, had opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions served by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions by reducing transportation costs, which allowed Americans to ship and buy goods from farther away much more cheaply. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until it was replaced by a channelized river waterway.
Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie to bypass Niagara Falls, and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce. The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario on Lake Ontario. The Rideau Canal was built as a result of the War of 1812 to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to part of the St. Lawrence River, which was susceptible to blockade by the United States.
In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century.
Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914.
In the 19th century, a number of canals were built in Japan including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries.
A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing.
The second choice for a Central American canal was a Panama canal. The de Lesseps company, which ran the Suez Canal, first attempted to build a Panama Canal in the 1880s. The difficulty of the terrain and the weather (rain) it encountered drove the company into bankruptcy. High worker mortality from disease also discouraged further investment in the project. De Lesseps' abandoned excavating equipment still sits where it was left, the isolated, decaying machines now serving as tourist attractions.
Twenty years later, an expansionist United States, which had just acquired colonies after defeating Spain in the 1898 Spanish–American War and whose Navy had grown in importance, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already contemplating independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took the province of Panama away from Colombia and set up a puppet republic (Panama). Its currency, the balboa – a name suggesting that the country began as a way to get from one hemisphere to the other – was a replica of the US dollar, and the US dollar was and remains legal tender there. A U.S. military zone, the Canal Zone, with U.S. forces stationed in it (bases, two TV stations on channels 8 and 10, PXs, and a U.S.-style high school), split Panama in half. The Canal – a major engineering project – was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979, and the withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980.
Modern uses
Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals. Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016. The new set of locks allow transit of larger, Post-Panamax and New Panamax ships.
The narrow early industrial canals, however, have ceased to carry significant amounts of trade, and many have been closed to navigation, but they may still be used as a system for the transportation of untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal.
A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years.
The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands.
Canals have found another use in the 21st century, as easements for the installation of fibre optic telecommunications cabling, avoiding the need to bury the cables in roadways while facilitating access and reducing the hazard of damage from digging equipment.
Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area.
Cities on water
Canals are so deeply identified with Venice that many canal cities have been nicknamed "the Venice of…". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state.
Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the "Venice of the North" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium.
Suzhou was dubbed the "Venice of the East" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou are located along the lower mouth of the Yangtze River and Lake Tai, yet another source of small rivers and creeks, which have been canalized and developed for centuries.
Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan.
Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use.
Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks.
Boats
Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to long and wide and was primarily built for British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama canal where Panamax ships were limited to a length of and a beam of until 26 June 2016 when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to . At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels.
Lists of canals
Africa
Bahr Yussef
El Salam Canal Egypt
Ibrahimiya Canal Egypt
Mahmoudiyah Canal Egypt
Suez Canal Egypt
Asia
see List of canals in India
see List of canals in Pakistan
see History of canals in China
Europe
Danube–Black Sea Canal (Romania)
North Crimean Canal (Ukraine)
Canals of France
Canals of Amsterdam
Canals of Germany
Canals of Ireland
Canals of Russia
Canals of the United Kingdom
List of canals in the United Kingdom
Great Bačka Canal (Serbia)
North America
Canals of Canada
Canals of the United States
Lists of proposed canals
Eurasia Canal
Istanbul Canal
Nicaragua Canal
Salwa Canal
Thai Canal
Sulawesi Canal
Two Seas Canal
Northern river reversal
Balkan Canal or Danube–Morava–Vardar–Aegean Canal
Iranrud
See also
Barges of all types
Beaver, a non-human animal also known for canal building
Canal elevator
Calle canal
Canal & River Trust
Canal tunnel
Channel
Ditch
Environment Agency
History of the British canal system
Horse-drawn boat
Infrastructure
Irrigation district
Lists of canals
List of navigation authorities in the United Kingdom
List of waterways
List of waterway societies in the United Kingdom
Lock
Mooring
Navigable aqueduct
Navigation authority
Narrowboat
Power canal
Proposed canals
River
Ship canal
Tow path
Roman canals – (Torksey)
Volumetric flow rate
Water bridge
Waterscape
Water transportation
Waterway
Waterway restoration
Waterways in the United Kingdom
Weigh lock
External links
British Waterways' leisure website – Britain's official guide to canals, rivers and lakes
Leeds Liverpool Canal Photographic Guide
Information and Boater's Guide to the New York State Canal System
"Canals and Navigable Rivers" by James S. Aber, Emporia State University
National Canal Museum (US)
London Canal Museum (UK)
Canals in Amsterdam
Canal du Midi
Canal des Deux Mers
Canal flow measurement using a sensor.
Coastal construction
Water transport infrastructure
Artificial bodies of water
Infrastructure |
5626 | https://en.wikipedia.org/wiki/Cognitive%20science | Cognitive science | Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning.
The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution.
History
The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima). Modern philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke rejected scholasticism, mostly without ever having read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.
The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.
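The unit McCulloch and Pitts described is simple enough to sketch in a few lines of code. The following Python fragment is an illustrative reconstruction rather than their original formalism; the weights, thresholds, and gate names here are chosen for clarity. A unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and such units can be composed into logic gates and larger circuits.

```python
def mcp_unit(inputs, weights, threshold):
    """A toy McCulloch-Pitts style unit: binary inputs, fixed weights,
    fires (returns 1) when the weighted sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Simple logic gates built from single units (weights and thresholds are illustrative).
AND = lambda a, b: mcp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_unit([a],    [-1],   threshold=0)

# Composing units yields more complex Boolean functions, e.g. exclusive-or.
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))  # prints 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```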
Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.
The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition.
In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.
In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".
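The flavor of this symbolic style can be suggested with a short sketch, written here in Python rather than LISP and using invented rules and facts: knowledge is encoded as explicit if-then rules over symbols, and "reasoning" is the repeated application of those rules to a store of known facts (forward chaining).

```python
# A toy forward-chaining rule interpreter in the spirit of symbolic AI.
# The rules and facts below are invented purely for illustration.
rules = [
    ({"rainy", "must_go_out"}, "take_umbrella"),
    ({"take_umbrella", "umbrella_broken"}, "buy_umbrella"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"rainy", "must_go_out", "umbrella_broken"}, rules))
# The derived facts include 'take_umbrella' and, in a second step, 'buy_umbrella'.
```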
Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.
Principles
Levels of analysis
A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay of time; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real-time were available and it were known when each neuron fired it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior.
Marr gave a famous description of three levels of analysis:
The computational theory, specifying the goals of the computation;
Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
The hardware implementation, or how algorithm and representation may be physically realized.
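As a toy illustration of the distinction, using the phone-number example from the previous section (all details here are invented): the computational level specifies what is being done (store a number, recall it later), the algorithmic level commits to a particular representation and procedure (here, chunking the digits into groups), and the implementation level, how neurons would physically realize this, is deliberately left out of the sketch.

```python
# Algorithmic-level sketch of one possible representation (chunking) for the
# computational-level task "store a phone number and recall it later".
def encode(number: str, chunk_size: int = 3):
    digits = [d for d in number if d.isdigit()]
    return ["".join(digits[i:i + chunk_size])
            for i in range(0, len(digits), chunk_size)]

def recall(chunks):
    return "".join(chunks)

chunks = encode("555-867-5309")
print(chunks)          # ['555', '867', '530', '9']
print(recall(chunks))  # '5558675309'
```

The same computational-level task could be served by a different algorithm (for example, rote storage of single digits), which is exactly the kind of alternative Marr's middle level is meant to distinguish.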
Interdisciplinary nature
Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. As in the field of psychology, there is some doubt about whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in the plural.
Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind—the view that mental states and processes should be explained by their function – what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition.
Cognitive science: the term
The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics.
The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.
Scope
Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states.
Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.
Artificial intelligence
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured.
There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.
Attention
Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.
Bodily processes related to cognition
Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.
Knowledge and processing of language
The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences?
The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.
Learning and development
Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
Memory
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in "fill-in-the-blank")?
Perception and action
Perception is the ability to take in information via the senses, and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is by looking at how people process optical illusions. The Necker cube is a classic example of a bistable percept: the cube can be interpreted as being oriented in two different directions.
The study of haptic (tactile), olfactory, and gustatory stimuli also fall into the domain of perception.
Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.
Consciousness
Consciousness is the awareness of experiences within oneself.
It gives the mind the ability to experience or to feel a sense of self.
Research methods
Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
Behavioral experiments
In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).
Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. (A small worked example of this logic is sketched below, after these behavioral measures.)
Psychophysical responses. Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include:
sameness judgments for colors, tones, textures, etc.
threshold differences for colors, tones, textures, etc.
Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
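Returning to the reaction-time logic mentioned above, the following sketch uses fabricated reaction times to show the analysis in miniature: if mean reaction time grows roughly linearly with the number of display elements, the fitted slope is clearly positive, which is conventionally read as a signature of serial search, whereas a slope near zero would instead suggest parallel processing.

```python
# Toy reaction-time analysis: fit RT (in ms) as a linear function of set size.
# The data points below are invented for illustration.
set_sizes = [2, 4, 8, 16]
rts_ms    = [520, 610, 790, 1150]   # hypothetical mean reaction times

n = len(set_sizes)
mean_x = sum(set_sizes) / n
mean_y = sum(rts_ms) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts_ms))
         / sum((x - mean_x) ** 2 for x in set_sizes))
intercept = mean_y - slope * mean_x

print(f"RT is roughly {intercept:.0f} ms + {slope:.1f} ms per item")
# A clearly positive slope (about 45 ms per item here) is the classic signature
# of a serial search process in this kind of task.
```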
Brain imaging
Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.
Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.
Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution.
Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.
Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.
Computational modeling
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon.
Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid.
Symbolic modeling evolved from the computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). They were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s it was generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, including social and organizational cognition, interrelated with a sub-symbolic non-conscious layer.
Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and its problem-solving capacity derives from the connections between them. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level.
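A minimal connectionist sketch, using NumPy and an arbitrary toy task (exclusive-or), may make the contrast concrete: a small layered network of simple units learns an input-output mapping purely by adjusting its connection weights, with no explicit symbolic rules anywhere in the model.

```python
import numpy as np

# Tiny two-layer network trained by gradient descent on XOR, as a toy
# stand-in for "knowledge stored in connection weights". All settings
# (sizes, learning rate, iteration count) are arbitrary choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                  # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                # network output
    err = out - y                             # prediction error
    grad_out = err * out * (1 - out)          # error signal at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # error signal propagated back
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0], i.e. XOR
```

The trained weights solve the task, but, as the criticism quoted above points out, inspecting the individual connections says little about why the network behaves as it does.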
Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (Neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning.
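The Bayesian style can likewise be shown in miniature. In the sketch below the hypotheses, priors, and likelihoods are all invented for illustration: the model holds prior beliefs over hypotheses and updates them by Bayes' rule when a new piece of evidence arrives.

```python
# Toy Bayesian category inference: which category does an object belong to,
# given one observed feature? Priors and likelihoods are made up.
priors = {"bird": 0.5, "mammal": 0.5}
likelihood_flies = {"bird": 0.9, "mammal": 0.05}   # P(flies | category)

def posterior(priors, likelihood):
    """Apply Bayes' rule: multiply prior by likelihood, then normalize."""
    unnormalized = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(posterior(priors, likelihood_flies))
# {'bird': ~0.95, 'mammal': ~0.05}: the observation shifts belief toward 'bird'.
```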
All the above approaches tend either to be generalized to the form of integrated computational models of a synthetic/abstract intelligence (i.e. cognitive architecture) in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization etc.).
Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.
Single-unit recording
Direct brain stimulation
Animal models
Postmortem studies
Key findings
Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopia, and hemispatial neglect.
Notable researchers
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism.
Others include David Chalmers, who advocates Dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought.
In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent.
Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.
Computational theories (with models and simulations) have also been developed, by David Rumelhart, James McClelland and Philip Johnson-Laird.
Epistemics
Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge.
Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated."
In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.
In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.
Binding problem in cognitive science
One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" in connectionism).
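The synchronization idea can be illustrated with a generic Kuramoto-style model of coupled phase oscillators. This is a standard textbook construction rather than the specific neuroarchitectures cited above, and all parameters are arbitrary: oscillators standing in for neural populations pull one another's phases together, and the degree of phase alignment can be read off a single order parameter.

```python
import numpy as np

# Kuramoto-style toy model of binding-by-synchrony: coupled phase oscillators
# whose phases align over time. All parameter values are illustrative.
rng = np.random.default_rng(1)
n, coupling, dt, steps = 20, 1.5, 0.01, 3000
freqs = rng.normal(loc=10.0, scale=0.5, size=n)    # intrinsic frequencies (arbitrary units)
phases = rng.uniform(0, 2 * np.pi, size=n)         # random initial phases

def order_parameter(ph):
    """Close to 1.0 means near-perfect phase synchrony; near 0 means none."""
    return np.abs(np.mean(np.exp(1j * ph)))

print("before:", round(order_parameter(phases), 2))
for _ in range(steps):
    # Each oscillator is pulled toward the phases of all the others.
    pull = np.mean(np.sin(phases[None, :] - phases[:, None]), axis=1)
    phases = phases + dt * (freqs + coupling * pull)
print("after: ", round(order_parameter(phases), 2))  # should end up close to 1
```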
See also
Affective science
Cognitive anthropology
Cognitive biology
Cognitive computing
Cognitive ethology
Cognitive linguistics
Cognitive neuropsychology
Cognitive neuroscience
Cognitive psychology
Cognitive science of religion
Computational neuroscience
Computational-representational understanding of mind
Concept mining
Decision field theory
Decision theory
Dynamicism
Educational neuroscience
Educational psychology
Embodied cognition
Embodied cognitive science
Enactivism
Epistemology
Folk psychology
Heterophenomenology
Human Cognome Project
Human–computer interaction
Indiana Archives of Cognitive Science
Informatics (academic field)
List of cognitive scientists
List of psychology awards
Malleable intelligence
Neural Darwinism
Personal information management (PIM)
Qualia
Quantum cognition
Simulated consciousness
Situated cognition
Society of Mind theory
Spatial cognition
Speech–language pathology
Outlines
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more.
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more.
References
External links
"Cognitive Science" on the Stanford Encyclopedia of Philosophy
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
List of leading thinkers in cognitive science |
5630 | https://en.wikipedia.org/wiki/Copula%20%28linguistics%29 | Copula (linguistics) | In linguistics, a copula (plural: copulas or copulae; abbreviated cop) is a word or phrase that links the subject of a sentence to a subject complement, such as the word is in the sentence "The sky is blue" or the phrase was not being in the sentence "It was not being co-operative." The word copula derives from the Latin noun for a "link" or "tie" that connects two different things.
A copula is often a verb or a verb-like word, though this is not universally the case. A verb that is a copula is sometimes called a copulative or copular verb. In English primary education grammar courses, a copula is often called a linking verb. In other languages, copulas show more resemblances to pronouns, as in Classical Chinese and Guarani, or may take the form of suffixes attached to a noun, as in Korean, Beja, and Inuit languages.
Most languages have one main copula (in English, the verb "to be"), although some (like Spanish, Portuguese and Thai) have more than one, while others have none. While the term copula is generally used to refer to such principal verbs, it may also be used for a wider group of verbs with similar potential functions (like become, get, feel and seem in English); alternatively, these might be distinguished as "semi-copulas" or "pseudo-copulas".
Grammatical function
The principal use of a copula is to link the subject of a clause to a subject complement. A copular verb is often considered to be part of the predicate, the remainder being called a predicative expression. A simple clause containing a copula is illustrated below:
The book is on the table.
In that sentence, the noun phrase the book is the subject, the verb is serves as the copula, and the prepositional phrase on the table is the predicative expression. The whole expression is on the table may (in some theories of grammar) be called a predicate or a verb phrase.
The predicative expression accompanying the copula, also known as the complement of the copula, may take any of several possible forms: it may be a noun or noun phrase, an adjective or adjective phrase, a prepositional phrase (as above) or an adverb or another adverbial phrase expressing time or location.
The three components (subject, copula and predicative expression) do not necessarily appear in that order: their positioning depends on the rules for word order applicable to the language in question. In English (an SVO language), the ordering given above is the normal one, but certain variation is possible:
In many questions and other clauses with subject–auxiliary inversion, the copula moves in front of the subject: Are you happy?
In inverse copular constructions (see below) the predicative expression precedes the copula, but the subject follows it: In the room were three men.
It is also possible, in certain circumstances, for one (or even two) of the three components to be absent:
In null-subject (pro-drop) languages, the subject may be omitted, as it may from other types of sentence. In Italian, sono stanco means ‘I am tired’, literally ‘am tired’.
In non-finite clauses in languages like English, the subject is often absent, as in the participial phrase being tired or the infinitive phrase to be tired. The same applies to most imperative sentences like Be good!
For cases in which no copula appears, see below.
Any of the three components may be omitted as a result of various general types of ellipsis. In particular, in English, the predicative expression may be elided in a construction similar to verb phrase ellipsis, as in short sentences like I am; Are they? (where the predicative expression is understood from the previous context).
Inverse copular constructions, in which the positions of the predicative expression and the subject are reversed, are found in various languages. They have been the subject of much theoretical analysis, particularly in regard to the difficulty of maintaining, in the case of such sentences, the usual division into a subject noun phrase and a predicate verb phrase.
Another issue is verb agreement when both subject and predicative expression are noun phrases (and differ in number or person): in English, the copula typically agrees with the syntactical subject even if it is not logically (i.e. semantically) the subject, as in the cause of the riot is (not are) these pictures of the wall. Compare Italian la causa della rivolta sono queste foto del muro; notice the use of the plural sono to agree with plural "these photos" rather than with singular "the cause". In instances where an English syntactical subject comprises a prepositional object that is pluralized, however, the prepositional object agrees with the predicative expression, e.g. "What kind of birds are those?"
The definition and scope of the concept of a copula is not necessarily precise in any language. As noted above, though the concept of the copula in English is most strongly associated with the verb to be, there are many other verbs that can be used in a copular sense as well.
The boy became a man.
The girl grew more excited as the holiday preparations intensified.
The dog felt tired from the activity.
And more tenuously
The milk turned sour.
The food smells good.
You seem upset.
Other functions
A copular verb may also have other uses supplementary to or distinct from its uses as a copula. Some co-occurrences are common.
Auxiliary verb
The English verb to be is also used as an auxiliary verb, especially for expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle).
Other languages' copulas have additional uses as auxiliaries. For example, French être can be used to express passive voice similarly to English be; both French être and German sein are used to express the perfect forms of certain verbs (formerly English be was also).
The auxiliary functions of these verbs derived from their copular function, and could be interpreted as special cases of the copular function (with the verbal forms it precedes being considered adjectival).
Another auxiliary usage in English is to denote an obligatory action or expected occurrence: "I am to serve you;" "The manager is to resign." This can be put also into past tense: "We were to leave at 9." For forms like "if I was/were to come," see English conditional sentences. (By certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.)
Existential verb
The English to be and its equivalents in certain other languages also have a non-copular use as an existential verb, meaning "to exist." This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence.
Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are je pense, donc je suis and cogito ergo sum, where suis and sum are the equivalents of English "am," normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version pienso, luego existo (where the verb existir "to exist" is used rather than the copula ser or estar ‘to be’).
Another type of existential usage is in clauses of the there is… or there are… type. Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French il y a (which uses parts of the verb avoir ‘to have,’ not the copula) or the Swedish det finns (the passive voice of the verb for "to find"). For details, see existential clause.
Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions.
Meanings
Predicates formed using a copula may express identity: that the two noun phrases (subject and complement) have the same referent or express an identical concept.
They may also express membership of a class or a subset relationship.
Similarly they may express some property, relation or position, permanent or temporary.
Essence vs. state
Some languages use different copulas, or different syntax, to denote a permanent, essential characteristic of something versus a temporary state. For examples, see the sections on the Romance languages, Slavic languages and Irish.
Forms
In many languages the principal copula is a verb, like English (to) be, German sein, Mixtec kuu, Touareg emous, etc. It may inflect for grammatical categories like tense, aspect and mood, like other verbs in the language. Being a very commonly used verb, it is likely that the copula has irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details).
Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: in Inuit languages.
In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns.
Zero copula
In some languages, copula omission occurs within a particular grammatical context. For example, speakers of Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense: Russian: я человек, ya chelovek ‘I (am a) human;’ Indonesian: saya manusia ‘I (am) a human;’ Turkish: o insan ‘s/he (is a) human;’ Hungarian: ő ember ‘s/he (is) a human;’ Arabic: أنا إنسان, ‘I (am a) human;’ Hebrew: אני אדם, ʔani ʔadam "I (am a) human;" Geʽez: አነ ብእሲ/ብእሲ አነ ʔana bəʔəsi / bəʔəsi ʔana "I (am a) man" / "(a) man I (am)"; Southern Quechua: payqa runam "s/he (is) a human." The usage is known generically as the zero copula. In other tenses (sometimes in forms other than third person singular), the copula usually reappears.
Some languages drop the copula in poetic or aphorismic contexts. Examples in English include
The more, the better.
Out of many, one.
True that.
Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages.
In informal speech of English, the copula may also be dropped in general sentences, as in "She a nurse." It is a feature of African-American Vernacular English, but is also used by a variety of other English speakers. An example is the sentence "I saw twelve men, each a soldier."
Examples in specific languages
In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: "the house is large" can be expressed simply as "large the house (is)," with no overt copula.
In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (kan): Payqa runam — "(s)he is a human;" but: (paykuna) runakunam kanku "(they) are human."
In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — He nui te whare, literally "a big the house," "the house (is) big;" I te tēpu te pukapuka, literally "at (past locative particle) the table the book," "the book (was) on the table;" Nō Ingarangi ia, literally "from England (s)he," "(s)he (is) from England," Kei te kai au, literally "at the (act of) eating I," "I (am) eating."
Alternatively, in many cases, the particle ko can be used as a copulative (though not all instances of ko are used as thus, like all other Maori particles, ko has multiple purposes): Ko nui te whare "The house is big;" Ko te pukapuka kei te tēpu "It is the book (that is) on the table;" Ko au kei te kai "It is me eating."
However, when expressing identity or class membership, ko must be used: Ko tēnei tāku pukapuka "This is my book;" Ko Ōtautahi he tāone i Te Waipounamu "Christchurch is a city in the South Island (of New Zealand);" Ko koe tōku hoa "You are my friend."
When expressing identity, ko can be placed on either object in the clause without changing the meaning (ko tēnei tāku pukapuka is the same as ko tāku pukapuka tēnei) but not on both (ko tēnei ko tāku pukapuka would be equivalent to saying "it is this, it is my book" in English).
In Hungarian, zero copula is restricted to present tense in third person singular and plural: Ő ember/Ők emberek — "s/he is a human"/"they are humans;" but: (én) ember vagyok "I am a human," (te) ember vagy "you are a human," mi emberek vagyunk "we are humans," (ti) emberek vagytok "you (all) are humans." The copula also reappears for stating locations: az emberek a házban vannak, "the people are in the house," and for stating time: hat óra van, "it is six o'clock." However, the copula may be omitted in colloquial language: hat óra (van), "it is six o'clock."
Hungarian uses copula lenni for expressing location: Itt van Róbert "Bob is here," but it is omitted in the third person present tense for attribution or identity statements: Róbert öreg "Bob is old;" ők éhesek "They are hungry;" Kati nyelvtudós "Cathy is a linguist" (but Róbert öreg volt "Bob was old," éhesek voltak "They were hungry," Kati nyelvtudós volt "Cathy was a linguist").
In Turkish, both the third person singular and the third person plural copulas are omittable. Ali burada and Ali buradadır both mean "Ali is here," and Onlar aç and Onlar açlar both mean "They are hungry." Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal.
The Turkish first person singular copula suffix is omitted when introducing oneself. Bora ben ("I am Bora") is grammatically correct, but Bora benim (the same sentence with the copula) is not used for an introduction (though it is grammatically correct in other contexts).
Further restrictions may apply before omission is permitted. For example, in the Irish language, is, the present tense of the copula, may be omitted when the predicate is a noun. Ba, the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., é, í, iad) preceding the noun is omitted as well.
Copula-like words
Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the examples given above (The boy became a man; The food smells good; You seem upset).
(This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.)
Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion.
These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae.
In particular languages
Indo-European
In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German ist, Latin est, Persian ast and Russian jest', even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: *es- (*h1es-), *sta- (*steh2-), *wes- and *bhu- (*bʰuH-).
English
The English copular verb be has eight forms (more than any other English verb): be, am, is, are, being, was, were, been. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula.
The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under .
A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... The acceptability of this construction is a disputed matter in English prescriptive grammar.
The simple English copula "be" may on occasion be substituted by other verbs with near identical meanings.
Persian
In Persian, the verb to be can either take the form of ast (cognate to English is) or budan (cognate to be).
Aseman abi ast. (آسمان آبی است) "the sky is blue"
Aseman abi khahad bood. (آسمان آبی خواهد بود) "the sky will be blue"
Aseman abi bood. (آسمان آبی بود) "the sky was blue"
Hindustani
In Hindustani (Hindi and Urdu), the copula होना ɦonɑ ہونا can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative).
Besides the verb होना honā (to be), there are three other verbs which can also be used as the copula: रहना rêhnā (to stay), जाना jānā (to go), and आना ānā (to come).
Romance
Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be," the main one from the Latin esse (via Vulgar Latin essere; esse deriving from *es-), often referenced as sum (another of the Latin verb's principal parts) and a secondary one from stare (from *sta-), often referenced as sto. The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. egon and izan). (The English words just used, "essential" and "state," are also cognate with the Latin infinitives esse and stare. The word "stay" also comes from Latin stare, through Middle French estai, stem of Old French ester.) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas (ser and estar), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese.
In some cases, the choice of verb itself changes the meaning of the adjective or of the sentence; in Portuguese, for example, the same adjective can describe an inherent quality with ser but a temporary state or condition with estar.
Slavic
Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case.
As noted above under , Russian and other North Slavic languages generally or often omit the copula in the present tense.
Irish
In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics.
Describing the subject's state or situation typically uses the normal VSO ordering with the verb bí. The copula is is used to state essential characteristics or equivalences.
Is fear é Liam. "Liam is a man." (Lit., "Is man Liam.")
Is leabhar é sin. "That is a book." (Lit., "Is book it that.")
The word is is the copula (rhymes with the English word "miss").
The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, é is used (for "he" or "it"), as opposed to the normal pronoun sé; for a feminine singular noun, í is used (for "she" or "it"), as opposed to normal pronoun sí; for plural nouns, iad is used (for "they" or "those"), as opposed to the normal pronoun siad.
To describe being in a state, condition, place, or act, the verb "to be" is used: Tá mé ag rith. "I am running."
Arabic dialects
North Levantine Arabic
The North Levantine Arabic dialect, spoken in Syria and Lebanon, has a negative copula formed from a negative marker and a suffixed pronoun.
Bantu languages
Chichewa
In Chichewa, a Bantu language spoken mainly in Malawi, a distinction very similar to that of Spanish and Portuguese exists between permanent and temporary states, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is ndi (negative sí):
iyé ndi mphunzitsi "he is a teacher"
iyé sí mphunzitsi "he is not a teacher"
For the 1st and 2nd persons the particle ndi is combined with pronouns, e.g. ine "I":
ine ndine mphunzitsi "I am a teacher"
iwe ndiwe mphunzitsi "you (singular) are a teacher"
ine síndine mphunzitsi "I am not a teacher"
For temporary states and location, the copula is the appropriate form of the defective verb -li:
iyé ali bwino "he is well"
iyé sáli bwino "he is not well"
iyé ali ku nyumbá "he is in the house"
For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix:
ine ndili bwino "I am well"
iwe uli bwino "you (sg.) are well"
kunyumbá kuli bwino "at home (everything) is fine"
In the past tenses, -li is used for both types of copula:
iyé analí bwino "he was well (this morning)"
iyé ánaalí mphunzitsi "he was a teacher (at that time)"
In the future, subjunctive, or conditional tenses, a form of the verb khala ("sit/dwell") is used as a copula:
máwa ákhala bwino "he'll be fine tomorrow"
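The tense-based split just described amounts to a small decision rule. The sketch below is only an illustration of that description: the function name and labels are invented for this example, a two-way permanent/temporary flag stands in for the real grammatical distinction, and tones, subject prefixes, and negation are left out.

```python
# Illustrative sketch of the copula choice described above; not a full model of
# Chichewa morphology (tones, subject prefixes, and negation are omitted).

def chichewa_copula_stem(tense: str, permanent: bool) -> str:
    """Pick the copula stem suggested by the description above."""
    if tense == "present":
        # The present tense distinguishes permanent states (ndi, negative sí)
        # from temporary states and locations (the defective verb -li).
        return "ndi" if permanent else "-li"
    if tense == "past":
        return "-li"            # -li serves both types of copula in the past
    # future, subjunctive, conditional: a form of khala ("sit/dwell")
    return "khala"

assert chichewa_copula_stem("present", permanent=True) == "ndi"     # iyé ndi mphunzitsi
assert chichewa_copula_stem("present", permanent=False) == "-li"    # iyé ali bwino
assert chichewa_copula_stem("past", permanent=True) == "-li"        # iyé ánaalí mphunzitsi
assert chichewa_copula_stem("future", permanent=False) == "khala"   # máwa ákhala bwino
```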
Muylaq' Aymaran
Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry.
Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. However, in a verb phrase like "It is old," the noun thantha meaning "old" does not require the copulative verbalizer, thantha-wa "It is old."
It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears prior to vowel-suppressing suffixes in the interlinear gloss to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other (e.g. phonotactic) motivations). Consider the verb sara- which is inflected for the first person simple tense and so, predictably, loses its final root vowel: sar(a)-ct-wa "I go."
However, prior to the suffixation of the first person simple suffix -ct to the same root nominalized with the agentive nominalizer -iri, the word must be verbalized. The fact that the final vowel of -iri below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: sar(a)-iri-ʋ-t-wa "I usually go."
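The role of the zero-surface verbalizer in protecting the vowel of -iri can be mimicked with a toy morpheme-concatenation routine. This is only a sketch of the analysis quoted above: the suffix inventory, the vowel-deletion flags, and the treatment of the root vowel before -iri are inferred from the two segmented forms in the text and are assumptions for illustration, not attested grammar.

```python
# Toy model of the vowel-suppression analysis sketched above. The flags are read
# off the segmentations sar(a)-t-wa and sar(a)-iri-ʋ-t-wa quoted in the text.

def build(root: str, suffixes: list) -> str:
    """Concatenate morphemes; a True flag deletes the final vowel of the
    immediately preceding morpheme, if that morpheme has one."""
    morphs = [root]
    for surface, deletes_preceding_vowel in suffixes:
        if deletes_preceding_vowel and morphs[-1] and morphs[-1][-1] in "aiu":
            morphs[-1] = morphs[-1][:-1]
        morphs.append(surface)
    return "".join(morphs)

IRI = ("iri", True)         # agentive nominalizer; the preceding root vowel drops
VERBALIZER = ("", False)    # copulative verbalizer: no phonetic content of its own
FIRST_SG = ("t", True)      # lexically prespecified to suppress the preceding vowel
WA = ("wa", False)

print(build("sara", [FIRST_SG, WA]))                     # sartwa    "I go"
print(build("sara", [IRI, VERBALIZER, FIRST_SG, WA]))    # sariritwa "I usually go"
# In the second form the vowel of -iri survives: the suppressing -t sits next to
# the zero-surface verbalizer, so there is no vowel for it to delete.
```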
It is worthwhile to compare the copulative verbalizer of Muylaq' Aymara with that of La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house," in which the nominal root uta-ni "house-attributive" is verbalized with the copulative verbalizer; note, however, that the correspondence between the copulative verbalizers in these two variants is not always a strict one-to-one relation.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| La Paz Aymara:
|ma: jisk'a uta-ni-:-ct(a)-wa
|-
| Muylaq' Aymara:
|ma isk'a uta-ni-ʋ-ct-wa
|}
Georgian
As in English, the verb "to be" (qopna) is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots -ar-, -kn-, -qav-, and -qop- (past participle) are used in the present tense, future tense, past tense and the perfective tenses respectively. Examples:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Masc'avlebeli var.
| "I am a teacher."
|-
| Masc'avlebeli viknebi.
| "I will be a teacher."
|-
| Masc'avlebeli viqavi.
| "I was a teacher."
|-
| Masc'avlebeli vqopilvar.
| "I have been a teacher."
|-
| Masc'avlebeli vqopiliqavi.
| "I had been a teacher."
|}
In the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root qop (which is the expected root for the perfective tense) is followed by the root ar, which is the root for the present tense. In the pluperfect, again, the root qop is followed by the past tense root qav. This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Ich bin Lehrer gewesen.
| "I have been a teacher," literally "I am teacher been."
|-
| Ich war Lehrer gewesen.
| "I had been a teacher," literally "I was teacher been."
|}
Here, gewesen is the past participle of sein ("to be") in German. In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects.
Haitian Creole
Haitian Creole, a French-based creole language, has three forms of the copula: se, ye, and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration).
Although no textual record exists of Haitian-Creole at its earliest stages of development from French, se is derived from French c'est, the normal French contraction of ce ("that") and the copula est ("is," a form of the verb être).
The derivation of ye is less obvious, but we can assume that the French source was il est ("he/it is"), which, in rapidly spoken French, is very commonly pronounced as y est.
The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian-Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula.
Which of se / ye / Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules:
1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Li te Ø an Ayiti.
| "She was in Haiti." || (Lit., "She past-tense in Haiti.")
|-
| Liv-la Ø jon.
| "The book is yellow." || (Lit., "Book-the yellow.")
|-
| Timoun-yo Ø lakay.
| "The kids are [at] home." || (Lit., "Kids-the home.")
|}
2. Use se when the complement is a noun phrase. But, whereas other verbs come after any tense/mood/aspect particles (like pa to mark negation, or te to explicitly mark past tense, or ap to mark progressive aspect), se comes before any such particles:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Chal se ekriven.
| "Charles is writer."
|-
| Chal, ki se ekriven, pa vini.
| "Charles, who is writer, not come."
|}
3. Use se where French and English have a dummy "it" subject:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Se mwen!
| "It's me!" French C'est moi!
|-
| Se pa fasil.
| "It's not easy," colloquial French C'est pas facile.
|}
4. Finally, use the other copula form ye in situations where the sentence's syntax leaves the copula at the end of a phrase:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| Kijan ou ye?
| "How you are?"
|-
| Pou kimoun liv-la te ye?
| "Whose book was it?" || (Lit., "Of who book-the past-tense is?)
|-
| M pa konnen kimoun li ye.
| "I don't know who he is." || (Lit., "I not know who he is.")
|-
| Se yon ekriven Chal ye.
| "Charles is a writer!" || (Lit., "It's a writer Charles is;" cf. French C'est un écrivain qu'il est.)
|}
The above is, however, only a simplified analysis.
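Read as a decision procedure, the four rules can be restated in a few lines. The sketch below is a restatement only: the parameter names and the coarse clause representation (a complement-type label, a clause-final flag, a dummy-subject flag) are invented for illustration and gloss over exactly the syntactic detail that the "simplified analysis" caveat refers to.

```python
# Rough restatement of the four rules above as a selector; the parameters are an
# invented, highly simplified stand-in for the real syntactic conditions.

def haitian_copula(complement: str, clause_final: bool = False,
                   dummy_subject: bool = False) -> str:
    if clause_final:                   # rule 4: copula stranded at the end of its phrase
        return "ye"
    if dummy_subject:                  # rule 3: dummy "it" subject (Se mwen!, Se pa fasil.)
        return "se"
    if complement == "noun phrase":    # rule 2: Chal se ekriven.
        return "se"
    # rule 1: adjective, prepositional, and adverb phrase complements take no overt copula
    return "Ø"

print(haitian_copula("adjective phrase"))                  # Ø  (Liv-la Ø jon.)
print(haitian_copula("noun phrase"))                       # se (Chal se ekriven.)
print(haitian_copula("adverb phrase", clause_final=True))  # ye (Kijan ou ye?)
```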
Japanese
The Japanese copula (most often translated into English as an inflected form of "to be") has many forms. For example, the plain form da is used predicatively, na attributively, ni or de adverbially or as a connector, and the polite desu predicatively or as a politeness indicator.
Examples:
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || "I'm a student." || (lit., I TOPIC student COPULA)
|-
|
| || "This is a pen." || (lit., this TOPIC pen COPULA-POLITE)
|}
Desu is the polite form of the copula. Thus, many sentences like the ones below are almost identical in meaning and differ only in the speaker's politeness to the addressee and in nuance of how assured the person is of their statement.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || "That's a hotel." || (lit., that TOPIC hotel COPULA)
|-
|
| || "That is a hotel." || (lit., that TOPIC hotel COPULA-POLITE)
|}
A predicate in Japanese is expressed by the predicative form of a verb, the predicative form of an adjective, or a noun plus the predicative form of a copula.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || "This beer is delicious."
|-
|
| || "This beer is delicious."
|-
| *
| * || colspan=2 | This is grammatically incorrect because da can only be coupled with a noun to form a predicate.
|}
Other forms of copula:
de aru, de arimasu (used in writing and formal speaking)
de gozaimasu (used in public announcements, notices, etc.)
The copula is subject to dialectal variation throughout Japan, resulting in forms like ya in Kansai and ja in Hiroshima.
Japanese also has two verbs corresponding to English "to be": aru and iru. They are not copulas but existential verbs. Aru is used for inanimate objects, including plants, whereas iru is used for animate things like people, animals, and robots, though there are exceptions to this generalization.
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
|
| || "The book is on a table."
|-
|
| || "Kobayashi is here."
|}
Japanese speakers, when learning English, often drop the auxiliary verbs "be" and "do," incorrectly believing that "be" is a semantically empty copula equivalent to desu and da.
Korean
For sentences with predicate nominatives, the copula "이" (i-) is added to the predicate nominative (with no space in between).
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| 바나나는 과일이다.
| Ba-na-na-neun gwa-il-i-da. || "Bananas are a fruit."
|}
Some adjectives (usually colour adjectives) are nominalized and used with the copula "이"(i-).
1. Without the copula "이"(i-):
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| 장미는 빨개요.
| Jang-mi-neun ppal-gae-yo.|| "Roses are red."
|}
2. With the copula "이"(i-):
{| border="0" cellspacing="2" cellpadding="1"
| align=left | || align=right | || align=left |
|-
| 장미는 빨간색이다.
| Jang-mi-neun ppal-gan-saek-i-da.|| "Roses are red-coloured."
|}
Some Korean adjectives are derived using the copula. Separating these adjectives and nominalizing the former part will often result in a sentence with a related, but different, meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable, as the listener can infer what the speaker is trying to say from the context.
Chinese
In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., in Chinese, "to be tired" (累 lèi), "to be hungry" (饿 è), "to be located at" (在 zài), "to be stupid" (笨 bèn) and so forth. A sentence can consist simply of a pronoun and such a verb: for example, 我饿 wǒ è ("I am hungry"). Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very," "not," "quite," etc.); when not otherwise qualified, they are often preceded by 很 hěn, which in other contexts means "very," but in this use often has no particular meaning.
Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": 是 shì. This is used frequently; for example, instead of having a verb meaning "to be Chinese," the usual expression is "to be a Chinese person" (我是中国人 wǒ shì Zhōngguórén, "I am a Chinese person," "I am Chinese"). This is sometimes called an equative verb. Another possibility is for the complement to be just a noun modifier (ending in 的 de), with the noun omitted.
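The division of labour just described (nouns take 是, while qualities are verbs in their own right and often pick up a default 很 when not otherwise qualified) can be caricatured as a one-rule helper. The function below is only that caricature, with invented parameter names; it is not a model of Mandarin syntax, and real usage (such as the bare 我饿 above) is freer than its output suggests.

```python
# Toy illustration of the rule of thumb described above: noun complements take
# the copula 是 shì, while quality/stative predicates take no copula and often a
# default, semantically bleached 很 hěn.

def mandarin_clause(subject: str, predicate: str, predicate_is_noun: bool) -> str:
    if predicate_is_noun:
        return f"{subject}是{predicate}"   # copular 是 with a noun complement
    return f"{subject}很{predicate}"       # stative verb, no copula

print(mandarin_clause("我", "中国人", predicate_is_noun=True))   # 我是中国人 "I am Chinese"
print(mandarin_clause("他", "累", predicate_is_noun=False))      # 他很累 "He is tired"
```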
Before the Han dynasty, the character 是 served as a demonstrative pronoun meaning "this." (This usage survives in some idioms and proverbs.) Some linguists believe that 是 developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese one could say, for example, "George W. Bush, this president of the United States," meaning "George W. Bush is the president of the United States"). The character 是 appears to be formed as a compound of characters with the meanings of "early" and "straight."
Another use of 是 in modern Chinese is in combination with the modifier 的 de to mean "yes" or to show agreement. For example:
Question: 你的汽车是不是红色的? nǐ de qìchē shì bú shì hóngsè de? "Is your car red or not?"
Response: 是的 shì de "Is," meaning "Yes," or 不是 bú shì "Not is," meaning "No."
(A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct," 对 duì; the corresponding negative answer is 不对 bú duì, "not right.")
Yet another use of 是 is in the shì...(de) construction, which is used to emphasize a particular element of the sentence.
In Hokkien, 是 sī acts as the copula, and 是 is likewise the equivalent in Wu Chinese. Cantonese uses 係 (hai6) instead of 是; similarly, Hakka uses 係 he55.
Siouan languages
In Siouan languages like Lakota, in principle almost all words—according to their structure—are verbs. So not only (transitive, intransitive and so-called "stative") verbs but even nouns often behave like verbs and do not need to have copulas.
For example, the word wičháša refers to a man, and the verb "to-be-a-man" is expressed as wimáčhaša/winíčhaša/wičháša (I am/you are/he is a man). Yet there also is a copula héčha (to be a ...) that in most cases is used: wičháša hemáčha/heníčha/héčha (I am/you are/he is a man).
In order to express the statement "I am a doctor by profession," one has to say pezuta wičháša hemáčha. But, in order to express that that person is THE doctor (say, the one who had been phoned to help), one must use another copula iyé (to be the one): pežúta wičháša (kiŋ) miyé yeló (medicine-man DEF ART I-am-the-one MALE ASSERT).
In order to refer to space (e.g., Robert is in the house), various verbs are used, e.g., yaŋkÁ (lit., to sit) for humans, or háŋ/hé (to stand upright) for inanimate objects of a certain shape. "Robert is in the house" could be translated as Robert thimáhel yaŋké (yeló), whereas "There's one restaurant next to the gas station" translates as Owótethipi wígli-oínažiŋ kiŋ hél isákhib waŋ hé.
Constructed languages
The constructed language Lojban has two words that act similarly to a copula in natural languages. The clause me ... me'u turns whatever follows it into a predicate that means to be (among) what it follows. For example, me la .bob. (me'u) means "to be Bob," and me le ci mensi (me'u) means "to be one of the three sisters." Another one is du, which is itself a predicate that means all its arguments are the same thing (equal). One word which is often confused for a copula in Lojban, but is not one, is cu. It merely indicates that the word which follows is the main predicate of the sentence. For example, lo pendo be mi cu zgipre means "my friend is a musician," but the word cu does not correspond to English is; instead, the word zgipre, which is a predicate, corresponds to the entire phrase "is a musician." The word cu is used to prevent lo pendo be mi zgipre, which would mean "the friend-of-me type of musician."
See also
Indo-European copula
Nominal sentence
Stative verb
Subject complement
Zero copula
5635 | https://en.wikipedia.org/wiki/Christopher%20Columbus | Christopher Columbus | Christopher Columbus (between 25 August and 31 October 1451 – 20 May 1506) was an Italian explorer and navigator from the Republic of Genoa who completed four Spanish-based voyages across the Atlantic Ocean sponsored by the Catholic Monarchs, opening the way for the widespread European exploration and European colonization of the Americas. His expeditions were the first known European contact with the Caribbean and Central and South America.
The name Christopher Columbus is the anglicisation of the Latin Christophorus Columbus. Growing up on the coast of Liguria, he went to sea at a young age and travelled widely, as far north as the British Isles and as far south as what is now Ghana. He married Portuguese noblewoman Filipa Moniz Perestrelo, who bore a son Diego, and was based in Lisbon for several years. He later took a Castilian mistress, Beatriz Enríquez de Arana, who bore a son, Ferdinand.
Largely self-educated, Columbus was knowledgeable in geography, astronomy, and history. He developed a plan to seek a western sea passage to the East Indies, hoping to profit from the lucrative spice trade. After the Granada War, and Columbus's persistent lobbying in multiple kingdoms, the Catholic Monarchs, Queen Isabella I and King Ferdinand II agreed to sponsor a journey west. Columbus left Castile in August 1492 with three ships and made landfall in the Americas on 12 October, ending the period of human habitation in the Americas now referred to as the pre-Columbian era. His landing place was an island in the Bahamas, known by its native inhabitants as Guanahani. He then visited the islands now known as Cuba and Hispaniola, establishing a colony in what is now Haiti. Columbus returned to Castile in early 1493, with captured natives. Word of his voyage soon spread throughout Europe.
Columbus made three further voyages to the Americas, exploring the Lesser Antilles in 1493, Trinidad and the northern coast of South America in 1498, and the east coast of Central America in 1502. Many names he gave to geographical features, particularly islands, are still in use. He gave the name indios ("Indians") to the indigenous peoples he encountered. The extent to which he was aware the Americas were a wholly separate landmass is uncertain; he never clearly renounced his belief he had reached the Far East. As a colonial governor, Columbus was accused by some of his contemporaries of significant brutality and removed from the post. Columbus's strained relationship with the Crown of Castile and its colonial administrators in America led to his arrest and removal from Hispaniola in 1500, and later to protracted litigation over the privileges he and his heirs claimed were owed to them by the crown.
Columbus's expeditions inaugurated a period of exploration, conquest, and colonization that lasted for centuries, thus bringing the Americas into the European sphere of influence. The transfer of commodities, ideas, and people between the Old World and New World that followed his first voyage are known as the Columbian exchange. These events and the effects which persist to the present are often cited as the beginning of the modern era. Columbus was widely celebrated in the centuries after his death, but public perception fractured in the 21st century due to greater attention to the harms committed under his governance, particularly the beginning of the depopulation of Hispaniola's indigenous Taínos, caused by Old World diseases and mistreatment, including slavery. Many places in the Western Hemisphere bear his name, including the South American country of Colombia, the Canadian province of British Columbia, the American city Columbus, Ohio, and the U.S. capital, the District of Columbia.
Early life
Columbus's early life is obscure, but scholars believe he was born in the Republic of Genoa between 25 August and 31 October 1451. His father was Domenico Colombo, a wool weaver who worked in Genoa and Savona and owned a cheese stand at which young Christopher worked. His mother was Susanna Fontanarossa. He had three brothers—Bartholomew, Giovanni Pellegrino, and Giacomo (also called Diego)—as well as a sister, Bianchinetta. Bartholomew ran a cartography workshop in Lisbon for at least part of his adulthood.
He is presumed to have spoken a Genoese dialect (Ligurian) as his first language, though Columbus probably never wrote in it. His name in 16th-century Genoese was Cristoffa Corombo; in Italian, Cristoforo Colombo; and in Spanish, Cristóbal Colón.
In one of his writings, he says he went to sea at 14. In 1470, the family moved to Savona, where Domenico took over a tavern. Some modern authors have argued that he was not from Genoa, but from the Aragon region of Spain or from Portugal. These competing hypotheses have been discounted by most scholars.
In 1473, Columbus began his apprenticeship as business agent for the wealthy Spinola, Centurione, and Di Negro families of Genoa. Later, he made a trip to the Greek island Chios in the Aegean Sea, then ruled by Genoa. In May 1476, he took part in an armed convoy sent by Genoa to carry valuable cargo to northern Europe. He probably visited Bristol, England, and Galway, Ireland, where he may have visited St. Nicholas' Collegiate Church. It has been speculated he went to Iceland in 1477, though many scholars doubt this. It is known that in the autumn of 1477, he sailed on a Portuguese ship from Galway to Lisbon, where he found his brother Bartholomew, and they continued trading for the Centurione family. Columbus based himself in Lisbon from 1477 to 1485. In 1478, the Centuriones sent Columbus on a sugar-buying trip to Madeira. He married Felipa Perestrello e Moniz, daughter of Bartolomeu Perestrello, a Portuguese nobleman of Lombard origin, who had been the donatary captain of Porto Santo.
In 1479 or 1480, Columbus's son Diego was born. Between 1482 and 1485, Columbus traded along the coasts of West Africa, reaching the Portuguese trading post of Elmina at the Guinea coast in present-day Ghana. Before 1484, Columbus returned to Porto Santo to find that his wife had died. He returned to Portugal to settle her estate and take Diego with him.
He left Portugal for Castile in 1485, where he took a mistress in 1487, a 20-year-old orphan named Beatriz Enríquez de Arana. It is likely that Beatriz met Columbus when he was in Córdoba, a gathering place for Genoese merchants and where the court of the Catholic Monarchs was located at intervals. Beatriz, unmarried at the time, gave birth to Columbus's second son, Fernando Columbus, in July 1488, named for the monarch of Aragon. Columbus recognized the boy as his offspring. Columbus entrusted his older, legitimate son Diego to take care of Beatriz and pay the pension set aside for her following his death, but Diego was negligent in his duties.
Columbus learned Latin, Portuguese, and Castilian. He read widely about astronomy, geography, and history, including the works of Ptolemy, Pierre d'Ailly's Imago Mundi, the travels of Marco Polo and Sir John Mandeville, Pliny's Natural History, and Pope Pius II's Historia rerum ubique gestarum. According to historian Edmund Morgan,
Columbus was not a scholarly man. Yet he studied these books, made hundreds of marginal notations in them and came out with ideas about the world that were characteristically simple and strong and sometimes wrong ...
Quest for Asia
Background
Under the Mongol Empire's hegemony over Asia and the Pax Mongolica, Europeans had long enjoyed a safe land passage on the Silk Road to India, parts of East Asia, including China and Maritime Southeast Asia, which were sources of valuable goods. With the fall of Constantinople to the Ottoman Empire in 1453, the Silk Road was closed to Christian traders.
In 1474, the Florentine astronomer Paolo dal Pozzo Toscanelli suggested to King Afonso V of Portugal that sailing west across the Atlantic would be a quicker way to reach the Maluku (Spice) Islands, China, Japan and India than the route around Africa, but Afonso rejected his proposal. In the 1480s, Columbus and his brother proposed a plan to reach the East Indies by sailing west. Columbus supposedly wrote Toscanelli in 1481 and received encouragement, along with a copy of a map the astronomer had sent Afonso implying that a westward route to Asia was possible. Columbus's plans were complicated by Bartolomeu Dias's rounding of the Cape of Good Hope in 1488, which suggested the Cape Route around Africa to Asia.
Carol Delaney and other commentators have argued that Columbus was a Christian millennialist and apocalypticist and that these beliefs motivated his quest for Asia in a variety of ways. Columbus often wrote about seeking gold in the log books of his voyages and about acquiring it "in such quantity that the sovereigns... will undertake and prepare to go conquer the Holy Sepulcher" in a fulfillment of Biblical prophecy. Columbus often wrote about converting all races to Christianity. Abbas Hamandi argues that Columbus was motivated by the hope of "[delivering] Jerusalem from Muslim hands" by "using the resources of newly discovered lands".
Geographical considerations
Despite a popular misconception to the contrary, nearly all educated Westerners of Columbus's time knew that the Earth is spherical, a concept that had been understood since antiquity. The techniques of celestial navigation, which uses the position of the Sun and the stars in the sky, had long been in use by astronomers and were beginning to be implemented by mariners.
As far back as the 3rd century BC, Eratosthenes had correctly computed the circumference of the Earth by using simple geometry and studying the shadows cast by objects at two remote locations. In the 1st century BC, Posidonius confirmed Eratosthenes's results by comparing stellar observations at two separate locations. These measurements were widely known among scholars, but Ptolemy's use of the smaller, old-fashioned units of distance led Columbus to underestimate the size of the Earth by about a third.
Three cosmographical parameters determined the bounds of Columbus's enterprise: the distance across the ocean between Europe and Asia, which depended on the extent of the oikumene, i.e., the Eurasian land-mass stretching east–west between Spain and China; the circumference of the Earth; and the number of miles or leagues in a degree of longitude, which was possible to deduce from the theory of the relationship between the size of the surfaces of water and the land as held by the followers of Aristotle in medieval times.
From Pierre d'Ailly's Imago Mundi (1410), Columbus learned of Alfraganus's estimate that a degree of latitude (equal to approximately a degree of longitude along the equator) spanned 56.67 Arabic miles (about 76.2 mi), but he did not realize that this was expressed in the longer Arabic mile rather than the shorter Roman mile (about 1,480 m) with which he was familiar. Columbus therefore estimated the size of the Earth to be about 75% of Eratosthenes's calculation, and the distance westward from the Canary Islands to the Indies as only 68 degrees (a 58% error).
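The scale of the unit confusion can be checked with rough arithmetic. The sketch below uses only the rounded figures quoted above plus an assumed reference circumference of about 40,000 km (a modern round number, not stated in this section), so it is an illustration rather than a reconstruction of Columbus's own reckoning.

```python
# Back-of-the-envelope check of the degree-length confusion. The 40,000 km
# reference circumference is an assumption; the other figures are quoted above.

MILES_PER_DEGREE = 56.67             # Alfraganus's estimate for one degree
ROMAN_MILE_KM = 1.480                # ~1,480 m, the unit Columbus assumed
REFERENCE_CIRCUMFERENCE_KM = 40_000  # rough modern/Eratosthenic value (assumed)

degree_km = MILES_PER_DEGREE * ROMAN_MILE_KM    # ~83.9 km per degree
implied_circumference_km = 360 * degree_km      # ~30,200 km

print(f"implied circumference: {implied_circumference_km:,.0f} km")
print(f"fraction of reference: {implied_circumference_km / REFERENCE_CIRCUMFERENCE_KM:.0%}")
# Prints roughly 75%, consistent with the "about 75%" figure in the text.
```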
Most scholars of the time accepted Ptolemy's estimate that Eurasia spanned 180° longitude, rather than the actual 130° (to the Chinese mainland) or 150° (to Japan at the latitude of Spain). Columbus believed an even higher estimate, leaving a smaller percentage for water. In d'Ailly's Imago Mundi, Columbus read Marinus of Tyre's estimate that the longitudinal span of Eurasia was 225° at the latitude of Rhodes. Some historians, such as Samuel Morison, have suggested that he followed the statement in the apocryphal book 2 Esdras (6:42) that "six parts [of the globe] are habitable and the seventh is covered with water." He was also aware of Marco Polo's claim that Japan (which he called "Cipangu") lay some way to the east of China ("Cathay"), and closer to the equator than it is. He was influenced by Toscanelli's idea that there were inhabited islands even farther to the east than Japan, including the mythical Antillia, which he thought might lie not much farther to the west than the Azores.
Based on his sources, Columbus estimated a distance of about 2,400 nautical miles from the Canary Islands west to Japan; the actual distance is roughly four times as great. No ship in the 15th century could have carried enough food and fresh water for such a long voyage, and the dangers involved in navigating through the uncharted ocean would have been formidable. Most European navigators reasonably concluded that a westward voyage from Europe to Asia was unfeasible. The Catholic Monarchs, however, having completed the Reconquista, an expensive war against the Moors in the Iberian Peninsula, were eager to obtain a competitive edge over other European countries in the quest for trade with the Indies. Columbus's project, though far-fetched, held the promise of such an advantage.
Nautical considerations
Though Columbus was wrong about the number of degrees of longitude that separated Europe from the Far East and about the distance that each degree represented, he did take advantage of the trade winds, which would prove to be the key to his successful navigation of the Atlantic Ocean. He planned to first sail to the Canary Islands before continuing west with the northeast trade wind. Part of the return to Spain would require traveling against the wind using an arduous sailing technique called beating, during which progress is made very slowly. To effectively make the return voyage, Columbus would need to follow the curving trade winds northeastward to the middle latitudes of the North Atlantic, where he would be able to catch the "westerlies" that blow eastward to the coast of Western Europe.
The navigational technique for travel in the Atlantic appears to have been exploited first by the Portuguese, who referred to it as the volta do mar ('turn of the sea'). Through his marriage to his first wife, Felipa Perestrello, Columbus had access to the nautical charts and logs that had belonged to her deceased father, Bartolomeu Perestrello, who had served as a captain in the Portuguese navy under Prince Henry the Navigator. In the mapmaking shop where he worked with his brother Bartholomew, Columbus also had ample opportunity to hear the stories of old seamen about their voyages to the western seas, but his knowledge of the Atlantic wind patterns was still imperfect at the time of his first voyage. By sailing due west from the Canary Islands during hurricane season, skirting the so-called horse latitudes of the mid-Atlantic, he risked being becalmed and running into a tropical cyclone, both of which he avoided by chance.
Quest for financial support for a voyage
By about 1484, Columbus proposed his planned voyage to King John II of Portugal. The king submitted Columbus's proposal to his advisors, who rejected it, correctly, on the grounds that Columbus's estimate for a voyage of 2,400 nmi was only a quarter of what it should have been. In 1488, Columbus again appealed to the court of Portugal, and John II again granted him an audience. That meeting also proved unsuccessful, in part because not long afterwards Bartolomeu Dias returned to Portugal with news of his successful rounding of the southern tip of Africa (near the Cape of Good Hope).
Columbus sought an audience with the monarchs Ferdinand II of Aragon and Isabella I of Castile, who had united several kingdoms in the Iberian Peninsula by marrying and now ruled together. On 1 May 1486, permission having been granted, Columbus presented his plans to Queen Isabella, who, in turn, referred it to a committee. The learned men of Spain, like their counterparts in Portugal, replied that Columbus had grossly underestimated the distance to Asia. They pronounced the idea impractical and advised the Catholic Monarchs to pass on the proposed venture. To keep Columbus from taking his ideas elsewhere, and perhaps to keep their options open, the sovereigns gave him an allowance, totaling about 14,000 maravedis for the year, or about the annual salary of a sailor. In May 1489, the queen sent him another 10,000 maravedis, and the same year the monarchs furnished him with a letter ordering all cities and towns under their dominion to provide him food and lodging at no cost.
Columbus also dispatched his brother Bartholomew to the court of Henry VII of England to inquire whether the English crown might sponsor his expedition, but he was captured by pirates en route, and only arrived in early 1491. By that time, Columbus had retreated to La Rábida Friary, where the Spanish crown sent him 20,000 maravedis to buy new clothes and instructions to return to the Spanish court for renewed discussions.
Agreement with the Spanish crown
Columbus waited at King Ferdinand's camp until Ferdinand and Isabella conquered Granada, the last Muslim stronghold on the Iberian Peninsula, in January 1492. A council led by Isabella's confessor, Hernando de Talavera, found Columbus's proposal to reach the Indies implausible. Columbus had left for France when Ferdinand intervened, first sending Talavera and Bishop Diego Deza to appeal to the queen. Isabella was finally convinced by the king's clerk Luis de Santángel, who argued that Columbus would take his ideas elsewhere, and offered to help arrange the funding. Isabella then sent a royal guard to fetch Columbus, who had traveled 2 leagues (over 10 km) toward Córdoba.
In the April 1492 "Capitulations of Santa Fe", King Ferdinand and Queen Isabella promised Columbus that if he succeeded he would be given the rank of Admiral of the Ocean Sea and appointed Viceroy and Governor of all the new lands he might claim for Spain. He had the right to nominate three persons, from whom the sovereigns would choose one, for any office in the new lands. He would be entitled to 10% (diezmo) of all the revenues from the new lands in perpetuity. He also would have the option of buying one-eighth interest in any commercial venture in the new lands, and receive one-eighth (ochavo) of the profits.
In 1500, during his third voyage to the Americas, Columbus was arrested and dismissed from his posts. He and his sons, Diego and Fernando, then conducted a lengthy series of court cases against the Castilian crown, known as the pleitos colombinos, alleging that the Crown had illegally reneged on its contractual obligations to Columbus and his heirs. The Columbus family had some success in their first litigation, as a judgment of 1511 confirmed Diego's position as viceroy but reduced his powers. Diego resumed litigation in 1512, which lasted until 1536, and further disputes initiated by heirs continued until 1790.
Voyages
Between 1492 and 1504, Columbus completed four round-trip voyages between Spain and the Americas, each voyage being sponsored by the Crown of Castile. On his first voyage he reached the Americas, initiating the European exploration and colonization of the continent, as well as the Columbian exchange. His role in history is thus important to the Age of Discovery, Western history, and human history writ large.
In Columbus's letter on the first voyage, published following his first return to Spain, he claimed that he had reached Asia, as previously described by Marco Polo and other Europeans. Over his subsequent voyages, Columbus refused to acknowledge that the lands he visited and claimed for Spain were not part of Asia, in the face of mounting evidence to the contrary. This might explain, in part, why the American continent was named after the Florentine explorer Amerigo Vespucci—who received credit for recognizing it as a "New World"—and not after Columbus.
First voyage (1492–1493)
On the evening of 3 August 1492, Columbus departed from Palos de la Frontera with three ships. The largest was a carrack, the Santa María, owned and captained by Juan de la Cosa, and under Columbus's direct command. The other two were smaller caravels, the Pinta and the Niña, piloted by the Pinzón brothers. Columbus first sailed to the Canary Islands, where he restocked provisions and made repairs, then departed from San Sebastián de La Gomera on 6 September for what turned out to be a five-week voyage across the ocean.
On 7 October, the crew spotted "[i]mmense flocks of birds". On 11 October, Columbus changed the fleet's course to due west, and sailed through the night, believing land was soon to be found. At around 02:00 the following morning, a lookout on the Pinta, Rodrigo de Triana, spotted land. The captain of the Pinta, Martín Alonso Pinzón, verified the sight of land and alerted Columbus. Columbus later maintained that he had already seen a light on the land a few hours earlier, thereby claiming for himself the lifetime pension promised by Ferdinand and Isabella to the first person to sight land. Columbus called this island (in what is now the Bahamas) San Salvador (meaning "Holy Savior"); the natives called it Guanahani. Christopher Columbus's journal entry of 12 October 1492 states:

I saw some who had marks of wounds on their bodies and I made signs to them asking what they were; and they showed me how people from other islands nearby came there and tried to take them, and how they defended themselves; and I believed and believe that they come here from tierra firme to take them captive. They should be good and intelligent servants, for I see that they say very quickly everything that is said to them; and I believe they would become Christians very easily, for it seemed to me that they had no religion. Our Lord pleasing, at the time of my departure I will take six of them from here to Your Highnesses in order that they may learn to speak.

Columbus called the inhabitants of the lands that he visited Los Indios (Spanish for "Indians"). He initially encountered the Lucayan, Taíno, and Arawak peoples. Noting their gold ear ornaments, Columbus took some of the Arawaks prisoner and insisted that they guide him to the source of the gold. Columbus did not believe he needed to create a fortified outpost, writing, "the people here are simple in war-like matters ... I could conquer the whole of them with fifty men, and govern them as I pleased." The Taínos told Columbus that another indigenous tribe, the Caribs, were fierce warriors and cannibals, who made frequent raids on the Taínos, often capturing their women, although this may have been a belief perpetuated by the Spaniards to justify enslaving them.
Columbus also explored the northeast coast of Cuba, where he landed on 28 October. On the night of 26 November, Martín Alonso Pinzón took the Pinta on an unauthorized expedition in search of an island called "Babeque" or "Baneque", which the natives had told him was rich in gold. Columbus, for his part, continued to the northern coast of Hispaniola, where he landed on 6 December. There, the Santa María ran aground on 25 December 1492 and had to be abandoned. The wreck was used as a target for cannon fire to impress the native peoples. Columbus was received by the native cacique Guacanagari, who gave him permission to leave some of his men behind. Columbus left 39 men, including the interpreter Luis de Torres, and founded the settlement of La Navidad, in present-day Haiti. Columbus took more natives prisoner and continued his exploration. He kept sailing along the northern coast of Hispaniola with a single ship until he encountered Pinzón and the Pinta on 6 January.
On 13 January 1493, Columbus made his last stop of this voyage in the Americas, in the Bay of Rincón in northeast Hispaniola. There he encountered the Ciguayos, the only natives who offered violent resistance during this voyage. The Ciguayos refused to trade the number of bows and arrows that Columbus desired; in the ensuing clash one Ciguayo was stabbed in the buttocks and another wounded with an arrow in his chest. Because of these events, Columbus called the inlet the Golfo de Las Flechas (Bay of Arrows).
Columbus headed for Spain on the Niña, but a storm separated him from the Pinta, and forced the Niña to stop at the island of Santa Maria in the Azores. Half of his crew went ashore to say prayers of thanksgiving in a chapel for having survived the storm. But while praying, they were imprisoned by the governor of the island, ostensibly on suspicion of being pirates. After a two-day standoff, the prisoners were released, and Columbus again set sail for Spain.
Another storm forced Columbus into the port at Lisbon. From there he went to Vale do Paraíso north of Lisbon to meet King John II of Portugal, who told Columbus that he believed the voyage to be in violation of the 1479 Treaty of Alcáçovas. After spending more than a week in Portugal, Columbus set sail for Spain. Returning to Palos on 15 March 1493, he was given a hero's welcome and soon afterward received by Isabella and Ferdinand in Barcelona.
Columbus's letter on the first voyage, dispatched to the Spanish court, was instrumental in spreading the news throughout Europe about his voyage. Almost immediately after his arrival in Spain, printed versions began to appear, and word of his voyage spread rapidly. Most people initially believed that he had reached Asia. The Bulls of Donation, three papal bulls of Pope Alexander VI delivered in 1493, purported to grant overseas territories to Portugal and the Catholic Monarchs of Spain. They were replaced by the Treaty of Tordesillas of 1494.
The two earliest published copies of Columbus's letter on the first voyage aboard the Niña were donated in 2017 by the Jay I. Kislak Foundation to the University of Miami library in Coral Gables, Florida, where they are housed.
Second voyage (1493–1496)
On 24 September 1493, Columbus sailed from Cádiz with 17 ships, and supplies to establish permanent colonies in the Americas. He sailed with nearly 1,500 men, including sailors, soldiers, priests, carpenters, stonemasons, metalworkers, and farmers. Among the expedition members were Alvarez Chanca, a physician who wrote a detailed account of the second voyage; Juan Ponce de León, the first governor of Puerto Rico and Florida; the father of Bartolomé de las Casas; Juan de la Cosa, a cartographer who is credited with making the first world map depicting the New World; and Columbus's youngest brother Diego. The fleet stopped at the Canary Islands to take on more supplies, and set sail again on 7 October, deliberately taking a more southerly course than on the first voyage.
On 3 November, they arrived in the Windward Islands; the first island they encountered was named Dominica by Columbus, but not finding a good harbor there, they anchored off a nearby smaller island, which he named Mariagalante, now a part of Guadeloupe and called Marie-Galante. Other islands named by Columbus on this voyage were Montserrat, Antigua, Saint Martin, the Virgin Islands, as well as many others.
On 22 November, Columbus returned to Hispaniola to visit La Navidad, where 39 Spaniards had been left during the first voyage. Columbus found the fort in ruins, destroyed by the Taínos after some of the Spaniards reportedly antagonized their hosts with their unrestrained lust for gold and women. Columbus then established a poorly located and short-lived settlement to the east, La Isabela, in the present-day Dominican Republic.
From April to August 1494, Columbus explored Cuba and Jamaica, then returned to Hispaniola. By the end of 1494, disease and famine had killed two-thirds of the Spanish settlers. Columbus implemented encomienda, a Spanish labor system that rewarded conquerors with the labor of conquered non-Christian people.
Columbus executed Spanish colonists for minor crimes, and used dismemberment as punishment. Columbus and the colonists enslaved the indigenous people, including children. Natives were beaten, raped, and tortured for the location of imagined gold. Thousands committed suicide rather than face the oppression.
In February 1495, Columbus rounded up about 1,500 Arawaks, some of whom had rebelled, in a great slave raid. About 500 of the strongest were shipped to Spain as slaves, with about two hundred of those dying en route.
In June 1495, the Spanish crown sent ships and supplies to Hispaniola. In October, Florentine merchant Gianotto Berardi, who had won the contract to provision the fleet of Columbus's second voyage and to supply the colony on Hispaniola, received almost 40,000 maravedís worth of enslaved Indians. He renewed his effort to get supplies to Columbus, and was working to organize a fleet when he suddenly died in December. On 10 March 1496, having been away about 30 months, the fleet departed La Isabela. On 8 June the crew sighted land somewhere between Lisbon and Cape St. Vincent, and disembarked in Cádiz on 11 June.
Third voyage (1498–1500)
On 30 May 1498, Columbus left with six ships from Sanlúcar, Spain. The fleet called at Madeira and the Canary Islands, where it divided in two, with three ships heading for Hispaniola and the other three vessels, commanded by Columbus, sailing south to the Cape Verde Islands and then westward across the Atlantic. It is probable that this expedition was intended at least partly to confirm rumors of a large continent south of the Caribbean Sea, that is, South America.
On 31 July they sighted Trinidad, the most southerly of the Caribbean islands. On 5 August, Columbus sent several small boats ashore on the southern side of the Paria Peninsula in what is now Venezuela, near the mouth of the Orinoco river. This was the first recorded landing of Europeans on the mainland of South America, which Columbus realized must be a continent. The fleet then sailed to the islands of Chacachacare and Margarita, reaching the latter on 14 August, and sighted Tobago and Grenada from afar, according to some scholars.
On 19 August, Columbus returned to Hispaniola. There he found settlers in rebellion against his rule, and his unfulfilled promises of riches. Columbus had some of the Europeans tried for their disobedience; at least one rebel leader was hanged.
In October 1499, Columbus sent two ships to Spain, asking the Court of Spain to appoint a royal commissioner to help him govern. By this time, accusations of tyranny and incompetence on the part of Columbus had also reached the Court. The sovereigns sent Francisco de Bobadilla, a relative of Marquesa Beatriz de Bobadilla, a patron of Columbus and a close friend of Queen Isabella, to investigate the accusations of brutality made against the Admiral. Arriving in Santo Domingo while Columbus was away, Bobadilla was immediately met with complaints about all three Columbus brothers. He moved into Columbus's house and seized his property, took depositions from the Admiral's enemies, and declared himself governor.
Bobadilla reported to Spain that Columbus once punished a man found guilty of stealing corn by having his ears and nose cut off and then selling him into slavery. He claimed that Columbus regularly used torture and mutilation to govern Hispaniola. Testimony recorded in the report stated that Columbus congratulated his brother Bartholomew on "defending the family" when the latter ordered a woman paraded naked through the streets and then had her tongue cut because she had "spoken ill of the admiral and his brothers". The document also describes how Columbus put down native unrest and revolt: he first ordered a brutal suppression of the uprising in which many natives were killed, and then paraded their dismembered bodies through the streets in an attempt to discourage further rebellion. Columbus vehemently denied the charges. The neutrality and accuracy of the accusations and investigations of Bobadilla toward Columbus and his brothers have been disputed by historians, given the anti-Italian sentiment of the Spaniards and Bobadilla's desire to take over Columbus's position.
In early October 1500, Columbus and Diego presented themselves to Bobadilla, and were put in chains aboard La Gorda, the caravel on which Bobadilla had arrived at Santo Domingo. They were returned to Spain, and languished in jail for six weeks before King Ferdinand ordered their release. Not long after, the king and queen summoned the Columbus brothers to the Alhambra palace in Granada. The sovereigns expressed indignation at the actions of Bobadilla, who was then recalled and ordered to make restitutions of the property he had confiscated from Columbus. The royal couple heard the brothers' pleas; restored their freedom and wealth; and, after much persuasion, agreed to fund Columbus's fourth voyage. However, Nicolás de Ovando was to replace Bobadilla and be the new governor of the West Indies.
New light was shed on the seizure of Columbus and his brother Bartholomew, the Adelantado, with the discovery by archivist Isabel Aguirre of an incomplete copy of the testimonies against them gathered by Francisco de Bobadilla at Santo Domingo in 1500. She found a manuscript copy of this pesquisa (inquiry) in the Archive of Simancas, Spain, uncatalogued until she and Consuelo Varela published their book, La caída de Cristóbal Colón: el juicio de Bobadilla (The fall of Christopher Colón: the judgement of Bobadilla) in 2006.
Fourth voyage (1502–1504)
On 9 May 1502, Columbus left Cádiz with his flagship Santa María and three other vessels. The ships were crewed by 140 men, including his brother Bartholomew as second in command and his son Fernando. He sailed to Asilah on the Moroccan coast to rescue Portuguese soldiers said to be besieged by the Moors. The siege had been lifted by the time they arrived, so the Spaniards stayed only a day and continued on to the Canary Islands.
On 15 June, the fleet arrived at Martinique, where it lingered for several days. A hurricane was forming, so Columbus continued westward, hoping to find shelter on Hispaniola. He arrived at Santo Domingo on 29 June, but was denied port, and the new governor, Nicolás de Ovando, refused to listen to his warning that a hurricane was approaching. Instead, while Columbus's ships sheltered at the mouth of the Rio Jaina, the first Spanish treasure fleet sailed into the hurricane. Columbus's ships survived with only minor damage, while 20 of the 30 ships in the governor's fleet were lost along with 500 lives (including that of Francisco de Bobadilla). Although a few surviving ships managed to straggle back to Santo Domingo, Aguja, the fragile ship carrying Columbus's personal belongings and his 4,000 pesos in gold, was the sole vessel to reach Spain. The gold was his tenth (décimo) of the profits from Hispaniola, equal to 240,000 maravedis, guaranteed by the Catholic Monarchs in 1492.
After a brief stop at Jamaica, Columbus sailed to Central America, arriving at the coast of Honduras on 30 July. Here Bartholomew found native merchants and a large canoe. On 14 August, Columbus landed on the continental mainland at Punta Caxinas, now Puerto Castilla, Honduras. He spent two months exploring the coasts of Honduras, Nicaragua, and Costa Rica, seeking a strait in the western Caribbean through which he could sail to the Indian Ocean. Sailing south along the Nicaraguan coast, he found a channel that led into Almirante Bay in Panama on 5 October.
As soon as his ships anchored in Almirante Bay, Columbus encountered Ngäbe people in canoes who were wearing gold ornaments. In January 1503, he established a garrison at the mouth of the Belén River. Columbus left for Hispaniola on 16 April. On 10 May he sighted the Cayman Islands, naming them "Las Tortugas" after the numerous sea turtles there. His ships sustained damage in a storm off the coast of Cuba. Unable to travel farther, on 25 June 1503 they were beached in Saint Ann Parish, Jamaica.
Columbus and 230 of his men remained stranded on Jamaica for a year. Diego Méndez de Segura, who had shipped out as a personal secretary to Columbus, and a Spanish shipmate called Bartolomé Flisco, along with six natives, paddled a canoe to get help from Hispaniola. The governor, Nicolás de Ovando y Cáceres, detested Columbus and obstructed all efforts to rescue him and his men. In the meantime Columbus, in a desperate effort to induce the natives to continue provisioning him and his hungry men, won their favor by predicting a lunar eclipse for 29 February 1504, using Abraham Zacuto's astronomical charts. Despite the governor's obstruction, Christopher Columbus and his men were rescued on 28 June 1504, and arrived in Sanlúcar, Spain, on 7 November.
Later life, illness, and death
Columbus had always claimed that the conversion of non-believers was one reason for his explorations, and he grew increasingly religious in his later years. Probably with the assistance of his son Diego and his friend the Carthusian monk Gaspar Gorricio, Columbus produced two books during his later years: a Book of Privileges (1502), detailing and documenting the rewards from the Spanish Crown to which he believed he and his heirs were entitled, and a Book of Prophecies (1505), in which passages from the Bible were used to place his achievements as an explorer in the context of Christian eschatology.
In his later years, Columbus demanded that the Crown of Castile give him his tenth of all the riches and trade goods yielded by the new lands, as stipulated in the Capitulations of Santa Fe. Because he had been relieved of his duties as governor, the Crown did not feel bound by that contract and his demands were rejected. After his death, his heirs sued the Crown for a part of the profits from trade with America, as well as other rewards. This led to a protracted series of legal disputes known as the pleitos colombinos ("Columbian lawsuits").
During a violent storm on his first return voyage, Columbus, then 41, had suffered an attack of what was believed at the time to be gout. In subsequent years, he was plagued with what was thought to be influenza and other fevers, bleeding from the eyes, temporary blindness and prolonged attacks of gout. The attacks increased in duration and severity, sometimes leaving Columbus bedridden for months at a time, and culminated in his death 14 years later.
Based on Columbus's lifestyle and the described symptoms, some modern commentators suspect that he suffered from reactive arthritis, rather than gout. Reactive arthritis is a joint inflammation caused by intestinal bacterial infections or after acquiring certain sexually transmitted diseases (primarily chlamydia or gonorrhea). In 2006, Frank C. Arnett, a medical doctor, and historian Charles Merrill published their paper in The American Journal of the Medical Sciences proposing that Columbus had a form of reactive arthritis; Merrill made the case in that same paper that Columbus was the son of Catalans and his mother possibly a member of a prominent converso (converted Jew) family. "It seems likely that [Columbus] acquired reactive arthritis from food poisoning on one of his ocean voyages because of poor sanitation and improper food preparation", says Arnett, a rheumatologist and professor of internal medicine, pathology and laboratory medicine at the University of Texas Medical School at Houston.
Some historians such as H. Micheal Tarver and Emily Slape, as well as medical doctors such as Arnett and Antonio Rodríguez Cuartero, believe that Columbus had such a form of reactive arthritis, but according to other authorities, this is "speculative", or "very speculative".
After his arrival to Sanlúcar from his fourth voyage (and Queen Isabella's death), an ill Columbus settled in Seville in April 1505. He stubbornly continued to make pleas to the Crown to defend his own personal privileges and his family's. He moved to Segovia (where the court was at the time) on a mule by early 1506, and, on the occasion of the wedding of King Ferdinand with Germaine of Foix in Valladolid, Spain, in March 1506, Columbus moved to that city to persist with his demands. On 20 May 1506, aged 54, Columbus died in Valladolid.
Location of remains
Columbus's remains were first buried at a convent in Valladolid, then moved to the monastery of La Cartuja in Seville (southern Spain) by the will of his son Diego. They may have been exhumed in 1513 and interred at the Seville Cathedral. In about 1536, the remains of both Columbus and his son Diego were moved to a cathedral in Colonial Santo Domingo, in the present-day Dominican Republic; Columbus had requested to be buried on the island. By some accounts, in 1793, when France took over the entire island of Hispaniola, Columbus's remains were moved to Havana, Cuba. After Cuba became independent following the Spanish–American War in 1898, at least some of these remains were moved back to the Seville Cathedral, where they were placed on an elaborate catafalque.
In June 2003, DNA samples were taken from these remains as well as those of Columbus's brother Diego and younger son Fernando. Initial observations suggested that the bones did not appear to match Columbus's physique or age at death. DNA extraction proved difficult; only short fragments of mitochondrial DNA could be isolated. These matched corresponding DNA from Columbus's brother, supporting that both individuals had shared the same mother. Such evidence, together with anthropologic and historic analyses, led the researchers to conclude that the remains belonged to Christopher Columbus.
In 1877, a priest discovered a lead box at Santo Domingo inscribed: "Discoverer of America, First Admiral". Inscriptions found the next year read "Last of the remains of the first admiral, Sire Christopher Columbus, discoverer." The box contained bones of an arm and a leg, as well as a bullet. These remains were considered legitimate by physician and U.S. Assistant Secretary of State John Eugene Osborne, who suggested in 1913 that they travel through the Panama Canal as a part of its opening ceremony. These remains were kept at the Basilica Cathedral of Santa María la Menor (in the Colonial City of Santo Domingo) before being moved to the Columbus Lighthouse (Santo Domingo Este, inaugurated in 1992). The authorities in Santo Domingo have never allowed these remains to be DNA-tested, so it is unconfirmed whether they are from Columbus's body as well.
Commemoration
The figure of Columbus was not ignored in the British colonies during the colonial era: Columbus became a unifying symbol early in the history of the colonies that became the United States when Puritan preachers began to use his life story as a model for a "developing American spirit". In the spring of 1692, Puritan preacher Cotton Mather described Columbus's voyage as one of three shaping events of the modern age, connecting Columbus's voyage and the Puritans' migration to North America, seeing them together as the key to a grand design.
The use of Columbus as a founding figure of New World nations spread rapidly after the American Revolution. This was out of a desire to develop a national history and founding myth with fewer ties to Britain. His name was the basis for the female national personification of the United States, Columbia, in use since the 1730s with reference to the original Thirteen Colonies, and also a historical name applied to the Americas and to the New World. Columbia, South Carolina and Columbia Rediviva, the ship for which the Columbia River was named, are named for Columbus.
Columbus's name was given to the newly born Republic of Colombia in the early 19th century, inspired by the political project of "Colombeia" developed by revolutionary Francisco de Miranda, which was put at the service of the emancipation of continental Hispanic America.
To commemorate the 400th anniversary of the landing of Columbus, the 1893 World's Fair in Chicago was named the World's Columbian Exposition. The U.S. Postal Service issued the first U.S. commemorative stamps, the Columbian Issue, depicting Columbus, Queen Isabella and others in various stages of his several voyages. In Spain, policies celebrating the Spanish colonial empire as the vehicle of a nationalist project, undertaken during the Restoration in the late 19th century, took form with the commemoration of the fourth centenary on 12 October 1892, during which the Conservative government extolled the figure of Columbus; that date eventually became Spain's national day. Several monuments commemorating the "discovery" were erected in cities such as Palos, Barcelona, Granada, Madrid, Salamanca, Valladolid and Seville in the years around the 400th anniversary.
For the Columbus Quincentenary in 1992, a second Columbian issue was released jointly with Italy, Portugal, and Spain. Columbus was celebrated at Seville Expo '92, and Genoa Expo '92.
The Boal Mansion Museum, founded in 1951, contains a collection of materials concerning later descendants of Columbus and collateral branches of the family. It features a 16th-century chapel from a Spanish castle reputedly owned by Diego Colón which became the residence of Columbus's descendants. The chapel interior was dismantled and moved from Spain in 1909 and re-erected on the Boal estate at Boalsburg, Pennsylvania. Inside it are numerous religious paintings and other objects including a reliquary with fragments of wood supposedly from the True Cross. The museum also holds a collection of documents mostly relating to Columbus descendants of the late 18th and early 19th centuries.
In many countries of the Americas, as well as Spain and Italy, Columbus Day celebrates the anniversary of Columbus's arrival in the Americas on 12 October 1492.
Legacy
The voyages of Columbus are considered a turning point in human history, marking the beginning of globalization and accompanying demographic, commercial, economic, social, and political changes.
His explorations resulted in permanent contact between the two hemispheres, and the term "pre-Columbian" is used to refer to the cultures of the Americas before the arrival of Columbus and his European successors. The ensuing Columbian exchange saw the massive exchange of animals, plants, fungi, diseases, technologies, mineral wealth and ideas.
In the first century after his endeavors, Columbus's figure largely languished in the backwaters of history, and his reputation was beset by his failures as a colonial administrator. His legacy was somewhat rescued from oblivion when he began to appear as a character in Italian and Spanish plays and poems from the late 16th century onward.
Columbus was subsumed into the Western narrative of colonization and empire building, which invoked notions of translatio imperii and translatio studii to underline who was considered "civilized" and who was not.
The Americanization of the figure of Columbus began in the latter decades of the 18th century, after the revolutionary period of the United States, elevating the status of his reputation to a national myth, homo americanus. His landing became a powerful icon as an "image of American genesis". The Discovery of America sculpture, depicting Columbus and a cowering Indian maiden, was commissioned on 3 April 1837, when U.S. President Martin Van Buren sanctioned the engineering of Luigi Persico's design. This representation of Columbus's triumph and the Indian's recoil is a demonstration of white superiority over savage, naive Indians. As recorded during its unveiling in 1844, the sculpture extends to "represent the meeting of the two races", as Persico captures their first interaction, highlighting the "moral and intellectual inferiority" of Indians. Placed outside the U.S. Capitol building where it remained until its removal in the mid-20th century, the sculpture reflected the contemporary view of whites in the U.S. toward the Natives; they are labeled "merciless Indian savages" in the United States Declaration of Independence. In 1836, Pennsylvania senator and future U.S. President James Buchanan, who proposed the sculpture, described it as representing "the great discoverer when he first bounded with ecstasy upon the shore, all his toils past, presenting a hemisphere to the astonished world, with the name America inscribed upon it. Whilst he is thus standing upon the shore, a female savage, with awe and wonder depicted in her countenance, is gazing upon him."
The American Columbus myth was reconfigured later in the century when he was enlisted as an ethnic hero by immigrants to the United States who were not of Anglo-Saxon stock, such as Jewish, Italian, and Irish people, who claimed Columbus as a sort of ethnic founding father. Catholics unsuccessfully tried to promote him for canonization in the 19th century.
From the 1990s onward, a narrative of Columbus being responsible for the genocide of indigenous peoples and environmental destruction began to compete with the then predominant discourse of Columbus as Christ-bearer, scientist, or father of America. This narrative features the negative effects of Columbus' conquests on native populations. Exposed to Old World diseases, the indigenous populations of the New World collapsed, and were largely replaced by Europeans and Africans, who brought with them new methods of farming, business, governance, and religious worship.
Originality of discovery of America
Though Christopher Columbus came to be considered the European discoverer of America in Western popular culture, his historical legacy is more nuanced. After settling Iceland, the Norse settled the uninhabited southern part of Greenland beginning in the 10th century. Norsemen are believed to have then set sail from Greenland and Iceland to become the first known Europeans to reach the North American mainland, nearly 500 years before Columbus reached the Caribbean. The 1960s discovery of a Norse settlement dating to c. 1000 AD at L'Anse aux Meadows, Newfoundland, partially corroborates accounts within the Icelandic sagas of Erik the Red's colonization of Greenland and his son Leif Erikson's subsequent exploration of a place he called Vinland.
In the 19th century, amid a revival of interest in Norse culture, Carl Christian Rafn and Benjamin Franklin DeCosta wrote works establishing that the Norse had preceded Columbus in colonizing the Americas. Following this, in 1874 Rasmus Bjørn Anderson argued that Columbus must have known of the North American continent before he started his voyage of discovery. Most modern scholars doubt Columbus had knowledge of the Norse settlements in America, with his arrival to the continent being most likely an independent discovery.
Europeans devised explanations for the origins of the Native Americans and their geographical distribution with narratives that often served to reinforce their own preconceptions built on ancient intellectual foundations. In modern Latin America, the non-Native populations of some countries often demonstrate an ambiguous attitude toward the perspectives of indigenous peoples regarding the so-called "discovery" by Columbus and the era of colonialism that followed.
In his 1960 monograph, Mexican philosopher and historian Edmundo O'Gorman explicitly rejects the Columbus discovery myth, arguing that the idea that Columbus discovered America was a misleading legend fixed in the public mind through the works of American author Washington Irving during the 19th century. O'Gorman argues that to assert Columbus "discovered America" is to shape the facts concerning the events of 1492 to make them conform to an interpretation that arose many years later. For him, the Eurocentric view of the discovery of America sustains systems of domination in ways that favor Europeans.
In a 1992 article for The UNESCO Courier, Félix Fernández-Shaw argues that the word "discovery" prioritizes European explorers as the "heroes" of the contact between the Old and New World. He suggests that the word "encounter" is more appropriate, being a more universal term which includes Native Americans in the narrative.
America as a distinct land
Historians have traditionally argued that Columbus remained convinced until his death that his journeys had been along the east coast of Asia as he originally intended (excluding arguments such as Anderson's). On his third voyage he briefly referred to South America as a "hitherto unknown" continent, while also rationalizing that it was the "Earthly Paradise" located "at the end of the Orient". Columbus continued to claim in his later writings that he had reached Asia; in a 1502 letter to Pope Alexander VI, he asserts that Cuba is the east coast of Asia. On the other hand, in a document in the Book of Privileges (1502), Columbus refers to the New World as the Indias Occidentales ('West Indies'), which he says "were unknown to all the world".
Shape of the Earth
Washington Irving's 1828 biography of Columbus popularized the idea that Columbus had difficulty obtaining support for his plan because many Catholic theologians insisted that the Earth was flat, but this is a popular misconception which can be traced back to 17th-century Protestants campaigning against Catholicism. In fact, the spherical shape of the Earth had been known to scholars since antiquity, and was common knowledge among sailors, including Columbus. Coincidentally, the oldest surviving globe of the Earth, the Erdapfel, was made in 1492, just before Columbus's return to Europe from his first voyage. As such it contains no sign of the Americas and yet demonstrates the common belief in a spherical Earth.
Making observations with a quadrant on his third voyage, Columbus inaccurately measured the polar radius of the North Star's diurnal motion to be five degrees, which was double the value of another erroneous reading he had made from further north. This led him to describe the figure of the Earth as pear-shaped, with the "stalk" portion ascending towards Heaven. In fact, the Earth is ever so slightly pear-shaped, with its "stalk" pointing north.
Criticism and defense
Columbus has been criticized both for his brutality and for initiating the depopulation of the indigenous peoples of the Caribbean, whether by imported diseases or intentional violence. According to scholars of Native American history, George Tinker and Mark Freedman, Columbus was responsible for creating a cycle of "murder, violence, and slavery" to maximize exploitation of the Caribbean islands' resources, and that Native deaths on the scale at which they occurred would not have been caused by new diseases alone. Further, they describe the proposition that disease and not genocide caused these deaths as "American holocaust denial". Historian Kris Lane disputes whether it is appropriate to use the term "genocide" when the atrocities were not Columbus's intent, but resulted from his decrees, family business goals, and negligence. Other scholars defend Columbus's actions or allege that the worst accusations against him are not based in fact while others claim that "he has been blamed for events far beyond his own reach or knowledge".
As a result of the protests and riots that followed the murder of George Floyd in 2020, many public monuments of Christopher Columbus have been removed.
Brutality
Some historians have criticized Columbus for initiating the widespread colonization of the Americas and for abusing its native population. On St. Croix, Columbus's friend Michele da Cuneo—according to his own account—kept an indigenous woman he captured, whom Columbus "gave to [him]", then brutally raped her.
According to some historians, the punishment for an indigenous person, aged 14 and older, failing to pay a hawk's bell, or cascabela, worth of gold dust every six months (based on Bartolomé de las Casas's account) was cutting off the hands of those without tokens, often leaving them to bleed to death. Other historians dispute such accounts. For example, a study of Spanish archival sources showed that the cascabela quotas were imposed by Guarionex, not Columbus, and that there is no mention, in the primary sources, of punishment by cutting off hands for failing to pay. Columbus had an economic interest in the enslavement of the Hispaniola natives and for that reason was not eager to baptize them, which attracted criticism from some churchmen. Consuelo Varela, a Spanish historian, stated that "Columbus's government was characterized by a form of tyranny. Even those who loved him had to admit the atrocities that had taken place." Other historians have argued that some of the accounts of the brutality of Columbus and his brothers have been exaggerated as part of the Black Legend, a historical tendency towards anti-Spanish and anti-Catholic sentiment in historical sources dating as far back as the 16th century, which they speculate may continue to taint scholarship into the present day.
According to historian Emily Berquist Soule, the immense Portuguese profits from the maritime trade in African slaves along the West African coast served as an inspiration for Columbus to create a counterpart of this apparatus in the New World using indigenous American slaves. Historian William J. Connell has argued that while Columbus "brought the entrepreneurial form of slavery to the New World", this "was a phenomenon of the times", further arguing that "we have to be very careful about applying 20th-century understandings of morality to the morality of the 15th century." In a less popular defense of colonization, Spanish ambassador María Jesús Figa López-Palop has argued, "Normally we melded with the cultures in America, we stayed there, we spread our language and culture and religion."
British historian Basil Davidson has dubbed Columbus the "father of the slave trade", citing the fact that the first license to ship enslaved Africans to the Caribbean was issued by the Catholic Monarchs in 1501 to the first royal governor of Hispaniola, Nicolás de Ovando.
Depopulation
Around the turn of the 21st century, estimates for the population of Hispaniola ranged between 250,000 and two million, but genetic analysis published in late 2020 suggests that smaller figures are more likely, perhaps as low as 10,000–50,000 for Hispaniola and Puerto Rico combined. Based on the previous figures of a few hundred thousand, some have estimated that a third or more of the natives in Haiti were dead within the first two years of Columbus's governorship. Contributors to depopulation included disease, warfare, and harsh enslavement. Indirect evidence suggests that some serious illness may have arrived with the 1,500 colonists who accompanied Columbus' second expedition in 1493. Charles C. Mann writes that "It was as if the suffering these diseases had caused in Eurasia over the past millennia were concentrated into the span of decades." A third of the natives forced to work in gold and silver mines died every six months. Within three to six decades, the surviving Arawak population numbered only in the hundreds. The indigenous population of the Americas overall is thought to have been reduced by about 90% in the century after Columbus's arrival. Among indigenous peoples, Columbus is often viewed as a key agent of genocide. Samuel Eliot Morison, a Harvard historian and author of a multivolume biography on Columbus, writes, "The cruel policy initiated by Columbus and pursued by his successors resulted in complete genocide."
According to Noble David Cook, "There were too few Spaniards to have killed the millions who were reported to have died in the first century after Old and New World contact." He instead estimates that the death toll was caused by smallpox, which may have caused a pandemic only after the arrival of Hernán Cortés in 1519. According to some estimates, smallpox had an 80–90% fatality rate in Native American populations. The natives had no acquired immunity to these new diseases and suffered high fatalities. There is also evidence that they had poor diets and were overworked. Historian Andrés Reséndez of University of California, Davis, says the available evidence suggests "slavery has emerged as major killer" of the indigenous populations of the Caribbean between 1492 and 1550 more so than diseases such as smallpox, influenza and malaria. He says that indigenous populations did not experience a rebound like European populations did following the Black Death because unlike the latter, a large portion of the former were subjected to deadly forced labor in the mines.
The diseases that devastated the Native Americans came in multiple waves at different times, sometimes as much as centuries apart, which would mean that survivors of one disease may have been killed by others, preventing the population from recovering. Historian David Stannard describes the depopulation of the indigenous Americans as "neither inadvertent nor inevitable", saying it was the result of both disease and intentional genocide.
Navigational expertise
Biographers and historians have a wide range of opinions about Columbus's expertise and experience navigating and captaining ships. One scholar lists some European works ranging from the 1890s to 1980s that support Columbus's experience and skill as among the best in Genoa, while listing some American works over a similar timeframe that portray the explorer as an untrained entrepreneur, having only minor crew or passenger experience prior to his noted journeys. According to Morison, Columbus's success in utilizing the trade winds might owe significantly to luck.
Physical appearance
Contemporary descriptions of Columbus, including those by his son Fernando and Bartolomé de las Casas, describe him as taller than average, with light skin (often sunburnt), blue or hazel eyes, high cheekbones and freckled face, an aquiline nose, and blond to reddish hair and beard (until about the age of 30, when it began to whiten). One Spanish commentator described his eyes using the word garzos, now usually translated as "light blue", but it seems to have indicated light grey-green or hazel eyes to Columbus's contemporaries. The word rubios can mean "blond", "fair", or "ruddy". Although an abundance of artwork depicts Columbus, no authentic contemporary portrait is known.
A well-known image of Columbus is a portrait by Sebastiano del Piombo, which has been reproduced in many textbooks. It agrees with descriptions of Columbus in that it shows a large man with auburn hair, but the painting dates from 1519 so cannot have been painted from life. Furthermore, the inscription identifying the subject as Columbus was probably added later, and the face shown differs from that of other images.
Sometime between 1531 and 1536, Alejo Fernández painted an altarpiece, The Virgin of the Navigators, that includes a depiction of Columbus. The painting was commissioned for a chapel in Seville's Casa de Contratación (House of Trade) in the Alcázar of Seville and remains there.
At the World's Columbian Exposition in 1893, 71 alleged portraits of Columbus were displayed; most of them did not match contemporary descriptions.
See also
Christopher Columbus in fiction
List of monuments and memorials to Christopher Columbus
Egg of Columbus
Diego Columbus
Ferdinand Columbus
Columbus's letter on the first voyage
Christopher Columbus House
History of the Americas
Peopling of the Americas
Lugares colombinos
Notes
References
Sources
Crosby, A.W. (1987) The Columbian Voyages: the Columbian Exchange, and their Historians. Washington, DC: American Historical Association.
Fuson, Robert H. (1992) The Log of Christopher Columbus. International Marine Publishing
Further reading
Wey, Gómez Nicolás (2008). The tropics of empire: Why Columbus sailed south to the Indies. Cambridge, MA: MIT Press.
Wilford, John Noble (1991), The Mysterious History of Columbus: An Exploration of the Man, the Myth, the Legacy, New York: Alfred A. Knopf.
External links
Journals and Other Documents on the Life and Voyages of Christopher Columbus, translated and edited by Samuel Eliot Morison in PDF format
Excerpts from the log of Christopher Columbus's first voyage
The Letter of Columbus to Luis de Sant Angel Announcing His Discovery
Columbus Monuments Pages (overview of monuments for Columbus all over the world)
"But for Columbus There Would Be No America", Tiziano Thomas Dossena, Bridgepugliausa.it, 2012.
1451 births
1506 deaths
1490s in Cuba
1490s in the Caribbean
1492 in North America
15th-century apocalypticists
15th-century explorers
15th-century Genoese people
16th-century Genoese people
15th-century Roman Catholics
Spanish exploration in the Age of Discovery
Burials at Seville Cathedral
Colonial governors of Santo Domingo
Christopher
Explorers of Central America
Italian expatriates in Spain
Italian explorers of North America
Italian explorers of South America
Italian people imprisoned abroad
Italian Roman Catholics
Explorers from the Republic of Genoa
16th-century diarists
Prisoners and detainees of Spain |
5637 | https://en.wikipedia.org/wiki/Cypress%20Hill | Cypress Hill | Cypress Hill is an American hip hop group from South Gate, California, formed in 1988. They have sold over 20 million albums worldwide, and they have obtained multi-platinum and platinum certifications. The group has been critically acclaimed for their first five albums. They are considered to be among the main progenitors of West Coast hip hop and 1990s hip hop. All of the group members advocate for medical and recreational use of cannabis in the United States. In 2019, Cypress Hill became the first hip hop group to have a star on the Hollywood Walk of Fame.
History
Formation (1988)
Senen Reyes (also known as Sen Dog) and Ulpiano Sergio Reyes (also known as Mellow Man Ace) are brothers born in Pinar del Río, Cuba. In 1971, their family immigrated to the United States and initially lived in South Gate, California. In 1988, the two brothers teamed up with New York City native Lawrence Muggerud (also known as DJ Muggs, previously in a rap group named 7A3) and Louis Freese (also known as B-Real) to form a hip-hop group named DVX (Devastating Vocal Excellence). The band soon lost Mellow Man Ace to a solo career, and changed their name to Cypress Hill, after a street in South Gate.
Mainstream success with Cypress Hill and Black Sunday, addition of Eric Bobo, and III: Temples of Boom (1989–1996)
After recording a demo in 1989, Cypress Hill signed a record deal with Ruffhouse Records. Their self-titled first album was released in August 1991. The lead single was the double A-side "The Phuncky Feel One"/"How I Could Just Kill a Man" which received heavy airplay on urban and college radio, most notably peaking at No. 1 on Billboard Hot Rap Tracks chart and at No. 77 on the Billboard Hot 100. The other two singles released from the album were "Hand on the Pump" and "Latin Lingo", the latter of which combined English and Spanish lyrics, a trait that was continued throughout their career. The success of these singles led Cypress Hill to sell two million copies in the U.S. alone, and it peaked at No. 31 on the Billboard 200 and was certified double platinum by the RIAA. In 1992, Cypress Hill's first contribution to a soundtrack was the song "Shoot 'Em Up" for the movie Juice. The group made their first appearance at Lollapalooza on the side stage in 1992. It was the festival's second year of touring, and featured a diverse lineup of acts such as Red Hot Chili Peppers, Ice Cube, Lush, Tool, Stone Temple Pilots, among others. The trio also supported the Cypress Hill album by touring with the Beastie Boys, who were touring behind their third album Check Your Head.
Black Sunday, the group's second album, debuted at No. 1 on the Billboard 200 in 1993, recording the highest Soundscan for a rap group up until that time. "Insane in the Brain" became a crossover hit, peaking at No. 19 on the Billboard Hot 100, at No. 16 on the Dance Club Songs chart, and at No. 1 on the Hot Rap Tracks chart. "Insane in the Brain" also garnered the group their first Grammy nomination. Black Sunday went triple platinum in the U.S. and sold about 3.26 million copies. Cypress Hill headlined the Soul Assassins tour with House of Pain and Funkdoobiest as support, then performed on a college tour with Rage Against the Machine and Seven Year Bitch. Also in 1993, Cypress Hill had two tracks on the Judgment Night soundtrack, teaming up with Pearl Jam (without vocalist Eddie Vedder) on the track "Real Thing" and Sonic Youth on "I Love You Mary Jane". The soundtrack was notable for intentionally creating collaborations between the rap/hip-hop and rock/metal genres, and as a result the soundtrack peaked at No. 17 on the Billboard 200 and was certified gold by the RIAA. On October 2, 1993, Cypress Hill performed on the comedy show Saturday Night Live, broadcast by NBC. Prior to their performances, studio executives, label representatives, and the group's own associates constantly asked the trio not to smoke marijuana on-stage. DJ Muggs became irritated by the repeated requests, and he subsequently lit a joint during the group's second song. Up until that point, it was extremely uncommon to see marijuana usage on a live televised broadcast. The incident prompted NBC to ban the group from returning to the show, a distinction shared only by six other artists.
The group later played at Woodstock 94, officially making percussionist Eric Bobo a member of the group during the performance. Eric Bobo was known as the son of Willie Bobo and as a touring member of the Beastie Boys, who Cypress Hill previously toured with in 1992. That same year, Rolling Stone named the group as the Best Rap Group in their music awards voted by critics and readers. Cypress Hill then played at Lollapalooza for two successive years, topping the bill in 1995. They also appeared on the "Homerpalooza" episode of The Simpsons. The group received their second Grammy nomination in 1995 for "I Ain't Goin' Out Like That".
Cypress Hill's third album III: Temples of Boom was released in 1995 as it peaked at No. 3 on the Billboard 200 and at No. 3 on the Canadian Albums Chart. The album was certified platinum by the RIAA. "Throw Your Set in the Air" was the most successful single off the album, peaking at No. 45 on the Billboard Hot 100 and No. 11 on the Hot Rap Tracks charts. The single also earned Cypress Hill's third Grammy nomination. Shortly after the release of III: Temples of Boom, Sen Dog became frustrated due to the rigorous touring schedule. Just prior to an overseas tour, he departed from the group unexpectedly. Cypress Hill continued their tours throughout 1995 and 1996, with Eric Bobo and also various guest vocalists covering Sen Dog's verses. Sen Dog later formed the rock band SX-10 to explore other musical genres. Later on in 1996, Cypress Hill appeared on the first Smokin' Grooves tour, featuring Ziggy Marley, The Fugees, Busta Rhymes, and A Tribe Called Quest. The group also released a nine track EP Unreleased and Revamped with rare mixes.
Focus on solo projects, IV, crossover appeal with Skull & Bones, and Stoned Raiders (1997–2002)
In 1997, the members focused on their solo careers. DJ Muggs released Soul Assassins: Chapter 1, with features from Dr. Dre, KRS-One, Wyclef Jean, and Mobb Deep. B-Real appeared with Busta Rhymes, Coolio, LL Cool J, and Method Man on "Hit 'Em High" from the multi-platinum Space Jam Soundtrack. He also appeared with RBX, Nas, and KRS-One on "East Coast Killer, West Coast Killer" from Dr. Dre's Dr. Dre Presents the Aftermath album, and contributed to an album entitled The Psycho Realm with the group of the same name. Sen Dog also released the Get Wood sampler as part of SX-10 on the label Flip Records. In addition, Eric Bobo contributed drums to various rock bands on their albums, such as 311 and Soulfly.
In early 1998, Sen Dog returned to Cypress Hill. He cited his therapist and also his creative collaborations with the band SX-10 as catalysts for his rejoining. The quartet then embarked on the third annual Smokin' Grooves tour with Public Enemy, Wyclef Jean, Busta Rhymes, and Gang Starr. Cypress Hill released IV in October 1998 which went gold in the U.S. and peaked at No. 11 on the Billboard 200. The lead single off the album was "Dr. Greenthumb", as it peaked at No. 11 on the Hot Rap Tracks chart. It also peaked at No. 70 on the Billboard Hot 100, their last appearance on the chart to date. In 1999, Cypress Hill helped with the PC first-person shooter video game Kingpin: Life of Crime. Three of the band's songs from the 1998 IV album were in the game; "16 Men Till There's No Men Left", "Checkmate", and "Lightning Strikes". The group also did voice work for some of the game's characters. Also in 1999, the band released a greatest hits album in Spanish, Los Grandes Éxitos en Español.
In 2000, Cypress Hill fused genres with their fifth album, Skull & Bones, which consisted of two discs. The first disc Skull was composed of rap tracks while Bones explored further the group's forays into rock. The album peaked at No. 5 on the Billboard 200 and at No. 3 on the Canadian Albums Chart, and the album was eventually certified platinum by the RIAA. The first two singles were "(Rock) Superstar" for rock radio and "(Rap) Superstar" for urban radio. Both singles received heavy airplay on both rock and urban radio, enabling Cypress Hill to crossover again. "(Rock) Superstar" peaked at No. 18 on the Modern Rock Tracks chart and "(Rap) Superstar" peaked at No. 43 on the Hot Rap Tracks chart.
Due to the rock genre's prominent appearance on Skull & Bones, Cypress Hill employed the members of Sen Dog's band SX-10 as backing musicians for the live shows. Cypress Hill supported Skull & Bones by initially playing a summer tour with Limp Bizkit and Cold called the Back 2 Basics Tour. The tour was controversial as it was sponsored by the file sharing service Napster. In addition, Napster enabled each show of the tour to be free to the fans, and no security guards were employed during the performances. After the tour's conclusion, the acts had not reported any disturbances. Towards the end of 2000, Cypress Hill and MxPx landed a slot opening for The Offspring on the Conspiracy of One Tour. The group also released Live at the Fillmore, a concert disc recorded at San Francisco's The Fillmore in 2000. Cypress Hill continued their experimentation with rock on the Stoned Raiders album in 2001; however, its sales were a disappointment. The album peaked at No. 64 on the Billboard 200, the group's lowest position to that point. Also in 2001, the group made a cameo appearance as themselves in the film How High. Cypress Hill then recorded the track "Just Another Victim" for WWF as a theme song for Tazz, borrowing elements from the 2000 single "(Rock) Superstar". The song would later be featured on the compilation WWF Forceable Entry in March 2002, which peaked at No. 3 on the Billboard 200 and was certified gold by the RIAA.
Till Death Do Us Part, DJ Muggs' hiatus, and extensive collaborations on Rise Up (2003–2012)
Cypress Hill released Till Death Do Us Part in March 2004 as it peaked at No. 21 on the Billboard 200. It featured appearances by Bob Marley's son Damian Marley, Prodigy of Mobb Deep, and producers The Alchemist and Fredwreck. The album represented a further departure from the group's signature sound. Reggae was a strong influence on its sound, especially on the lead single "What's Your Number?". The track featured Tim Armstrong of Rancid on guitar and backup vocals. It was based on the classic song "The Guns of Brixton" from The Clash's album London Calling. "What's Your Number?" saw Cypress Hill crossover into the rock charts again, as the single peaked at No. 23 on the Modern Rock Tracks chart.
Afterwards, DJ Muggs took a hiatus from the group to focus on other projects, such as Soul Assassins and his DJ Muggs vs. collaboration albums. In December 2005 another compilation album titled Greatest Hits From the Bong was released. It included nine hits from previous albums and two new tracks. In the summer of 2006, B-Real appeared on Snoop Dogg's single "Vato", which was produced by Pharrell Williams. The group's next album was tentatively scheduled for an early 2007 release, but it was pushed back numerous times. In 2007 Cypress Hill toured as a part of the Rock the Bells tour. They headlined with Public Enemy, Wu-Tang Clan, Nas, and a reunited Rage Against the Machine.
On July 25, 2008, Cypress Hill performed at a benefit concert at the House of Blues Chicago, where a majority of the proceeds went to the Chicago Alliance to End Homelessness. In August 2009, a new song by Cypress Hill titled "Get 'Em Up" was made available on iTunes. The song was also featured in the Madden NFL 2010 video game. It was the first sampling of the group's then-upcoming album.
Cypress Hill's eighth studio album Rise Up featured contributions from Everlast, Tom Morello, Daron Malakian, Pitbull, Marc Anthony, and Mike Shinoda. Previously, the vast majority of the group's albums were produced by DJ Muggs; however, Rise Up instead featured a large array of guest features and producers, with DJ Muggs only appearing on two tracks. The album was released on Priority Records/EMI Entertainment, as the group was signed to the label by new creative chairman Snoop Dogg. Rise Up was released on April 20, 2010 and it peaked at No. 19 on the Billboard 200. The single "Rise Up" was featured at WWE's pay-per-view Elimination Chamber as the official theme song for the event. It also appeared in the trailer for the movie The Green Hornet. "Rise Up" managed to peak at No. 20 on both the Modern Rock Tracks and Mainstream Rock Tracks charts. "Armada Latina", which featured Pitbull and Marc Anthony, was Cypress Hill's last song to chart in the U.S. to date, peaking at No. 25 on the Hot Rap Tracks chart.
Cypress Hill commenced its Rise Up tour in Philadelphia on April 10, 2010. In one particular instance, the group was supposed to stop in Tucson, Arizona but canceled the show in protest of the recent immigration legislation. At the Rock en Seine festival in Paris on August 27, 2010, they had said in an interview that they would anticipate the outcome of the legislation before returning. Also in 2010, Cypress Hill performed at the Reading and Leeds Festivals on August 28 at Leeds and August 29 at Reading. On June 5, 2012, Cypress Hill and dubstep artist Rusko released a collaborative EP entitled Cypress X Rusko. DJ Muggs, who was still on a hiatus, and Eric Bobo were absent on the release. Also in 2012, Cypress Hill collaborated with Deadmau5 on his sixth studio album Album Title Goes Here, lending vocals on "Failbait".
Elephants on Acid, Hollywood Walk of Fame, and Back in Black (2013–2022)
During the interval between Cypress Hill albums, the four members commenced work on various projects. B-Real formed the band Prophets of Rage alongside three members of Rage Against the Machine and two members of Public Enemy. He also released The Prescription EP under his Dr. Greenthumb persona. Sen Dog formed the band Powerflo alongside members of Fear Factory, downset., and Biohazard. DJ Muggs revived his Soul Assassins project as its main producer. Eric Bobo formed a duo named Ritmo Machine. He also contributed to an unreleased album by his father Willie Bobo.
On September 28, 2018, Cypress Hill released the album Elephants on Acid, which saw the return of DJ Muggs as main composer and producer. It peaked at No. 120 on the Billboard 200 and at No. 6 on the Top Independent Albums chart. Overall, four different singles were released to promote the album. In April 2019 Cypress Hill received a star on the Hollywood Walk of Fame. Although various solo hip hop artists had received stars, Cypress Hill became the first collective hip hop group to receive a star. The entire lineup of B-Real, Sen Dog, Eric Bobo, and DJ Muggs had all attended the ceremony.
In January 2022, the group announced their 10th studio album entitled Back in Black. In addition, Cypress Hill planned to support the album by joining Slipknot alongside Ho99o9 for the second half of the 2022 Knotfest Roadshow. They had previously invited Slipknot to join their Great Smoke-Out festival back in 2009. Back in Black was released on March 18, 2022. It was the group's first album to not feature DJ Muggs on any of the tracks, as producing duties were handled by Black Milk. Back in Black was the lowest charting album of the group's career, and the first to not reach the Billboard 200 chart; however, it peaked at No. 69 on the Top Current Album Sales chart.
A documentary about the group, entitled Cypress Hill: Insane in the Brain, was released on the Showtime service in April 2022. Estevan Oriol, Cypress Hill's former tour manager and close associate, directed the film. It had mainly chronicled the group's formation and their first decade of existence. In relation to the Cypress Hill: Insane in the Brain documentary, Cypress Hill digitally released the single "Crossroads" in September 2022. The single featured the return of DJ Muggs on production.
Future plans and tentative final album (2023–present)
In an interview, Sen Dog claimed that the group will fully reunite with DJ Muggs for an 11th album; however, he stated that it will be the group's final album of their career.
Style
Rapping
One of the band's most striking aspects is B-Real's exaggeratedly high-pitched nasal vocals. In the book Check the Technique, B-Real described his nasal style, saying his rapping voice is "high and annoying...the nasal style I have was just something that I developed...my more natural style wasn't so pleasing to DJ Muggs and Sen Dog's ears" and talking about the nasal style in the book How to Rap, B-Real said "you want to stand out from the others and just be distinct...when you got something that can separate you from everybody else, you gotta use it to your advantage." In the film Art of Rap, B-Real credited the Beastie Boys as an influence when developing his rapping style. Sen Dog's voice is deeper, more violent, and often shouted alongside the rapping; his vocals are often emphasized by adding another background/choir voice to say them. Sen Dog's style is in contrast to B-Real's, who said "Sen's voice is so strong" and "it all blends together" when they are both on the same track.
Both B-Real and Sen Dog started writing lyrics in both Spanish and English. Initially, B-Real was inspired to start writing raps from watching Sen Dog and Mellow Man Ace writing their lyrics, and originally B-Real was going to just be the writer for the group rather than a rapper. Their lyrics are noted for bringing a "cartoonish" approach to violence by Peter Shapiro and Allmusic.
Production
The sound and groove of their music, mostly produced by DJ Muggs, has spooky sounds and a stoned aesthetic; with its bass-heavy rhythms and odd sample loops ("Insane in the Brain" has a blues guitar, pitched and looped, in its chorus), it carries a psychedelic value, which is lessened in their rock-oriented albums. The double album Skull & Bones consists of a pure rap disc (Skull) and a separate rock disc (Bones). In the live album Live at The Fillmore, some of the old classics were played in a rock/metal version, with Eric Bobo playing the drums and Sen Dog's band SX-10 as the other instrumentalists. 2010's Rise Up was the most radically different album in regards to production. DJ Muggs had produced the majority of each prior Cypress Hill album, but he only appeared on Rise Up twice. The remaining songs were handled by various other guests. 2018's Elephants on Acid marked the return of DJ Muggs, and the album featured a more psychedelic and hip-hop approach.
Legacy
Cypress Hill are often credited for being one of the few Latin American hip hop groups to break through with their own stylistic impact on rap music. Cypress Hill have been cited as an influence by artists such as Eminem, Baby Bash, Paul Wall, Post Malone, Luniz, and Fat Joe. Cypress Hill have also been cited as a strong influence on nu metal bands such as Deftones, Limp Bizkit, System of a Down, Linkin Park, and Korn. Famously, the bassline during the outro of Korn's 1994 single "Blind" was a direct tribute to Cypress Hill's 1993 track "Lick a Shot".
Discography
Studio albums
Cypress Hill (1991)
Black Sunday (1993)
III: Temples of Boom (1995)
IV (1998)
Skull & Bones (2000)
Stoned Raiders (2001)
Till Death Do Us Part (2004)
Rise Up (2010)
Elephants on Acid (2018)
Back in Black (2022)
Awards and nominations
Billboard Music Awards
Grammy Awards
MTV Video Music Awards
Hollywood Walk of Fame
2019 – Cypress Hill – Star (awarded)
Members
Current
Louis "B-Real" Freese – vocals (1988–present)
Senen "Sen Dog" Reyes – vocals (1988–1995, 1998–present)
Eric "Eric Bobo" Correa – drums, percussion (1993–present)
Current touring
Lord "DJ Lord" Asword – turntables, samples, vocals (2019–present)
Former
Ulpiano "Mellow Man Ace" Reyes – vocals (1988)
Lawrence "DJ Muggs" Muggerud – turntables, samples (1988–2004, 2014–2018)
Former touring
Panchito "Ponch" Gomez – drums, percussion (1993–1994)
Frank Mercurio – bass (2000–2002)
Jeremy Fleener – guitar (2000–2002)
Andy Zambrano – guitar (2000–2002)
Julio "Julio G" González – turntables, samples (2004–2014)
Michael "Mix Master Mike" Schwartz – turntables, samples (2018–2019)
Timeline
References
External links
1988 establishments in California
American cannabis activists
American rap rock groups
Bloods
Cannabis music
Columbia Records artists
Gangsta rap groups
West Coast hip hop groups
Hispanic and Latino American rappers
Musical groups established in 1988
Musical groups from California
People from South Gate, California
Priority Records artists
Psychedelic rap groups
Rappers from Los Angeles
Hip hop groups from California |
5638 | https://en.wikipedia.org/wiki/Combustion | Combustion | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed as smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining.
Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):
2H2(g) + O2(g) -> 2H2O(g)
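A brief worked check of the quoted figure (using the standard enthalpy of formation of water vapor, about −242 kJ/mol, a commonly tabulated value not stated in the text): since the elements H2 and O2 have zero enthalpy of formation by convention,

\Delta H^\circ_\mathrm{rxn} = 2\,\Delta_f H^\circ(\mathrm{H_2O,\,g}) - \left[ 2\,\Delta_f H^\circ(\mathrm{H_2,\,g}) + \Delta_f H^\circ(\mathrm{O_2,\,g}) \right] \approx 2(-242) - 0 = -484\ \mathrm{kJ}

i.e. roughly 242 kJ released per mole of hydrogen burned (equivalently, per mole of water vapor formed).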
Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric concerning the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or may contain unburnt products such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low temperatures. Since burning is rarely clean, fuel gas cleaning or catalytic converters may be required by law.
Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.
Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
Types
Complete and incomplete
Complete
In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant.
Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts at sufficiently high flame temperatures, and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess.
In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.77 moles of nitrogen. Nitrogen does not take part in combustion, but at high temperatures some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air therefore requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.
The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.
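To make the excess-air figures concrete, here is a minimal sketch in Python (not from the source; the 17.2 kg of air per kg of fuel used for natural gas is an assumed, illustrative value) converting a stoichiometric air requirement into the air actually supplied:

```python
def actual_air(stoich_air_mass: float, excess_air_pct: float) -> float:
    """Mass of air actually supplied, given the stoichiometric (theoretical)
    air requirement and the excess-air percentage."""
    return stoich_air_mass * (1.0 + excess_air_pct / 100.0)

# Illustrative, assumed figure: natural gas needs roughly 17.2 kg of air per
# kg of fuel at stoichiometry. With the 5% excess air quoted above for a
# natural gas boiler, and 300% for a gas turbine:
stoich = 17.2
print(actual_air(stoich, 5))    # ~18.1 kg air per kg fuel
print(actual_air(stoich, 300))  # ~68.8 kg air per kg fuel
```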
Incomplete
Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.
For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.
The designs of combustion devices can improve the quality of combustion, such as burners and internal combustion engines. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.
The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.
Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than formation of carbon dioxide so complete combustion is greatly preferred especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.
Problems associated with incomplete combustion
Environmental problems
Nitrogen oxides and sulfur oxides produced during combustion combine with water and oxygen in the atmosphere, creating nitric acid and sulfuric acid, which return to Earth's surface as acid deposition, or "acid rain". Acid deposition harms aquatic organisms and kills trees. Because it makes certain nutrients, such as calcium and phosphorus, less available to plants, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog.
Human health problems
Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and binds with hemoglobin in the red blood cells, reducing their capacity to carry oxygen throughout the body.
Smoldering
Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.
Spontaneous
Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition.
For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.
Turbulent
Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.
Micro-gravity
The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others).
Micro-combustion
Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.
Chemical equations
Stoichiometric combustion of a hydrocarbon in oxygen
Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:
C_\mathit{x}H_\mathit{y}{} + \mathit{z}O2 -> \mathit{x}CO2{} + \frac{\mathit{y}}{2}H2O
where z = x + y/4 (from the oxygen balance).
For example, the stoichiometric burning of propane in oxygen is:
\underset{propane\atop (fuel)}{C3H8} + \underset{oxygen}{5O2} -> \underset{carbon\ dioxide}{3CO2} + \underset{water}{4H2O}
Stoichiometric combustion of a hydrocarbon in air
If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol:
C_\mathit{x}H_\mathit{y} + \mathit{z}O2 + 3.77\mathit{z}N2 -> \mathit{x}CO2 + \frac{\mathit{y}}{2}H2O + 3.77\mathit{z}N2
where z = x + y/4.
For example, the stoichiometric combustion of propane (C3H8) in air is:
C3H8 + 5O2 + 18.87N2 -> 3CO2 + 4H2O + 18.87N2
The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol.
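As a quick numerical check of the relations above, here is a minimal Python sketch (illustrative only; it uses the rounded 3.77 nitrogen-to-oxygen ratio quoted in the text, and the function name is arbitrary) that computes the stoichiometric oxygen demand z = x + y/4 and the fuel volume fraction in air for a CxHy fuel.

```python
def stoichiometric_air(x, y, n2_per_o2=3.77):
    """Stoichiometric combustion of CxHy in air (molar basis).

    Returns the moles of O2 and N2 per mole of fuel and the fuel
    volume (mole) fraction in the stoichiometric fuel/air mixture.
    """
    z = x + y / 4.0                  # O2 needed for x CO2 + (y/2) H2O
    n2 = n2_per_o2 * z               # inert nitrogen carried with the air
    fuel_fraction = 1.0 / (1.0 + z + n2)
    return z, n2, fuel_fraction

# Propane, C3H8: expect z = 5 and a fuel fraction of about 4.0 % vol
z, n2, frac = stoichiometric_air(3, 8)
print(f"O2 = {z:.2f} mol, N2 = {n2:.2f} mol, fuel = {frac:.2%} of mixture")
```

For propane this gives z = 5 and a fuel fraction of about 4.0% vol; the slight difference from the 18.87 mol of N2 quoted above comes from rounding the nitrogen-to-oxygen ratio to 3.77.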
The stoichiometric combustion reaction for a fuel containing carbon, hydrogen and oxygen (C, H, O) in air:
The stoichiometric combustion reaction for a fuel containing carbon, hydrogen, oxygen and sulfur (C, H, O, S):
The stoichiometric combustion reaction for a fuel containing carbon, hydrogen, oxygen, nitrogen and sulfur (C, H, O, N, S):
The stoichiometric combustion reaction for a fuel containing carbon, hydrogen, oxygen and fluorine (C, H, O, F):
Trace combustion products
Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O.
For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% O2. At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% .
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).
Incomplete combustion of a hydrocarbon in oxygen
The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:
\underset{fuel}{C_\mathit{x} H_\mathit{y}} + \underset{oxygen}{\mathit{z} O2} -> \underset{carbon \ dioxide}{\mathit{a}CO2} + \underset{carbon\ monoxide}{\mathit{b}CO} + \underset{water}{\mathit{c}H2O} + \underset{hydrogen}{\mathit{d}H2}
When z falls below roughly 50% of the stoichiometric value, can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.
The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:
Carbon: a + b = 3
Hydrogen: 2c + 2d = 8
Oxygen: 2a + b + c = 8
These three equations are insufficient in themselves to calculate the combustion gas composition.
However, at the equilibrium position, the water-gas shift reaction gives another equation:
CO + H2O -> CO2 + H2; in terms of the mole numbers above, K = (a·d)/(b·c).
For example, at the value of K is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% and and about 0.5% .
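The three element balances and the water-gas-shift equilibrium quoted above form a small nonlinear system that can be solved directly. The following Python sketch is a minimal illustration, assuming K = 0.728 as given in the text and using SciPy's general-purpose root finder; variable names are chosen for readability only.

```python
from scipy.optimize import fsolve

K = 0.728  # water-gas-shift equilibrium constant quoted in the text

def residuals(v):
    a, b, c, d = v                     # mol CO2, CO, H2O, H2
    return [a + b - 3,                 # carbon balance
            2*c + 2*d - 8,             # hydrogen balance
            2*a + b + c - 8,           # oxygen balance (4 mol O2 supplied)
            a*d - K*b*c]               # CO + H2O <-> CO2 + H2 equilibrium

a, b, c, d = fsolve(residuals, [2.0, 1.0, 3.0, 1.0])
total = a + b + c + d                  # seven moles of combustion gas
for name, n in zip(["CO2", "CO", "H2O", "H2"], [a, b, c, d]):
    print(f"{name}: {100*n/total:.1f} %")
```

The solver returns roughly 29.0% CO2, 13.8% CO, 42.4% H2O and 14.7% H2 of the seven moles of gas, matching the figures above to within rounding.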
Fuels
Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.
Liquid fuels
Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.
Gaseous fuels
Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.
Solid fuels
The act of combustion consists of three relatively distinct but overlapping phases:
Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation.
Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours.
Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders.
Combustion management
Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicate its heat content (enthalpy), so keeping its quantity low minimizes heat loss.
In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.
The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.
Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.
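To illustrate how an offgas oxygen reading feeds back into the air/fuel setting, the sketch below uses a common rule-of-thumb relation between measured dry flue-gas O2 and percent excess air. It is an approximation for illustration only, not a replacement for the full material balance described above.

```python
def excess_air_percent(o2_measured, o2_in_air=20.9):
    """Approximate % excess combustion air from dry flue-gas O2 (% vol).

    Rule-of-thumb form: excess air ~ O2 / (20.9 - O2) * 100.
    Assumes complete combustion and modest excess-air levels.
    """
    return 100.0 * o2_measured / (o2_in_air - o2_measured)

for o2 in (1.0, 2.0, 3.0, 5.0):
    print(f"{o2:.1f} % O2 in flue gas -> ~{excess_air_percent(o2):.0f} % excess air")
```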
Reaction mechanism
Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.
Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.
Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.
The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).
Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, the combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions.
The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a wide range of disparate time scales, which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers.
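The stiffness problem can be felt even on a toy system. The sketch below integrates a hypothetical two-step chain whose rate constants differ by five orders of magnitude using an implicit (BDF) integrator, the kind of solver such stiff kinetics require; the species and rate constants are invented for illustration and are not a real combustion mechanism.

```python
from scipy.integrate import solve_ivp

# Hypothetical chain A -> B -> C with widely separated rate constants
k1, k2 = 1.0e4, 1.0e-1   # fast and slow steps: time scales differ by ~10^5

def rhs(t, y):
    a, b, c = y
    return [-k1 * a,              # A consumed very quickly
            k1 * a - k2 * b,      # B produced fast, consumed slowly
            k2 * b]               # C accumulates on the slow time scale

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0], method="BDF", rtol=1e-8)
print("final composition:", sol.y[:, -1])   # nearly all mass ends up in C
```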
Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:
The Relaxation Redistribution Method (RRM)
The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments
The invariant-constrained equilibrium edge preimage curve method
A few variational approaches
The Computational Singular Perturbation (CSP) method and further developments
The Rate-Controlled Constrained Equilibrium (RCCE) and Quasi-Equilibrium Manifold (QEM) approach
The G-Scheme
The Method of Invariant Grids (MIG)
Kinetic modelling
Kinetic modelling can be used to gain insight into the reaction mechanisms of the thermal decomposition involved in the combustion of different materials, using, for instance, thermogravimetric analysis.
Temperature
Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and reflects the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).
In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following:
the heating value;
the stoichiometric air-to-fuel ratio;
the specific heat capacities of the fuel and air;
the air and fuel inlet temperatures.
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.
Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas.
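A very rough feel for the adiabatic flame temperature can be obtained from the energy balance described above by assuming a constant mean specific heat for the flue gas and ignoring dissociation. The sketch below uses illustrative, assumed property values (LHV, air-to-fuel ratio, mean cp), so it overpredicts the true adiabatic flame temperature; it is a first-order estimate only.

```python
def adiabatic_flame_temp(lhv_mj_per_kg, air_fuel_ratio, cp_kj_per_kg_k=1.3,
                         t_inlet_k=298.0):
    """Crude adiabatic flame temperature estimate (constant cp, no dissociation).

    lhv_mj_per_kg  : lower heating value of the fuel [MJ/kg]
    air_fuel_ratio : stoichiometric air-to-fuel mass ratio [kg air / kg fuel]
    cp_kj_per_kg_k : assumed mean specific heat of the flue gas [kJ/(kg K)]
    """
    m_products = 1.0 + air_fuel_ratio            # kg flue gas per kg fuel
    dT = lhv_mj_per_kg * 1000.0 / (m_products * cp_kj_per_kg_k)
    return t_inlet_k + dT

# Methane-like fuel: LHV ~ 50 MJ/kg, stoichiometric A/F ~ 17.2 by mass
print(f"~{adiabatic_flame_temp(50.0, 17.2):.0f} K (dissociation lowers the real value)")
```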
In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.
Instabilities
Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by redesigning the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the emissions; however, running the combustion lean makes it very susceptible to combustion instability.
The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability:
G(x) = \frac{1}{T}\int_{T} q'(x,t)\, p'(x,t)\, dt
where q' is the heat release rate perturbation, p' is the pressure fluctuation, and T is the period of one oscillation cycle.
When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.
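As a numerical illustration of the criterion, the sketch below evaluates the cycle-averaged Rayleigh index for synthetic sinusoidal heat-release and pressure fluctuations with an adjustable phase shift; the amplitudes and frequency are arbitrary choices for illustration. In-phase signals give a positive index (amplification), while signals 180 degrees out of phase give a negative index (damping), as described above.

```python
import numpy as np

def rayleigh_index(phase_shift_deg, f=200.0, n=10_000):
    """Cycle-averaged Rayleigh index for sinusoidal q'(t) and p'(t)."""
    T = 1.0 / f                                   # one oscillation period
    t = np.linspace(0.0, T, n)
    p = np.sin(2 * np.pi * f * t)                                 # pressure fluctuation p'
    q = np.sin(2 * np.pi * f * t + np.radians(phase_shift_deg))   # heat-release perturbation q'
    return np.trapz(q * p, t) / T

for phi in (0, 90, 180):
    print(f"phase shift {phi:3d} deg -> Rayleigh index {rayleigh_index(phi):+.3f}")
```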
See also
Related concepts
Air–fuel ratio
Autoignition temperature
Chemical looping combustion
Deflagration
Detonation
Explosion
Fire
Flame
Heterogeneous combustion
Markstein number
Phlogiston theory (historical)
Spontaneous combustion
Machines and equipment
Boiler
Bunsen burner
External combustion engine
Furnace
Gas turbine
Internal combustion engine
Rocket engine
Scientific and engineering societies
International Flame Research Foundation
The Combustion Institute
Other
List of light sources
References
Further reading
Chemical reactions |
5639 | https://en.wikipedia.org/wiki/Cyrillic%20script | Cyrillic script | The Cyrillic script ( ), Slavonic script or the Slavic script is a writing system used for various languages across Eurasia. It is the designated national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central Asia, North Asia, and East Asia, and used by many other minority languages.
, around 250 million people in Eurasia use Cyrillic as the official script for their national languages, with Russia accounting for about half of them. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek alphabets.
The Early Cyrillic alphabet was developed during the 9th century AD at the Preslav Literary School in the First Bulgarian Empire during the reign of Tsar Simeon I the Great, probably by the disciples of the two Byzantine brothers Cyril and Methodius, who had previously created the Glagolitic script. Among them were Clement of Ohrid, Naum of Preslav, Angelar, Sava and other scholars. The script is named in honor of Saint Cyril.
Etymology
Since the script was conceived and popularised by the Slavic followers of Cyril and Methodius, rather than by Cyril and Methodius themselves, its name denotes homage rather than authorship. The name "Cyrillic" often confuses people who are not familiar with the script's history, because it does not identify the country of origin, Bulgaria (in contrast to the "Greek alphabet"). Among the general public, it is often called "the Russian alphabet", because Russian is the most widespread and influential language written in the script.
In Bulgarian, Macedonian, Russian, Serbian, Czech and Slovak, the Cyrillic alphabet is also known as azbuka, derived from the old names of the first two letters of most Cyrillic alphabets (just as the term alphabet came from the first two Greek letters alpha and beta). In Czech and Slovak, which have never used Cyrillic, "azbuka" refers to Cyrillic and contrasts with "abeceda", which refers to the local Latin script and is composed of the names of the first letters (A, B, C, and D). In Russian, syllabaries, especially the Japanese kana, are commonly referred to as 'syllabic azbukas' rather than 'syllabic scripts'.
History
The Cyrillic script was created during the First Bulgarian Empire. Modern scholars believe that the Early Cyrillic alphabet was created at the Preslav Literary School, the most important early literary and cultural center of the First Bulgarian Empire and of all Slavs:
Unlike the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic scripts in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet.
A number of prominent Bulgarian writers and scholars worked at the school, including Naum of Preslav until 893; Constantine of Preslav; Joan Ekzarh (also transcr. John the Exarch); and Chernorizets Hrabar, among others. The school was also a center of translation, mostly of Byzantine authors. The Cyrillic script is derived from the Greek uncial script letters, augmented by ligatures and consonants from the older Glagolitic alphabet for sounds not found in Greek. Glagolitic and Cyrillic were formalized by the Byzantine Saints Cyril and Methodius and their disciples, such as Saints Naum, Clement, Angelar, and Sava. They spread and taught Christianity in the whole of Bulgaria. Paul Cubberley posits that although Cyril may have codified and expanded Glagolitic, it was his students in the First Bulgarian Empire under Tsar Simeon the Great that developed Cyrillic from the Greek letters in the 890s as a more suitable script for church books.
Cyrillic spread among other Slavic peoples, as well as among non-Slavic Vlachs. The earliest datable Cyrillic inscriptions have been found in the area of Preslav, in the medieval city itself and at nearby Patleina Monastery, both in present-day Shumen Province, as well as in the Ravna Monastery and in the Varna Monastery. The new script became the basis of alphabets used in various languages in Orthodox Church-dominated Eastern Europe, both Slavic and non-Slavic languages (such as Romanian, until the 1860s). For centuries, Cyrillic was also used by Catholic and Muslim Slavs (see Bosnian Cyrillic).
Cyrillic and Glagolitic were used for the Church Slavonic language, especially the Old Church Slavonic variant. Hence expressions such as "И is the tenth Cyrillic letter" typically refer to the order of the Church Slavonic alphabet; not every Cyrillic alphabet uses every letter available in the script. The Cyrillic script came to dominate Glagolitic in the 12th century.
The literature produced in Old Church Slavonic soon spread north from Bulgaria and became the lingua franca of the Balkans and Eastern Europe.
Bosnian Cyrillic, widely known as Bosančica is an extinct variant of the Cyrillic alphabet that originated in medieval Bosnia.
Paleographers consider that the earliest features of the Bosnian Cyrillic script likely began to appear between the 10th and 11th centuries, with the Humac tablet (a tablet written in Bosnian Cyrillic) regarded as the first such document using this type of script and believed to date from this period. Bosnian Cyrillic was used continuously until the 18th century, with sporadic usage even taking place in the 20th century.
With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School of the 14th and 15th centuries, such as Gregory Tsamblak and Constantine of Kostenets, the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. This is known in Russia as the second South-Slavic influence.
In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several new letters were introduced designed by Peter himself. Letters became distinguished between upper and lower case. West European typography culture was also adopted. The pre-reform letterforms, called 'Полуустав', were notably retained in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel.
The alphabet used for the modern Church Slavonic language in Eastern Orthodox and Eastern Catholic rites still resembles early Cyrillic. However, over the course of the following millennium, Cyrillic adapted to changes in spoken language, developed regional variations to suit the features of national languages, and was subjected to academic reform and political decrees. A notable example of such linguistic reform can be attributed to Vuk Stefanović Karadžić, who updated the Serbian Cyrillic alphabet by removing certain graphemes no longer represented in the vernacular and introducing graphemes specific to Serbian (i.e. Љ Њ Ђ Ћ Џ Ј), distancing it from the Church Slavonic alphabet in use prior to the reform. Today, many languages in the Balkans, Eastern Europe, and northern Eurasia are written in Cyrillic alphabets.
Letters
Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below.
Capital and lowercase letters were not distinguished in old manuscripts.
Yeri () was originally a ligature of Yer and I ( + = ). Iotation was indicated by ligatures formed with the letter І: (not an ancestor of modern Ya, Я, which is derived from ), , (ligature of and ), , . Sometimes different letters were used interchangeably, for example = = , as were typographical variants like = . There were also commonly used ligatures like = .
The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors.
The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character.
The Unicode 5.1 standard, released on 4 April 2008, greatly improved computer support for the early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8.
Currency signs
Some currency signs have derived from Cyrillic letters:
The Ukrainian hryvnia sign (₴) is from the cursive minuscule Ukrainian Cyrillic letter He (г).
The Russian ruble sign (₽) from the majuscule Р.
The Kyrgyzstani som sign (⃀) from the majuscule С (es)
The Kazakhstani tenge sign (₸) from Т
The Mongolian tögrög sign (₮) from Т
Letterforms and typography
The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late Medieval Cyrillic letters (categorized as vyaz' and still found on many icon inscriptions today) show a marked tendency to be very tall and narrow, with strokes often shared between adjacent letters.
Peter the Great, Tsar of Russia, mandated the use of westernized letter forms in the early 18th century. Over time, these were largely adopted in the other languages that use the script. Thus, unlike the majority of modern Greek fonts that retained their own set of design principles for lower-case letters (such as the placement of serifs, the shapes of stroke ends, and stroke-thickness rules, although Greek capital letters do use Latin design principles), modern Cyrillic fonts are much the same as modern Latin fonts of the same font family. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type.
Lowercase forms
Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography. Upright Cyrillic lowercase letters are essentially small capitals (with exceptions: Cyrillic , , , , , and adopted Western lowercase shapes, lowercase is typically designed under the influence of Latin , lowercase , and are traditional handwritten forms), although a good-quality Cyrillic typeface will still include separate small-caps glyphs.
Cyrillic fonts, as well as Latin ones, have roman and italic types (practically all popular modern fonts include parallel sets of Latin and Cyrillic letters, where many glyphs, uppercase as well as lowercase, are shared by both). However, the native font terminology in most Slavic languages (for example, in Russian) does not use the words "roman" and "italic" in this sense. Instead, the nomenclature follows German naming patterns:
Roman type is called ("upright type"); compare with ("regular type") in German
Italic type is called ("cursive") or ("cursive type"), from the German word , meaning italic typefaces and not cursive writing
Cursive handwriting is ("handwritten type"); in German: or , both meaning literally 'running type'
A (mechanically) sloped oblique type of sans-serif faces is ("sloped" or "slanted type").
A boldfaced type is called ("semi-bold type"), because there existed fully boldfaced shapes that have been out of use since the beginning of the 20th century.
Italic and cursive forms
Similarly to Latin fonts, italic and cursive types of many Cyrillic letters (typically lowercase; uppercase only for handwritten or stylish types) are very different from their upright roman types. In certain cases, the correspondence between uppercase and lowercase glyphs does not coincide in Latin and Cyrillic fonts: for example, italic Cyrillic is the lowercase counterpart of not of .
Note: in some fonts or styles, , i.e. the lowercase italic Cyrillic , may look like Latin , and , i.e. lowercase italic Cyrillic , may look like small-capital italic .
In Standard Serbian, as well as in Macedonian, some italic and cursive letters are allowed to be different, to more closely resemble the handwritten letters. The regular (upright) shapes are generally standardized in small caps form.
Notes: Depending on fonts available, the Serbian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems.
In Bulgarian typography, many lowercase letterforms may more closely resemble the cursive forms on the one hand and Latin glyphs on the other hand, e.g. by having an ascender or descender or by using rounded arcs instead of sharp corners. Sometimes, uppercase letters may have a different shape as well, e.g. more triangular, Д and Л, like Greek delta Δ and lambda Λ.
Notes: Depending on fonts available, the Bulgarian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems; in some cases, such as ж with k-like ascender, no such approximation exists.
Accessing variant forms
Computer fonts typically default to the Central/Eastern, Russian letterforms, and require the use of OpenType Layout (OTL) features to display the Western, Bulgarian or Southern, Serbian/Macedonian forms. Depending on the choices of the font manufacturer, they may either be automatically activated by the local variant locl feature for text tagged with an appropriate language code, or the author needs to opt-in by activating a stylistic set ss## or character variant cv## feature. These solutions only enjoy partial support and may render with default glyphs in certain software configurations.
Cyrillic alphabets
Among others, Cyrillic is the standard script for writing the following languages:
Slavic languages: Belarusian, Bulgarian, Macedonian, Russian, Rusyn, Serbo-Croatian (Standard Serbian, Bosnian, and Montenegrin), Ukrainian
Non-Slavic languages of Russia: Abaza, Adyghe, Avar, Azerbaijani (in Dagestan), Bashkir, Buryat, Chechen, Chuvash, Erzya, Ingush, Kabardian, Kalmyk, Karachay-Balkar, Kildin Sami, Komi, Mari, Moksha, Nogai, Ossetian (in North Ossetia–Alania), Romani, Sakha/Yakut, Tatar, Tuvan, Udmurt, Yuit (Yupik)
Non-Slavic languages in other countries: Abkhaz, Aleut (now mostly in church texts), Dungan, Kazakh (to be replaced by Latin script by 2025), Kyrgyz, Mongolian (to also be written with traditional Mongolian script by 2025), Tajik, Tlingit (now only in church texts), Turkmen (officially replaced by Latin script), Uzbek (also officially replaced by Latin script, but still in wide use), Yupik (in Alaska)
The Cyrillic script has also been used for languages of Alaska, Slavic Europe (except for Western Slavic and some Southern Slavic), the Caucasus, the languages of Idel-Ural, Siberia, and the Russian Far East.
The first alphabet derived from Cyrillic was Abur, used for the Komi language. Other Cyrillic alphabets include the Molodtsov alphabet for the Komi language and various alphabets for Caucasian languages.
Usage of Cyrillic versus other scripts
Latin script
A number of languages written in a Cyrillic alphabet have also been written in a Latin alphabet, such as Azerbaijani, Uzbek, Serbian, and Romanian (in the Republic of Moldova until 1989 and in the Danubian Principalities throughout the 19th century). After the disintegration of the Soviet Union in 1991, some of the former republics officially shifted from Cyrillic to Latin. The transition is complete in most of Moldova (except the breakaway region of Transnistria, where Moldovan Cyrillic is official), Turkmenistan, and Azerbaijan. Uzbekistan still uses both systems, and Kazakhstan has officially begun a transition from Cyrillic to Latin (scheduled to be complete by 2025). The Russian government has mandated that Cyrillic must be used for all public communications in all federal subjects of Russia, to promote closer ties across the federation. This act was controversial for speakers of many Slavic languages; for others, such as Chechen and Ingush speakers, the law had political ramifications. For example, the separatist Chechen government mandated a Latin script which is still used by many Chechens.
Standard Serbian uses both the Cyrillic and Latin scripts. Cyrillic is nominally the official script of Serbia's administration according to the Serbian constitution; however, the law does not regulate scripts in standard language, or standard language itself by any means. In practice the scripts are equal, with Latin being used more often in a less official capacity.
The Zhuang alphabet, used between the 1950s and 1980s in portions of the People's Republic of China, used a mixture of Latin, phonetic, numeral-based, and Cyrillic letters. The non-Latin letters, including Cyrillic, were removed from the alphabet in 1982 and replaced with Latin letters that closely resembled the letters they replaced.
Romanization
There are various systems for romanization of Cyrillic text, including transliteration to convey Cyrillic spelling in Latin letters, and transcription to convey pronunciation.
Standard Cyrillic-to-Latin transliteration systems include:
Scientific transliteration, used in linguistics, is based on the Serbo-Croatian Latin alphabet.
The Working Group on Romanization Systems of the United Nations recommends different systems for specific languages. These are the most commonly used around the world.
ISO 9:1995, from the International Organization for Standardization.
American Library Association and Library of Congress Romanization tables for Slavic alphabets (ALA-LC Romanization), used in North American libraries.
BGN/PCGN Romanization (1947), United States Board on Geographic Names & Permanent Committee on Geographical Names for British Official Use).
GOST 16876, a now defunct Soviet transliteration standard. Replaced by GOST 7.79-2000, which is based on ISO 9.
Various informal romanizations of Cyrillic, which adapt the Cyrillic script to Latin and sometimes Greek glyphs for compatibility with small character sets.
See also Romanization of Belarusian, Bulgarian, Kyrgyz, Russian, Macedonian and Ukrainian.
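As a toy illustration of transliteration, and not an implementation of any of the standards listed above, the following Python sketch maps a small subset of lowercase Russian Cyrillic letters to Latin in the spirit of scientific transliteration; the mapping is deliberately incomplete and simplified.

```python
# Partial, simplified mapping in the spirit of scientific transliteration.
# Deliberately incomplete: the soft and hard signs, iotated vowels such as
# ё, and letters of other Cyrillic alphabets are omitted for brevity.
TRANSLIT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ж": "ž", "з": "z", "и": "i", "й": "j", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "c", "ч": "č",
    "ш": "š", "щ": "šč", "ы": "y", "э": "è", "ю": "ju", "я": "ja",
}

def transliterate(word):
    """Letter-by-letter transliteration; unknown characters pass through."""
    return "".join(TRANSLIT.get(ch, ch) for ch in word.lower())

print(transliterate("щука"))    # -> ščuka
print(transliterate("Москва"))  # -> moskva
```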
Cyrillization
Representing other writing systems with Cyrillic letters is called Cyrillization.
Summary table
Ё in Russian is usually spelled as Е; Ё is typically printed in texts for learners and in dictionaries, and in word pairs which are differentiated only by that letter (все – всё).
Computer encoding
Unicode
As of Unicode version , Cyrillic letters, including national and historical alphabets, are encoded across several blocks:
Cyrillic: U+0400–U+04FF
Cyrillic Supplement: U+0500–U+052F
Cyrillic Extended-A: U+2DE0–U+2DFF
Cyrillic Extended-B: U+A640–U+A69F
Cyrillic Extended-C: U+1C80–U+1C8F
Cyrillic Extended-D: U+1E030–U+1E08F
Phonetic Extensions: U+1D2B, U+1D78
Combining Half Marks: U+FE2E–U+FE2F
The characters in the range U+0400 to U+045F are essentially the characters from ISO 8859-5 moved upward by 864 positions. The characters in the range U+0460 to U+0489 are historic letters, not used now. The characters in the range U+048A to U+052F are additional letters for various languages that are written with Cyrillic script.
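A short Python sketch can make the block layout concrete: given a character, it reports which of the Cyrillic blocks listed above its code point falls into, and it also demonstrates the fixed 864-position offset from ISO 8859-5 mentioned above. The block names and ranges are taken directly from the list; everything else is illustrative.

```python
CYRILLIC_BLOCKS = [
    ("Cyrillic",            0x0400, 0x04FF),
    ("Cyrillic Supplement", 0x0500, 0x052F),
    ("Cyrillic Extended-A", 0x2DE0, 0x2DFF),
    ("Cyrillic Extended-B", 0xA640, 0xA69F),
    ("Cyrillic Extended-C", 0x1C80, 0x1C8F),
    ("Cyrillic Extended-D", 0x1E030, 0x1E08F),
]

def cyrillic_block(ch):
    """Return the name of the Cyrillic block containing ch, or None."""
    cp = ord(ch)
    for name, lo, hi in CYRILLIC_BLOCKS:
        if lo <= cp <= hi:
            return name
    return None

print(cyrillic_block("Ж"))                     # Cyrillic
print(cyrillic_block("ꙮ"))                     # Cyrillic Extended-B
# U+0400..U+045F mirror ISO 8859-5 shifted upward by 864 positions:
print(ord("Ж") - "Ж".encode("iso8859_5")[0])   # 864
```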
Unicode as a general rule does not include accented Cyrillic letters. A few exceptions include:
combinations that are considered as separate letters of respective alphabets, like Й, Ў, Ё, Ї, Ѓ, Ќ (as well as many letters of non-Slavic alphabets);
two most frequent combinations orthographically required to distinguish homonyms in Bulgarian and Macedonian: Ѐ, Ѝ;
a few Old and New Church Slavonic combinations: Ѷ, Ѿ, Ѽ.
To indicate stressed or long vowels, combining diacritical marks can be used after the respective letter (for example: е́ у́ э́, etc.).
Some languages, including Church Slavonic, are still not fully supported.
Unicode 5.1, released on 4 April 2008, introduces major changes to the Cyrillic blocks. Revisions to the existing Cyrillic blocks, and the addition of Cyrillic Extended A (2DE0 ... 2DFF) and Cyrillic Extended B (A640 ... A69F), significantly improve support for the early Cyrillic alphabet, Abkhaz, Aleut, Chuvash, Kurdish, and Moksha.
Other
Other character encoding systems for Cyrillic:
CP866: 8-bit Cyrillic character encoding established by Microsoft for use in MS-DOS, also known as GOST-alternative. Cyrillic characters go in their native order, with a "window" for pseudographic characters.
ISO/IEC 8859-5: 8-bit Cyrillic character encoding established by the International Organization for Standardization.
KOI8-R: 8-bit native Russian character encoding. Invented in the USSR for use on Soviet clones of American IBM and DEC computers. The Cyrillic characters go in the order of their Latin counterparts, which allowed the text to remain readable after transmission via a 7-bit line that removed the most significant bit from each byte; the result became a very rough, but readable, Latin transliteration of Cyrillic (see the sketch after this list). Standard encoding of the early 1990s for Unix systems and the first Russian Internet encoding.
KOI8-U: KOI8-R with the addition of Ukrainian letters.
MIK: 8-bit native Bulgarian character encoding for use in Microsoft DOS.
Windows-1251: 8-bit Cyrillic character encoding established by Microsoft for use in Microsoft Windows. The simplest 8-bit Cyrillic encoding: 32 capital characters in native order at 0xc0–0xdf, 32 usual characters at 0xe0–0xff, with the rarely used "YO" characters somewhere else. No pseudographics. Former standard encoding in some Linux distributions for Belarusian and Bulgarian, but currently displaced by UTF-8.
GOST-main.
GB 2312: Principally a simplified Chinese encoding, but it also includes the basic 33 Russian Cyrillic letters (in upper- and lower-case).
JIS and Shift JIS: Principally Japanese encodings, but they also include the basic 33 Russian Cyrillic letters (in upper- and lower-case).
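The KOI8-R property mentioned in the list above, that dropping the most significant bit yields a rough Latin transliteration, can be demonstrated directly. A minimal sketch, assuming Python's built-in koi8_r codec:

```python
text = "привет"                      # Russian for "hello"
koi8_bytes = text.encode("koi8_r")   # encode using the KOI8-R code page
stripped = bytes(b & 0x7F for b in koi8_bytes)   # simulate a 7-bit line
print(stripped.decode("ascii"))      # -> "PRIWET", a rough transliteration
```

Lowercase Cyrillic bytes strip to uppercase Latin letters (and vice versa), which is why the lowercase input comes out as "PRIWET".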
Keyboard layouts
Each language has its own standard keyboard layout, adopted from typewriters. With the flexibility of computer input methods, there are also transliterating or phonetic/homophonic keyboard layouts made for typists who are more familiar with other layouts, like the common English QWERTY keyboard. When practical Cyrillic keyboard layouts or fonts are unavailable, computer users sometimes use transliteration or look-alike "volapuk" encoding to type in languages that are normally written with the Cyrillic alphabet.
See also
Cyrillic Alphabet Day
Cyrillic digraphs
Cyrillic script in Unicode
Faux Cyrillic, real or fake Cyrillic letters used to give Latin-alphabet text a Soviet or Russian feel
List of Cyrillic digraphs and trigraphs
Russian Braille
Russian cursive
Russian manual alphabet
Bulgarian Braille
Vladislav the Grammarian
Yugoslav Braille
Yugoslav manual alphabet
Internet top-level domains in Cyrillic
gTLDs
.мон
.бг
.қаз
.рф
.срб
.укр
.мкд
.бел
Notes
Footnotes
References
Bringhurst, Robert (2002). The Elements of Typographic Style (version 2.5), pp. 262–264. Vancouver, Hartley & Marks. .
Nezirović, M. (1992). Jevrejsko-španjolska književnost. Sarajevo: Svjetlost. [cited in Šmid, 2002]
Prostov, Eugene Victor. 1931. "Origins of Russian Printing". Library Quarterly 1 (January): 255–77.
Šmid, Katja (2002). " ", in Verba Hispanica, vol X. Liubliana: Facultad de Filosofía y Letras de la Universidad de Liubliana. .
'The Lives of St. Tsurho and St. Strahota', Bohemia, 1495, Vatican Library
Philipp Ammon: Tractatus slavonicus. in: Sjani (Thoughts) Georgian Scientific Journal of Literary Theory and Comparative Literature, N 17, 2016, pp. 248–256
External links
The Cyrillic Charset Soup overview and history of Cyrillic charsets.
Transliteration of Non-Roman Scripts, a collection of writing systems and transliteration tables
History and development of the Cyrillic alphabet
Cyrillic Alphabets of Slavic Languages review of Cyrillic charsets in Slavic Languages.
data entry in Old Cyrillic / Стара Кирилица (archived 22 February 2014)
Cyrillic and its Long Journey East – NamepediA Blog, article about the Cyrillic script
Unicode collation charts—including Cyrillic letters, sorted by shape
Bulgarian inventions
Eastern Europe
North Asia
Central Asia |
5643 | https://en.wikipedia.org/wiki/Channel%20Islands | Channel Islands | The Channel Islands are an archipelago in the English Channel, off the French coast of Normandy. They are divided into two Crown Dependencies: the Bailiwick of Jersey, which is the largest of the islands; and the Bailiwick of Guernsey, consisting of Guernsey, Alderney, Sark, Herm and some smaller islands. Historically, they are the remnants of the Duchy of Normandy. Although they are not part of the United Kingdom, the UK is currently responsible for the defence and international relations of the islands. The Crown Dependencies are neither members of the Commonwealth of Nations, nor part of the European Union. They have a total population of about , and the bailiwicks' capitals, Saint Helier and Saint Peter Port, have populations of 33,500 and 18,207 respectively.
"Channel Islands" is a geographical term, not a political unit. The two bailiwicks have been administered separately since the late 13th century. Each has its own independent laws, elections, and representative bodies (although in modern times, politicians from the islands' legislatures are in regular contact). Any institution common to both is the exception rather than the rule.
The Bailiwick of Guernsey is divided into three jurisdictions – Guernsey, Alderney and Sark – each with its own legislature. Although there are a few pan-island institutions (such as the Channel Islands Brussels Office, the Director of Civil Aviation and the Channel Islands Financial Ombudsman, which are actually joint ventures between the bailiwicks), these tend to be established structurally as equal projects between Guernsey and Jersey. Otherwise, entities whose names imply membership of both Guernsey and Jersey might in fact be from one bailiwick only. For instance, The International Stock Exchange is in Saint Peter Port and therefore is in Guernsey.
The term "Channel Islands" began to be used around 1830, possibly first by the Royal Navy as a collective name for the islands. The term refers only to the archipelago to the west of the Cotentin Peninsula. Other populated islands located in the English Channel, and close to the coast of Britain, such as the Isle of Wight, Hayling Island and Portsea Island, are not regarded as "Channel Islands".
Geography
The two major islands are Jersey and Guernsey. They make up 99% of the population and 92% of the area.
List of islands
Names
The names of the larger islands in the archipelago in general have the -ey suffix, whilst those of the smaller ones have the -hou suffix. These are believed to be from the Old Norse ey (island) and holmr (islet).
The Chausey Islands
The Chausey Islands south of Jersey are not generally included in the geographical definition of the Channel Islands but are occasionally described in English as 'French Channel Islands' in view of their French jurisdiction. They were historically linked to the Duchy of Normandy, but they are part of the French territory along with continental Normandy, and not part of the British Isles or of the Channel Islands in a political sense. They are an incorporated part of the commune of Granville (Manche). While they are popular with visitors from France, Channel Islanders can only visit them by private or charter boats as there are no direct transport links from the other islands.
In official Jersey Standard French, the Channel Islands are called 'Îles de la Manche', while in France, the term 'Îles Anglo-normandes' (Anglo-Norman Isles) is used to refer to the British 'Channel Islands' in contrast to other islands in the Channel. Chausey is referred to as an 'Île normande' (as opposed to anglo-normande). 'Îles Normandes' and 'Archipel Normand' have also, historically, been used in Channel Island French to refer to the islands as a whole.
Waters
The very large tidal variation provides an environmentally rich inter-tidal zone around the islands, and some islands such as Burhou, the Écréhous, and the Minquiers have been designated Ramsar sites.
The waters around the islands include the following:
The Swinge (between Alderney and Burhou)
The Little Swinge (between Burhou and Les Nannels)
La Déroute (between Jersey and Sark, and Jersey and the Cotentin)
Le Raz Blanchard, or Race of Alderney (between Alderney and the Cotentin)
The Great Russel (between Sark, Jéthou and Herm)
The Little Russel (between Guernsey, Herm and Jéthou)
Souachehouais (between Le Rigdon and L'Étacq, Jersey)
Le Gouliot (between Sark and Brecqhou)
La Percée (between Herm and Jéthou)
Highest point
The highest point in the islands is Les Platons in Jersey at 143 metres (469 ft) above sea level. The lowest point is the English Channel (sea level).
Climate
History
Prehistory
The earliest evidence of human occupation of the Channel Islands has been dated to 250,000 years ago when they were attached to the landmass of continental Europe. The islands became detached by rising sea levels in the Mesolithic period. The numerous dolmens and other archaeological sites extant and recorded in history demonstrate the existence of a population large enough and organised enough to undertake constructions of considerable size and sophistication, such as the burial mound at La Hougue Bie in Jersey or the statue menhirs of Guernsey.
From the Iron Age
Hoards of Armorican coins have been excavated, providing evidence of trade and contact in the Iron Age period. Evidence for Roman settlement is sparse, although evidently the islands were visited by Roman officials and traders. The Roman name for the Channel Islands was I. Lenuri (Lenur Islands) and is included in the Peutinger Table. The traditional Latin names used for the islands (Caesarea for Jersey, Sarnia for Guernsey, Riduna for Alderney) derive (possibly mistakenly) from the Antonine Itinerary. Gallo-Roman culture was adopted to an unknown extent in the islands.
In the sixth century, Christian missionaries visited the islands. Samson of Dol, Helier, Marculf and Magloire are among saints associated with the islands. In the sixth century, they were already included in the diocese of Coutances where they remained until the Reformation.
There were probably some Celtic Britons who settled on the islands in the 5th and 6th centuries AD (the indigenous Celts of Great Britain, and the ancestors of the modern Welsh, Cornish, and Bretons) who had emigrated from Great Britain in the face of invading Anglo-Saxons. But there were not enough of them to leave any trace; the islands continued to be ruled by the king of the Franks, and their church remained part of the diocese of Coutances.
From the beginning of the ninth century, Norse raiders appeared on the coasts. Norse settlement eventually succeeded initial attacks, and it is from this period that many place names of Norse origin appear, including the modern names of the islands.
From the Duchy of Normandy
In 933, the islands were granted to William I Longsword by Raoul, the King of Western Francia, and annexed to the Duchy of Normandy. In 1066, William II of Normandy invaded and conquered England, becoming William I of England, also known as William the Conqueror. In the period 1204–1214, King John lost the Angevin lands in northern France, including mainland Normandy, to King Philip II of France, but managed to retain control of the Channel Islands. In 1259, his successor, Henry III of England, by the Treaty of Paris, officially surrendered his claim and title to the Duchy of Normandy, while retaining the Channel Islands, as peer of France and feudal vassal of the King of France. Since then, the Channel Islands have been governed as two separate bailiwicks and were never absorbed into the Kingdom of England nor its successor kingdoms of Great Britain and the United Kingdom. During the Hundred Years' War, the Channel Islands were part of the French territory recognizing the claims of the English kings to the French throne.
The islands were invaded by the French in 1338, who held some territory until 1345. Edward III of England granted a Charter in July 1341 to Jersey, Guernsey, Sark and Alderney, confirming their customs and laws to secure allegiance to the English Crown. Owain Lawgoch, a mercenary leader of a Free Company in the service of the French Crown, attacked Jersey and Guernsey in 1372, and in 1373 Bertrand du Guesclin besieged Mont Orgueil. The young King Richard II of England reconfirmed in 1378 the Charter rights granted by his grandfather, followed in 1394 with a second Charter granting, because of great loyalty shown to the Crown, exemption for ever from English tolls, customs and duties. Jersey was occupied by the French in 1461 as part of an exchange for helping the Lancastrians fight against the Yorkists during the Wars of the Roses. It was retaken by the Yorkists in 1468. In 1483 a Papal bull decreed that the islands would be neutral during time of war. This privilege of neutrality enabled islanders to trade with both France and England and was respected until 1689, when it was abolished by Order in Council following the Glorious Revolution in Great Britain.
Various attempts to transfer the islands from the diocese of Coutances (to Nantes (1400), Salisbury (1496), and Winchester (1499)) had little effect until an Order in Council of 1569 brought the islands formally into the diocese of Winchester. Control by the bishop of Winchester was ineffectual as the islands had turned overwhelmingly Calvinist and the episcopacy was not restored until 1620 in Jersey and 1663 in Guernsey.
After the loss of Calais in 1558, the Channel Islands were the last remaining English holdings in France and the only French territory controlled by the English kings as Kings of France. This situation lasted until the English kings dropped their title and claims to the French throne in 1801, confirming the Channel Islands in the situation of a crown dependency under the sovereignty of neither Great Britain nor France, but of the British crown directly.
Sark in the 16th century was uninhabited until colonised from Jersey in the 1560s. The grant of seigneurship from Elizabeth I of England in 1565 forms the basis of Sark's constitution today.
From the 17th century
During the Wars of the Three Kingdoms, Jersey held out strongly for the Royalist cause, providing refuge for Charles, Prince of Wales in 1646 and 1649–1650, while the more strongly Presbyterian Guernsey more generally favoured the parliamentary cause (although Castle Cornet was held by Royalists and did not surrender until October 1651).
The islands acquired commercial and political interests in the North American colonies. Islanders became involved with the Newfoundland fisheries in the 17th century. In recognition for all the help given to him during his exile in Jersey in the 1640s, Charles II gave George Carteret, Bailiff and governor, a large grant of land in the American colonies, which he promptly named New Jersey, now part of the United States of America. Sir Edmund Andros, bailiff of Guernsey, was an early colonial governor in North America, and head of the short-lived Dominion of New England.
In the late 18th century, the islands were dubbed "the French Isles". Wealthy French émigrés fleeing the French Revolution sought residency in the islands. Many of the town domiciles existing today were built in that time. In Saint Peter Port, a large part of the harbour had been built by 1865.
20th century
World War II
The islands were occupied by the German Army during World War II.
The British Government demilitarised the islands in June 1940, and the lieutenant-governors were withdrawn on 21 June, leaving the insular administrations to continue government as best they could under impending military occupation.
Before German troops landed, between 30 June and 4 July 1940, evacuation took place. Many young men had already left to join the Allied armed forces, as volunteers. 6,600 out of 50,000 left Jersey while 17,000 out of 42,000 left Guernsey. Thousands of children were evacuated with their schools to England and Scotland.
The population of Sark largely remained where they were; but in Alderney, all but six people left. In Alderney, the occupying Germans built four prison camps which housed approximately 6,000 people, of whom over 700 died. Due to the destruction of documents, it is impossible to state how many forced workers died in the other islands. Alderney had the only Nazi concentration camps on British soil.
The Royal Navy blockaded the islands from time to time, particularly following the Invasion of Normandy in June 1944. There was considerable hunger and privation during the five years of German occupation, particularly in the final months when the population was close to starvation. Intense negotiations resulted in some humanitarian aid being sent via the Red Cross, leading to the arrival of Red Cross parcels in the supply ship SS Vega in December 1944.
The German occupation of 1940–45 was harsh: over 2,000 islanders were deported by the Germans, and some Jews were sent to concentration camps; partisan resistance and retribution, accusations of collaboration, and slave labour also occurred. Many Spaniards, initially refugees from the Spanish Civil War, were brought to the islands to build fortifications. Later, Russians and Central Europeans continued the work. Many land mines were laid, with 65,718 land mines laid in Jersey alone.
There was no resistance movement in the Channel Islands on the scale of that in mainland France. This has been ascribed to a range of factors including the physical separation of the islands, the density of troops (up to one German for every two Islanders), the small size of the islands precluding any hiding places for resistance groups, and the absence of the Gestapo from the occupying forces. Moreover, much of the population of military age had already joined the British Army.
The end of the occupation came after VE-Day on 8 May 1945, with Jersey and Guernsey being liberated on 9 May. The German garrison in Alderney was left until 16 May, and it was one of the last of the Nazi German remnants to surrender. The first evacuees returned on the first sailing from Great Britain on 23 June, but the people of Alderney were unable to start returning until December 1945. Many of the evacuees who returned home had difficulty reconnecting with their families after five years of separation.
After 1945
Following the liberation of 1945, reconstruction led to a transformation of the economies of the islands, attracting immigration and developing tourism. The legislatures were reformed and non-party governments embarked on social programmes, aided by the incomes from offshore finance, which grew rapidly from the 1960s. The islands decided not to join the European Economic Community when the UK joined. Since the 1990s, declining profitability of agriculture and tourism has challenged the governments of the islands.
Flag gallery
Governance
The Channel Islands fall into two separate self-governing bailiwicks, the Bailiwick of Guernsey and the Bailiwick of Jersey. Each of these is a British Crown Dependency, and neither is a part of the United Kingdom. They have been parts of the Duchy of Normandy since the 10th century, and Queen Elizabeth II was often referred to by her traditional and conventional title of Duke of Normandy. However, pursuant to the Treaty of Paris (1259), she governed in her right as The Queen (the "Crown in right of Jersey", and the "Crown in right of the république of the Bailiwick of Guernsey"), and not as the Duke. This notwithstanding, it is a matter of local pride for monarchists to treat the situation otherwise: the Loyal toast at formal dinners was to 'The Queen, our Duke', rather than to 'Her Majesty, The Queen' as in the UK. The Queen died in 2022 and her son Charles III became the King.
A bailiwick is a territory administered by a bailiff. Although the words derive from a common root ('bail' = 'to give charge of') there is a vast difference between the meanings of the word 'bailiff' in Great Britain and in the Channel Islands; a bailiff in Britain is a court-appointed private debt-collector authorised to collect judgment debts, in the Channel Islands, the Bailiff in each bailiwick is the civil head, presiding officer of the States, and also head of the judiciary, and thus the most important citizen in the bailiwick.
In the early 21st century, the existence of governmental offices such as that of the Bailiff, with multiple roles straddling the different branches of government, came under increased scrutiny for apparent contravention of the doctrine of separation of powers, most notably in the Guernsey case of McGonnell -v- United Kingdom (2000) 30 EHRR 289. That case, following final judgement at the European Court of Human Rights, became part of the impetus for much recent constitutional change, particularly the Constitutional Reform Act 2005 (2005 c.4) in the UK, including the separation of the roles of the Lord Chancellor, the abolition of the House of Lords' judicial role, and the replacement of that role by the UK Supreme Court. The islands' bailiffs, however, still retain their historic roles.
The systems of government in the islands date from Norman times, which accounts for the names of the legislatures, the States, derived from the Norman 'États' or 'estates' (i.e. the Crown, the Church, and the people). The States have evolved over the centuries into democratic parliaments.
The UK Parliament has power to legislate for the islands, but Acts of Parliament do not extend to the islands automatically. Usually, an Act gives power to extend its application to the islands by an Order in Council, after consultation. For the most part the islands legislate for themselves. Each island has its own primary legislature, known as the States of Guernsey and the States of Jersey, with Chief Pleas in Sark and the States of Alderney. The Channel Islands are not represented in the UK Parliament. Laws passed by the States are given royal assent by The King in Council, to whom the islands' governments are responsible.
The islands have never been part of the European Union, and thus were not a party to the 2016 referendum on EU membership, but were part of the Customs Territory of the European Community by virtue of Protocol Three to the Treaty on European Union. In September 2010, a Channel Islands Brussels Office was set up jointly by the two Bailiwicks to develop the Channel Islands' influence with the EU, to advise the Channel Islands' governments on European matters, and to promote economic links with the EU.
Both bailiwicks are members of the British–Irish Council, and Jèrriais and Guernésiais are recognised regional languages of the islands.
The legal courts are separate; separate courts of appeal have been in place since 1961. Among the legal heritage from Norman law is the Clameur de haro. The basis of the legal systems of both Bailiwicks is Norman customary law (Coutume) rather than the English Common Law, although elements of the latter have become established over time.
Islanders are full British citizens, but were not classed as European citizens unless by descent from a UK national. Any British citizen who applies for a passport in Jersey or Guernsey receives a passport bearing the words "British Islands, Bailiwick of Jersey" or "British Islands, Bailiwick of Guernsey". Under the provisions of Protocol Three, Channel Islanders who did not have a close connection with the UK (no parent or grandparent from the UK, and never resident in the UK for any five-year period) did not automatically benefit from the EU provisions on free movement within the EU, and their passports received an endorsement to that effect. This affected only a minority of islanders.
Under the UK Interpretation Act 1978, the Channel Islands are deemed to be part of the British Islands, not to be confused with the British Isles. For the purposes of the British Nationality Act 1981, the "British Islands" include the United Kingdom (Great Britain and Northern Ireland), the Channel Islands and the Isle of Man, taken together, unless the context otherwise requires.
Economy
Tourism is still important. However, Jersey and Guernsey have, since the 1960s, become major offshore financial centres. Historically Guernsey's horticultural and greenhouse activities have been more significant than in Jersey, and Guernsey has maintained light industry as a higher proportion of its economy than Jersey. In Jersey, potatoes are an important export crop, shipped mostly to the UK.
Jersey is heavily reliant on financial services, which contributed 39.4% of Gross Value Added (GVA) in 2018. Rental income comes second at 15.1%, with other business activities at 11.2%. Tourism contributes 4.5%, agriculture just 1.2%, and manufacturing even less at 1.1%. GVA has fluctuated between £4.5 and £5 billion for 20 years.
Jersey's population has risen steadily, from below 90,000 in 2000 to over 105,000 in 2018, which, combined with a flat GVA, has resulted in GVA per head falling from £57,000 to £44,000. Guernsey's GDP was £3.2 billion in 2018; with a stable population of around 66,000, the island has had a steadily rising GDP, and its GVA per head surpassed £52,000 in 2018.
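As a rough arithmetic check (a sketch only: the year-specific values of about £5.0 billion of GVA and 88,000 people for 2000, and about £4.6 billion and 105,000 people for 2018, are assumptions consistent with the ranges stated above), the per-head figures follow from dividing GVA by population:

\[
\text{GVA per head} = \frac{\text{GVA}}{\text{population}}, \qquad
\frac{\pounds 5.0\,\text{bn}}{88{,}000} \approx \pounds 57{,}000 \ \text{(2000)}, \qquad
\frac{\pounds 4.6\,\text{bn}}{105{,}000} \approx \pounds 44{,}000 \ \text{(2018)}.
\]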
Both bailiwicks issue their own banknotes and coins, which circulate freely in all the islands alongside UK coinage and Bank of England and Scottish banknotes.
Transport and communications
Post
Since 1969, Jersey and Guernsey have operated postal administrations independently of the UK's Royal Mail, with their own postage stamps, which can be used for postage only in their respective Bailiwicks. UK stamps are no longer valid, but mail to the islands, and to the Isle of Man, is charged at UK inland rates. It was not until the early 1990s that the islands joined the UK's postcode system, Jersey postcodes using the initials JE and Guernsey GY.
Transport
Road
Each of the three largest islands has a distinct vehicle registration scheme:
Guernsey (GBG): a number of up to five digits;
Jersey (GBJ): J followed by up to six digits (JSY vanity plates are also issued);
Alderney (GBA): AY followed by up to five digits (four digits are the most that have been used, as redundant numbers are re-issued).
In Sark, where most motor traffic is prohibited, the few vehicles – nearly all tractors – do not display plates. Bicycles display tax discs.
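The registration formats described above can also be summarised informally as patterns. The following is a minimal sketch, assuming plates are compared without spaces and case-insensitively; it covers only the basic formats listed here, not JSY vanity plates or any further official rules.

```python
import re

# Illustrative patterns inferred from the formats described above (assumptions,
# not official specifications): Guernsey plates are plain numbers of up to five
# digits, Jersey plates are "J" plus up to six digits, and Alderney plates are
# "AY" plus up to five digits.
PLATE_PATTERNS = {
    "Guernsey": re.compile(r"^\d{1,5}$"),
    "Jersey": re.compile(r"^J\d{1,6}$"),
    "Alderney": re.compile(r"^AY\d{1,5}$"),
}

def plate_island(plate: str):
    """Return the island whose basic format the plate matches, or None."""
    normalised = plate.replace(" ", "").upper()
    for island, pattern in PLATE_PATTERNS.items():
        if pattern.match(normalised):
            return island
    return None

print(plate_island("12345"))     # Guernsey
print(plate_island("J 123456"))  # Jersey
print(plate_island("AY 1234"))   # Alderney
```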
Sea
In the 1960s, names used for the cross-Channel ferries plying the mail route between the islands and Weymouth, Dorset, were taken from the popular Latin names for the islands: Caesarea (Jersey), Sarnia (Guernsey) and Riduna (Alderney). Fifty years later, the ferry route between the Channel Islands and the UK is operated by Condor Ferries from both St Helier, Jersey and St Peter Port, Guernsey, using high-speed catamaran fast craft to Poole in the UK. A regular ferry service on the Commodore Clipper goes from both Channel Island ports to Portsmouth daily, carrying both passengers and freight.
Ferry services to Normandy are operated by Manche Îles Express, and services between Jersey and Saint-Malo are operated by Compagnie Corsaire and Condor Ferries.
The Isle of Sark Shipping Company operates small ferries to Sark.
Normandy Trader operates an ex-military tank landing craft for transporting freight between the islands and France.
On 20 August 2013, Huelin-Renouf, which had operated a "lift-on lift-off" container service for 80 years between the Port of Southampton and the Port of Jersey, ceased trading. Senator Alan Maclean, a Jersey politician, had previously tried, to no avail, to save the 90-odd jobs furnished by the company. On 20 September, it was announced that Channel Island Lines would continue this service and would purchase the MV Huelin Dispatch from Associated British Ports, which in turn had purchased the vessel from the receiver in the bankruptcy. The new operator was to be funded by Rockayne Limited, a closely held association of Jersey businesspeople.
Air
There are three airports in the Channel Islands: Alderney Airport, Guernsey Airport and Jersey Airport. They are directly connected to each other by services operated by Blue Islands and Aurigny.
Rail
Historically, there have been railway networks on Jersey, Guernsey, and Alderney, but all of the lines on Jersey and Guernsey have been closed and dismantled. Today there are three working railways in the Channel Islands, of which the Alderney Railway is the only one providing a regular timetabled passenger service. The other two are a miniature railway, also on Alderney, and the heritage steam railway operated on Jersey as part of the Pallot Heritage Steam Museum.
Media
The Channel Islands are served by a number of local radio services – BBC Radio Jersey and BBC Radio Guernsey, Channel 103 and Island FM – as well as regional television news opt-outs from BBC Channel Islands and ITV Channel Television.
On 1 August 2021, DAB+ digital radio became available for the first time, introducing new stations like the local Bailiwick Radio and Soleil Radio, and UK-wide services like Capital, Heart, and Times Radio.
There are two broadcast transmitters serving Jersey – at Frémont Point and Les Platons – as well as one at Les Touillets in Guernsey and a relay in Alderney.
There are several local newspapers, including the Guernsey Press and the Jersey Evening Post, as well as magazines.
Telephone
Jersey has always operated its own telephone services independently of Britain's national system; Guernsey established its own telephone service in 1968. Both islands still form part of the British telephone numbering plan, but Ofcom on the mainland does not have responsibility for telecommunications regulatory and licensing issues on the islands. It is responsible for wireless telegraphy licensing throughout the islands and, by agreement, for broadcasting regulation in the two large islands only. Submarine cables connect the various islands and provide connectivity with England and France.
Internet
Modern broadband speeds are available in all the islands, including full-fibre (FTTH) in Jersey (offering speeds of up to 1 Gbps on all broadband connections) and, in Guernsey, VDSL together with fibre connectivity for some businesses and homes. Providers include Sure and JT.
The two Bailiwicks each have their own internet domain, .GG (Guernsey, Alderney, Sark) and .JE (Jersey), which are managed by channelisles.net.
Culture
The Norman language predominated in the islands until the nineteenth century, when increasing influence from English-speaking settlers and easier transport links led to Anglicisation. There are four main dialects/languages of Norman in the islands: Auregnais (Alderney, extinct in the late twentieth century), Dgèrnésiais (Guernsey), Jèrriais (Jersey) and Sercquiais (Sark, an offshoot of Jèrriais).
Victor Hugo spent many years in exile, first in Jersey and then in Guernsey, where he finished Les Misérables. Guernsey is the setting of Hugo's later novel Les Travailleurs de la Mer (Toilers of the Sea). A "Guernsey-man" also makes an appearance in chapter 91 of Herman Melville's Moby-Dick.
The annual "Muratti", the inter-island football match, is considered the sporting event of the year, although, due to broadcast coverage, it no longer attracts the crowds of spectators, travelling between the islands, that it did during the twentieth century.
Cricket is popular in the Channel Islands. The Jersey cricket team and the Guernsey cricket team are both associate members of the International Cricket Council. The teams have played each other in the inter-insular match since 1957. In 2001 and 2002, the Channel Islands entered a team into the MCCA Knockout Trophy, the one-day tournament of the minor counties of English and Welsh cricket.
Channel Island sportsmen and women compete in the Commonwealth Games for their respective islands and the islands have also been enthusiastic supporters of the Island Games. Shooting is a popular sport, in which islanders have won Commonwealth medals.
Guernsey's traditional colour for sporting and other purposes is green and Jersey's is red.
The inhabitants of the main islands have traditional animal nicknames:
Guernsey: les ânes ("donkeys" in French and Norman): the steepness of St Peter Port streets required beasts of burden, but Guernsey people also claim it is a symbol of their strength of character, which Jersey people traditionally interpret as stubbornness.
Jersey: les crapauds ("toads" in French and Jèrriais): Jersey has toads and snakes, which Guernsey lacks.
Sark: les corbins ("crows" in Sercquiais, Dgèrnésiais and Jèrriais, les corbeaux in French): crows could be seen from the sea on the island's coast.
Alderney: les lapins ("rabbits" in French and Auregnais): the island is noted for its warrens.
Religion
Christianity was brought to the islands around the sixth century; according to tradition, Jersey was evangelised by St Helier, Guernsey by St Samson of Dol, and the smaller islands were occupied at various times by monastic communities representing strands of Celtic Christianity. At the Reformation, the previously Catholic islands converted to Calvinism under the influence of an influx of French-language pamphlets published in Geneva. Anglicanism was imposed in the seventeenth century, but the Non-Conformist local tendency returned with a strong adoption of Methodism. In the late twentieth century, a strong Catholic presence re-emerged with the arrival of numerous Portuguese workers (both from mainland Portugal and the island of Madeira). Their numbers have been reinforced by recent migrants from Poland and elsewhere in Eastern Europe. Today, Evangelical churches have been established. Services are held in a number of languages.
According to 2015 statistics, 39% of the population was non-religious.
Other islands in the English Channel
A number of islands in the English Channel are part of France. Among these are Bréhat, Île de Batz, Chausey, Tatihou and the Îles Saint-Marcouf.
The Isle of Wight, which is part of England, lies just off the coast of Great Britain, between the Channel and the Solent.
Hayling and Portsea islands are also part of England, and thus of the United Kingdom.
See also
German occupation of the Channel Islands
List of churches, chapels and meeting halls in the Channel Islands
Places named after the Channel Islands
External links
States of Alderney
States of Guernsey
States of Jersey
Government of Sark
5645 | https://en.wikipedia.org/wiki/Cult%20film | Cult film | A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase which forms an elaborate subculture, members of which engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that.
Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge.
Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection.
Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are frequently stated to be an "instant cult classic" now, occasionally before they are released. Fickle fans on the Internet have latched on to unreleased films only to abandon them later on release. At the same time, other films have acquired massive, quick cult followings, owing to spreading virally through social media. Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films.
Definition
A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films. Some definitions exclude films that have been released by major studios or have big budgets, that try specifically to become cult films, or become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content. This may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or "elastic, a catchall for anything slightly maverick or strange". Academic Mark Shiel has criticized the term itself as being a weak concept, reliant on subjectivity; different groups can interpret films in their own terms. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as a cult film. Academic Mike Chopra‑Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification.
In 2008, Cineaste asked a range of academics for their definition of a cult film. Several people defined cult films primarily in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given the demographic appeal to conventional moviegoers and mainstreaming of cult films. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions also required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to, while Jonathan Rosenbaum rejected the continued existence of cult films and called the term a marketing buzzword. Mathijs suggests that cult films help to understand ambiguity and incompleteness in life given the difficulty in even defining the term. That cult films can have opposing qualities – such as good and bad, failure and success, innovative and retro – helps to illustrate that art is subjective and never self-evident. This ambiguity leads critics of postmodernism to accuse cult films of being beyond criticism, as the emphasis is now on personal interpretation rather than critical analysis or metanarratives. These inherent dichotomies can lead audiences to be split between ironic and earnest fans.
Writing in Defining Cult Movies, Jancovich et al. quote academic Jeffrey Sconce, who defines cult films in terms of paracinema, marginal films that exist outside critical and cultural acceptance: everything from exploitation to beach party musicals to softcore pornography. However, they reject cult films as having a single unifying feature; instead, they state that cult films are united in their "subcultural ideology" and opposition to mainstream tastes, itself a vague and undefinable term. Cult followings themselves can range from adoration to contempt, and they have little in common except for their celebration of nonconformity – even the bad films ridiculed by fans are artistically nonconformist, albeit unintentionally. At the same time, they state that bourgeois, masculine tastes are frequently reinforced, which makes cult films more of an internal conflict within the bourgeoisie, rather than a rebellion against it. This results in an anti-academic bias despite the use of formal methodologies, such as defamiliarization. This contradiction exists in many subcultures, especially those dependent on defining themselves in terms of opposition to the mainstream. This nonconformity is eventually co-opted by the dominant forces, such as Hollywood, and marketed to the mainstream. Academic Xavier Mendik also defines cult films as opposing the mainstream and further proposes that films can become cult by virtue of their genre or content, especially if it is transgressive. Due to their rejection of mainstream appeal, Mendik says cult films can be more creative and political; times of relative political instability produce more interesting films.
General overview
Cult films have existed since the early days of cinema. Film critic Harry Allan Potamkin traces them back to 1910s France and the reception of Pearl White, William S. Hart, and Charlie Chaplin, which he described as "a dissent from the popular ritual". Nosferatu (1922) was an unauthorized adaptation of Bram Stoker's Dracula. Stoker's widow sued the production company and drove it to bankruptcy. All known copies of the film were destroyed, and Nosferatu became an early cult film, kept alive by a cult following that circulated illegal bootlegs. Academic Chuck Kleinhans identifies the Marx Brothers as making other early cult films. On their original release, some highly regarded classics from the Golden Age of Hollywood were panned by critics and audiences and relegated to cult status. The Night of the Hunter (1955) was a cult film for years, quoted often and championed by fans, before it was reassessed as an important and influential classic. During this time, American exploitation films and imported European art films were marketed similarly. Although critics Pauline Kael and Arthur Knight argued against arbitrary divisions into high and low culture, American films settled into rigid genres; European art films continued to push the boundaries of simple definitions, and these exploitative art films and artistic exploitation films would go on to influence American cult films. Much like later cult films, these early exploitation films encouraged audience participation, influenced by live theater and vaudeville.
Modern cult films grew from 1960s counterculture and underground films, popular among those who rejected mainstream Hollywood films. These underground film festivals led to the creation of midnight movies, which attracted cult followings. The term cult film itself was an outgrowth of this movement and was first used in the 1970s, though cult had been in use for decades in film analysis with both positive and negative connotations. These films were more concerned with cultural significance than the social justice sought by earlier avant-garde films. Midnight movies became more popular and mainstream, peaking with the release of The Rocky Horror Picture Show (1975), which finally found its audience several years after its release. Eventually, the rise of home video would marginalize midnight movies once again, after which many directors joined the burgeoning independent film scene or went back underground. Home video would give a second life to box-office flops, as positive word-of-mouth or excessive replay on cable television led these films to develop an appreciative audience, as well as obsessive replay and study. For example, The Beastmaster (1982), despite its failure at the box office, became one of the most played movies on American cable television and developed into a cult film. Home video and television broadcasts of cult films were initially greeted with hostility. Joanne Hollows states that they were seen as turning cult films mainstream – in effect, feminizing them by opening them to distracted, passive audiences.
Releases from major studios – such as The Big Lebowski (1998), which was distributed by Universal Studios – can become cult films when they fail at the box office and develop a cult following through reissues, such as midnight movies, festivals, and home video. Hollywood films, due to their nature, are more likely to attract this kind of attention, which leads to a mainstreaming effect of cult culture. With major studios behind them, even financially unsuccessful films can be re-released multiple times, which plays into a trend to capture audiences through repetitious reissues. The constant use of profanity and drugs in otherwise mainstream, Hollywood films, such as The Big Lebowski, can alienate critics and audiences yet lead to a large cult following among more open-minded demographics not often associated with cult films, such as Wall Street bankers and professional soldiers. Thus, even comparatively mainstream films can satisfy the traditional demands of a cult film, perceived by fans as transgressive, niche, and uncommercial. Discussing his reputation for making cult films, Bollywood director Anurag Kashyap said, "I didn't set out to make cult films. I wanted to make box-office hits." Writing in Cult Cinema, academics Ernest Mathijs and Jamie Sexton state that this acceptance of mainstream culture and commercialism is not out of character, as cult audiences have a more complex relationship to these concepts: they are more opposed to mainstream values and excessive commercialism than they are anything else.
In a global context, popularity can vary widely by territory, especially with regard to limited releases. Mad Max (1979) was an international hit – except in America where it became an obscure cult favorite, ignored by critics and available for years only in a dubbed version though it earned over $100M internationally. Foreign cinema can put a different spin on popular genres, such as Japanese horror, which was initially a cult favorite in America. Asian imports to the West are often marketed as exotic cult films and of interchangeable national identity, which academic Chi-Yun Shin criticizes as reductive. Foreign influence can affect fan response, especially on genres tied to a national identity; when they become more global in scope, questions of authenticity may arise. Filmmakers and films ignored in their own country can become the objects of cult adoration in another, producing perplexed reactions in their native country. Cult films can also establish an early viability for more mainstream films both for filmmakers and national cinema. The early cult horror films of Peter Jackson were so strongly associated with his homeland that they affected the international reputation of New Zealand and its cinema. As more artistic films emerged, New Zealand was perceived as a legitimate competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy.
Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed as marathons where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's A Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows flaunting society's taboos and testing one's fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect.
Despite their oppositional nature, cult films can produce celebrities. Like cult films themselves, authenticity is an important aspect of their popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, found casting difficult after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s. Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels. Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema if she was even mentioned. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film again. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. Qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years.
Transgression and censorship
Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as "downtown culture". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen.
According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003). Critics can also polarize audiences and lead debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit On Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films.
Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both subject matter and its depiction are portrayed in extreme ways that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. In marketing these films, young men are predominantly targeted. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism.
Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with their own interpretation and reasons for appreciating it. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, they were not trying to establish a reputation. They were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability. Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported.
The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called "video nasties" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC which removed unsimulated animal killings, which limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collection with imports and bootlegs. Cult films previously banned are sometimes released with much fanfare and the fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films.
Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read's expansion states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach which avoids and subverts the male gaze and traditional goal-oriented methods. Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – were repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture.
Subcultural appeal and fandom
Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters.
A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance to American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital. Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films which appeal to stereotypical male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the "extreme" nature of the film, which increases the appeal to youth subcultures fond of extreme sports.
Matt Hills' concept of the "cult blockbuster" involves cult followings inside larger, mainstream films. Although these are big budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with their own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film was an instant hit when released, it has also developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and quotable dialogue have drawn a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream.
Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that to bolster their arguments, he is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to embellish the filmmakers' reception as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow.
Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a "sub-subculture", a variant that outlived its parent subculture. Although often described as primarily composed of obsessed fans, cult film fandom can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom. If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, as well as theatrical screenings being privileged by the media and fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fan works are often unsanctioned, transformative, and ignore fictional canon.
Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. Thus, fandom can keep the mainstream at bay while defining themselves in terms of the "Other", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of "otherness" and thus valid to consume: consumers purchasing independent or niche publications are discerning consumers, but the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity.
Types
"So bad it's good"
The critic Michael Medved characterized examples of the "so bad it's good" class of low-budget cult film through books such as The Golden Turkey Awards. These films include financially fruitless and critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), Mommie Dearest (1981), The Room (2003), and the Ugandan action comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation of films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy. Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as "weird for weirdness' sake" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label "so bad it's good" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, "so bad it's good" films, known as nanars, have given rise to a subculture with dedicated websites such as Nanarland, film festivals and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand films has led critics to question whether "so bad it's good" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can lead to lucrative showings for local theaters and merchandisers.
Camp and guilty pleasures
Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistic-minded films. This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic.
Nostalgia
According to academic Brigid Cherry, nostalgia "is a strong element of certain kinds of cult appeal." When Veoh added many cult films to their site, they cited nostalgia as a factor for their popularity. Academic I. Q. Hunter describes cult films as "New Hollywood in extremis" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of the time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s. Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading off of nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in its popularity among midnight movie crowds.
Midnight movies
Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in their anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a night life and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they not only satirically attacked society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box-office flops, such as Donnie Darko (2001), The Warriors (1979) and Office Space (1999). Modern midnight movies retain their popularity and have been strongly diverging from mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still bring reliable crowds.
Art and exploitation
Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art. Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, this is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are "the most masculinized, youth-oriented, populist, and openly pornographic softcore area." The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision saw him received as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as "crass, coarse, and camp ... perfect fodder for a cult following." "Sick films", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas.
B and genre films
Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize their non-mainstream and less commercial aspects. B films, which are often conflated with exploitation, are as important to cult films as exploitation. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeal to cult audiences through nostalgia, camp, and spectacle, to the more non-traditional, such as Cry-Baby (1990), which parodies musicals, and Rocky Horror, which uses a rock soundtrack. The romantic fairy tale The Princess Bride (1987) failed to attract audiences in its original release, as the studio did not know how to market it. The freedom and excitement associated with cars can be an important part of drawing cult film fans to genre films, and cars can signify action and danger with more ambiguity than a gun. Ad Week writes that cult B films, when released on home video, market themselves and need only enough advertising to raise curiosity or nostalgia.
Animation
Animation can provide wide open vistas for stories. The French film Fantastic Planet (1973) explored ideas beyond the limits of traditional, live-action science fiction films. Ralph Bakshi's career has been marked with controversy: Fritz the Cat (1972), the first animated film to be rated "X" by the MPAA, provoked outrage for its racial caricatures and graphic depictions of sex, and Coonskin (1975) was decried as racist. Bakshi recalls that older animators had tired of "kid stuff" and desired edgier work, whereas younger animators hated his work for "destroying the Disney images". Eventually, his work would be reassessed and cult followings, which include Quentin Tarantino and Robert Rodriguez, developed around several of his films. Heavy Metal (1981) faced similar denunciations from critics. Donald Liebenson of the Los Angeles Times cites the violence and sexual imagery as alienating critics, who did not know what to make of the film. It would go on to become a popular midnight movie and frequently bootlegged by fans, as licensing issues kept it from being released on video for many years.
Phil Hoad of The Guardian identifies Akira (1988) as introducing violent, adult Japanese animation (known as anime) to the West and paving the way for later works. Anime, according to academic Brian Ruh, is not a cult genre, but the lack of individual fandoms within anime fandom itself allows cult attention to bleed over between works and can help spread them internationally. Anime, which is frequently presented as a series (with movies either arising from an existing series or spinning off a series based on the film), provides its fans with alternative fictional canons and points of view that can drive fan activity. The Ghost in the Shell films, for example, provided Japanese fans with enough bonus material and spinoffs that it encouraged cult tendencies. Markets that did not support the sale of these materials saw less cult activity. The claymation film Gumby: The Movie (1995), which made only $57,100 at the box office against its $2.8 million budget but sold a million copies on VHS alone, was subsequently released on DVD and remastered in high definition for Blu-ray due to its strong cult following. As with many cult films, RiffTrax produced its own humorous audio commentary for Gumby: The Movie in 2021.
Nonfiction
Sensationalistic documentaries called mondo films replicate the most shocking and transgressive elements of exploitation films. They are usually modeled after "sick films" and cover similar subject matter. In The Cult Film Reader, academics Mathijs and Mendik write that these documentaries often present non-Western societies as "stereotypically mysterious, seductive, immoral, deceptive, barbaric or savage". Though they can be interpreted as racist, Mathijs and Mendik state that they also "exhibit a liberal attitude towards the breaking of cultural taboos". Mondo films like Faces of Death mix real and fake footage freely, and they gain their cult following through the outrage and debate over authenticity that results. Like "so bad it's good" cult films, old propaganda and government hygiene films may be enjoyed ironically by more modern audiences for the camp value of their outdated themes and the outlandish claims made about perceived social threats, such as drug use. Academic Barry K. Grant states that Frank Capra's Why We Fight World War II propaganda films are explicitly not cult, because they are "slickly made and have proven their ability to persuade an audience." The sponsored film Mr. B Natural became a cult hit when it was broadcast on the satirical television show Mystery Science Theater 3000; cast member Trace Beaulieu cited these educational shorts as his favorite material to mock on the show. Mark Jancovich states that cult audiences are drawn to these films because of the "very banality or incoherence of their political positions", unlike traditional cult films, which achieve popularity through auteurist radicalism.
Mainstream popularity
Mark Shiel explains the rising popularity of cult films as an attempt by cinephiles and scholars to escape the oppressive conformity and mainstream appeal of even independent film; he also credits the lack of condescension in both the films and their critics. Academic Donna de Ville says it is a chance to subvert the dominance of academics and cinephiles. According to Xavier Mendik, "academics have been really interested in cult movies for quite a while now." Mendik has sought to bring together academic interest and fandom through Cine-Excess, a film festival. I. Q. Hunter states that "it's much easier to be a cultist now, but it is also rather more inconsequential." Citing the mainstream availability of Cannibal Holocaust, Jeffrey Sconce rejects definitions of cult films based on controversy and excess, as they have now become meaningless. Cult films have influenced such diverse industries as cosmetics, music videos, and fashion. Cult films have also shown up in less expected places: as a sign of his popularity, a bronze statue of Ed Wood has been proposed in his hometown, and L'Osservatore Romano, the official newspaper of the Holy See, has courted controversy for its endorsement of cult films and pop culture. When cities attempt to renovate neighborhoods, fans have called attempts to demolish iconic settings from cult films "cultural vandalism". Cult films can also drive tourism, even when it is unwanted. From Latin America, Alejandro Jodorowsky's film El Topo (1970) attracted the attention of rock musicians such as John Lennon, Mick Jagger, and Bob Dylan.
As far back as the 1970s, Attack of the Killer Tomatoes (1978) was designed specifically to be a cult film, and The Rocky Horror Picture Show was produced by 20th Century Fox, a major Hollywood studio. Over its decades-long release, Rocky Horror became the seventh-highest-grossing R-rated film when adjusted for inflation; journalist Matt Singer has questioned whether Rocky Horror's popularity invalidates its cult status. Founded in 1974, Troma Entertainment, an independent studio, would become known for both its cult following and its cult films. In the 1980s, Danny Peary's Cult Movies (1981) would influence director Edgar Wright and film critic Scott Tobias of The A.V. Club. The rise of home video would have a mainstreaming effect on cult films and cultish behavior, though some collectors would be unlikely to self-identify as cult film fans. Film critic Joe Bob Briggs began reviewing drive-in theater and cult films, though he faced much criticism as an early advocate of exploitation and cult films. Briggs highlights the mainstreaming of cult films by pointing out the respectful obituaries that cult directors have received from formerly hostile publications and the acceptance of politically incorrect films at mainstream film festivals. This acceptance is not universal, though, and some critics have resisted this mainstreaming of paracinema. Beginning in the 1990s, director Quentin Tarantino would have the greatest success in turning cult films mainstream. Tarantino later used his fame to champion obscure cult films that had influenced him and set up the short-lived Rolling Thunder Pictures, which distributed several of his favorite cult films. Tarantino's clout led Phil Hoad of The Guardian to call him the world's most influential director.
As major Hollywood studios and audiences both become savvy to cult films, productions once limited to cult appeal have instead become popular hits, and cult directors have become hot properties known for more mainstream and accessible films. Remarking on the popular trend of remaking cult films, Claude Brodesser-Akner of New York magazine states that Hollywood studios have been superstitiously hoping to recreate past successes rather than trading on nostalgia. Their popularity has led some critics to proclaim the death of cult films now that they have finally become successful and mainstream, are too slick to attract a proper cult following, lack context, or are too easily found online. In response, David Church says that cult film fans have retreated to more obscure and difficult-to-find films, often using illegal distribution methods, which preserves the outlaw status of cult films. Virtual spaces, such as online forums and fan sites, replace the traditional fanzines and newsletters. Cult film fans consider themselves collectors, rather than consumers, as they associate consumers with mainstream, Hollywood audiences. This collecting can take the place of the fetishization of a single film. Addressing concerns that DVDs have revoked the cult status of films like Rocky Horror, academic Mikel J. Koven states that small-scale screenings with friends and family can replace midnight showings. Koven also identifies television shows, such as Twin Peaks, as retaining more traditional cult activities inside popular culture. David Lynch himself has not ruled out another television series, though studios have become reluctant to take chances on non-mainstream ideas. Despite this, the Alamo Drafthouse has capitalized on cult films and the surrounding culture through inspiration drawn from Rocky Horror and retro promotional gimmickry. They sell out their shows regularly and have acquired a cult following of their own.
Academic Bob Batchelor, writing in Cult Pop Culture, states that the internet has democratized cult culture and destroyed the line between cult and mainstream. Fans of even the most obscure films can communicate online with each other in vibrant communities. Although known for their big-budget blockbusters, Steven Spielberg and George Lucas have criticized the current Hollywood system of gambling everything on the opening weekend of these productions. Geoffrey Macnab of The Independent instead suggests that Hollywood look to capitalize on cult films, which have exploded in popularity on the internet. The rise of social media has been a boon to cult films. Sites such as Twitter have displaced traditional venues for fandom and courted controversy from cultural critics who are unamused by campy cult films. After a clip from one of his films went viral, director-producer Roger Corman made a distribution deal with YouTube. Found footage which had originally been distributed as cult VHS collections eventually went viral on YouTube, which opened them to new generations of fans. Films such as Birdemic (2008) and The Room (2003) gained quick, massive popularity, as prominent members of social networking sites discussed them. Their rise as "instant cult classics" bypasses the years of obscurity that most cult films labor under. In response, critics have described the use of viral marketing as astroturfing and an attempt to manufacture cult films.
I. Q. Hunter identifies a prefabricated cult film style which includes "deliberately, insultingly bad films", "slick exercises in dysfunction and alienation", and mainstream films "that sell themselves as worth obsessing over". Writing for NPR, Scott Tobias states that Don Coscarelli, whose previous films effortlessly attracted cult followings, has drifted into this realm. Tobias criticizes Coscarelli as trying too hard to appeal to cult audiences and sacrificing internal consistency for calculated quirkiness. Influenced by the successful online hype of The Blair Witch Project (1999), other films have attempted to draw online cult fandom with the use of prefabricated cult appeal. Snakes on a Plane (2006) is an example that attracted massive attention from curious fans. Uniquely, its cult following preceded the film's release and included speculative parodies of what fans imagined the film might be. This reached the point of convergence culture when fan speculation began to influence the film's production. Although it was proclaimed a cult film and major game-changer before it was released, it failed either to win over mainstream audiences or to maintain its cult following. In retrospect, critic Spencer Kornhaber would call it a serendipitous novelty and a footnote to a "more naive era of the Internet". However, it became influential in both marketing and titling. This trend of "instant cult classics" which are hailed yet fail to attain a lasting following is described by Matt Singer, who states that the phrase is an oxymoron.
Cult films are often approached in terms of auteur theory, which states that the director's creative vision drives a film. This has fallen out of favor in academia, creating a disconnect between cult film fans and critics. Matt Hills states that auteur theory can help to create cult films; fans that see a film as continuing a director's creative vision are likely to accept it as cult. According to academic Greg Taylor, auteur theory also helped to popularize cult films when middlebrow audiences found an accessible way to approach avant-garde film criticism. Auteur theory provided an alternative culture for cult film fans while carrying the weight of scholarship. By requiring repeated viewings and extensive knowledge of details, auteur theory naturally appealed to cult film fans. Taylor further states that this was instrumental in allowing cult films to break through to the mainstream. Academic Joe Tompkins states that this auteurism is often highlighted when mainstream success occurs. This may take the place of – and even ignore – political readings of the director. Cult films and directors may be celebrated for their transgressive content, daring, and independence, but Tompkins argues that mainstream recognition requires they be palatable to corporate interests who stand to gain much from the mainstreaming of cult film culture. While critics may champion revolutionary aspects of filmmaking and political interpretation, Hollywood studios and other corporate interests will instead highlight only the aspects that they wish to legitimize in their own films, such as sensational exploitation. Someone like George Romero, whose films are both transgressive and subversive, will have the transgressive aspects highlighted while the subversive aspects are ignored.
See also
Cult video game
List of cult films
Sleeper hit
Mark Kermode's Secrets of Cinema: Cult Movies
List of cult television shows
Constantinople
Constantinople (see other names) became the capital of the Roman Empire during the reign of Constantine the Great in 330. Following the collapse of the Western Roman Empire in the late 5th century, Constantinople remained the capital of the Eastern Roman Empire (also known as the Byzantine Empire; 330–1204 and 1261–1453), the Latin Empire (1204–1261), and the Ottoman Empire (1453–1922). Following the Turkish War of Independence, the Turkish capital moved to Ankara. Officially renamed Istanbul in the 1920s, the city is today the largest city and financial centre of Turkey and the largest city in Europe, straddling the Bosporus strait and lying in both Europe and Asia.
In 324, after the Western and Eastern Roman Empires were reunited, the ancient city of Byzantium was selected to serve as the new capital of the Roman Empire, and the city was renamed Nova Roma, or 'New Rome', by Emperor Constantine the Great. On 11 May 330, it was renamed Constantinople and dedicated to Constantine. Constantinople is generally considered to be the center and the "cradle of Orthodox Christian civilization". From the mid-5th century to the early 13th century, Constantinople was the largest and wealthiest city in Europe. The city became famous for its architectural masterpieces, such as Hagia Sophia, the cathedral of the Eastern Orthodox Church, which served as the seat of the Ecumenical Patriarchate; the sacred Imperial Palace, where the emperors lived; the Hippodrome; the Golden Gate of the Land Walls; and opulent aristocratic palaces. The University of Constantinople was founded in the 5th century, and the city contained artistic and literary treasures before it was sacked in 1204 and 1453, including its vast Imperial Library, which held the remnants of the Library of Alexandria and some 100,000 volumes. The city was the home of the Ecumenical Patriarch of Constantinople and guardian of Christendom's holiest relics, such as the Crown of Thorns and the True Cross.
Constantinople was famous for its massive and complex fortifications, which ranked among the most sophisticated defensive architecture of antiquity. The Theodosian Walls consisted of a double wall lying to the west of the first wall, with a moat and palisades in front. Constantinople's location between the Golden Horn and the Sea of Marmara reduced the land area that needed defensive walls. The city was built intentionally to rival Rome, and it was claimed that several elevations within its walls matched Rome's 'seven hills'. The formidable defenses enclosed magnificent palaces, domes, and towers, the result of the prosperity Constantinople achieved as the gateway between two continents (Europe and Asia) and two seas (the Mediterranean and the Black Sea). Although besieged on numerous occasions by various armies, the defenses of Constantinople proved impenetrable for nearly nine hundred years.
In 1204, however, the armies of the Fourth Crusade took and devastated the city, and for several decades, its inhabitants resided under Latin occupation in a dwindling and depopulated city. In 1261 the Byzantine Emperor Michael VIII Palaiologos liberated the city, and after the restoration under the Palaiologos dynasty, it enjoyed a partial recovery. With the advent of the Ottoman Empire in 1299, the Byzantine Empire began to lose territories, and the city began to lose population. By the early 15th century, the Byzantine Empire was reduced to just Constantinople and its environs, along with Morea in Greece, making it an enclave inside the Ottoman Empire. The city was finally besieged and conquered by the Ottoman Empire in 1453, remaining under its control until the early 20th century, after which it was renamed Istanbul under the Empire's successor state, Turkey.
Names
Before Constantinople
According to Pliny the Elder in his Natural History, the first known name of a settlement on the site of Constantinople was Lygos, a settlement likely of Thracian origin founded between the 13th and 11th centuries BC. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (Byzántion) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus.
The origins of the name of Byzantion, more commonly known by the later Latin Byzantium, are not entirely clear, though some suggest it is of Thracian origin. The founding myth of the city has it that the settlement was named after the leader of the Megarian colonists, Byzas. The later Byzantines of Constantinople themselves would maintain that the city was named in honor of two men, Byzas and Antes, though this was more likely just a play on the word Byzantion.
The city was briefly renamed Augusta Antonina in the early 3rd century AD by the Emperor Septimius Severus (193–211), who razed the city to the ground in 196 for supporting a rival contender in the civil war and had it rebuilt in honor of his son Marcus Aurelius Antoninus (who succeeded him as Emperor), popularly known as Caracalla. The name appears to have been quickly forgotten and abandoned, and the city reverted to Byzantium/Byzantion after either the assassination of Caracalla in 217 or, at the latest, the fall of the Severan dynasty in 235.
Names of Constantinople
Byzantium took on the name of Constantinople (Greek: Κωνσταντινούπολις, romanized: Kōnstantinoupolis; "city of Constantine") after its refoundation under Roman emperor Constantine I, who transferred the capital of the Roman Empire to Byzantium in 330 and designated his new capital officially as Nova Roma, 'New Rome'. During this time, the city was also called 'Second Rome', 'Eastern Rome', and Roma Constantinopolitana (Latin for 'Constantinopolitan Rome'). As the city became the sole remaining capital of the Roman Empire after the fall of the West, and its wealth, population, and influence grew, the city also came to have a multitude of nicknames.
As the largest and wealthiest city in Europe during the 4th–13th centuries and a center of culture and education of the Mediterranean basin, Constantinople came to be known by prestigious titles such as Basileuousa (Queen of Cities) and Megalopolis (the Great City) and was, in colloquial speech, commonly referred to as just Polis ('the City') by Constantinopolitans and provincial Byzantines alike.
In the language of other peoples, Constantinople was referred to just as reverently. The medieval Vikings, who had contacts with the empire through their expansion in eastern Europe (Varangians), used the Old Norse name Miklagarðr (from mikill 'big' and garðr 'city'), and later Miklagard and Miklagarth. In Arabic, the city was sometimes called Rūmiyyat al-Kubra (Great City of the Romans) and in Persian as Takht-e Rum (Throne of the Romans).
In East and South Slavic languages, including in Kievan Rus', Constantinople has been referred to as Tsargrad (Царьград) or Carigrad, 'City of the Caesar (Emperor)', from the Slavonic words tsar ('Caesar' or 'King') and grad ('city'). This was presumably a calque on a Greek phrase such as Vasileos Polis, 'the city of the emperor [king]'.
In Persian the city was also called Asitane (the Threshold of the State), and in Armenian, it was called Gosdantnubolis (City of Constantine).
Modern names of the city
The modern Turkish name for the city, İstanbul, derives from the Greek phrase eis tin Polin, meaning '(in)to the city'. This name was used in colloquial speech in Turkish alongside Kostantiniyye, the more formal adaptation of the original Constantinople, during the period of Ottoman rule, while western languages mostly continued to refer to the city as Constantinople until the early 20th century. In 1928, the Turkish alphabet was changed from Arabic script to Latin script. After that, as part of the Turkification movement, Turkey started to urge other countries to use Turkish names for Turkish cities instead of other transliterations to Latin script that had been used in Ottoman times, and the city came to be known as Istanbul and its variations in most world languages.
The name Constantinople is still used by members of the Eastern Orthodox Church in the title of one of their most important leaders, the Orthodox patriarch based in the city, referred to as "His Most Divine All-Holiness the Archbishop of Constantinople New Rome and Ecumenical Patriarch". In Greece today, the city is still called Konstantinoúpoli(s) or simply "the City".
History
Foundation of Byzantium
Constantinople was founded by the Roman emperor Constantine I (272–337) in 324 on the site of an already-existing city, Byzantium, which was settled in the early days of Greek colonial expansion, in around 657 BC, by colonists of the city-state of Megara. Byzantium was the first major settlement that would develop on the site of later Constantinople, but the first known settlement was that of Lygos, referred to in Pliny's Natural History. Apart from this, little is known about this initial settlement.
Hesychius of Miletus wrote that some "claim that people from Megara, who derived their descent from Nisos, sailed to this place under their leader Byzas, and invent the fable that his name was attached to the city". Some versions of the founding myth say Byzas was the son of a local nymph, while others say he was conceived by one of Zeus' daughters and Poseidon. Hesychius also gives alternate versions of the city's founding legend, which he attributed to old poets and writers:
It is said that the first Argives, after having received this prophecy from Pythia,
Blessed are those who will inhabit that holy city,
a narrow strip of the Thracian shore at the mouth of the Pontos,
where two pups drink of the gray sea,
where fish and stag graze on the same pasture,
set up their dwellings at the place where the rivers Kydaros and Barbyses have their estuaries, one flowing from the north, the other from the west, and merging with the sea at the altar of the nymph called Semestre.
The city maintained independence as a city-state until it was annexed into the Persian Empire by Darius I in 512 BC; Darius saw the site as the optimal location to construct a pontoon bridge crossing into Europe, as Byzantium was situated at the narrowest point of the Bosphorus strait. Persian rule lasted until 478 BC, when, as part of the Greek counterattack to the Second Persian invasion of Greece, a Greek army led by the Spartan general Pausanias captured the city, which remained an independent yet subordinate city under the Athenians, and later under the Spartans after 411 BC. A farsighted treaty with the emergent power of Rome, which stipulated tribute in exchange for independent status, allowed it to enter Roman rule unscathed. This treaty would pay dividends retrospectively, as Byzantium maintained this independent status and prospered under the peace and stability of the Pax Romana for nearly three centuries, until the late 2nd century AD.
Byzantium was never a major influential city-state like Athens, Corinth or Sparta, but the city enjoyed relative peace and steady growth as a prosperous trading city, owing to its remarkable position. The site lay astride the land route from Europe to Asia and the seaway from the Black Sea to the Mediterranean, and had in the Golden Horn an excellent and spacious harbor. Already then, in Greek and early Roman times, Byzantium was famous for the strategic geographic position that made it difficult to besiege and capture, and its position at the crossroads of the Asiatic-European trade route over land and as the gateway between the Mediterranean and Black Seas made it too valuable a settlement to abandon, as Emperor Septimius Severus later realized when he razed the city to the ground for supporting Pescennius Niger's claim. The move was greatly criticized by the contemporary consul and historian Cassius Dio, who said that Severus had destroyed "a strong Roman outpost and a base of operations against the barbarians from Pontus and Asia". Severus would later rebuild Byzantium towards the end of his reign, when it was briefly renamed Augusta Antonina, fortifying it with a new city wall in his name, the Severan Wall.
324–337: The refoundation as Constantinople
Constantine had altogether more colourful plans. Having restored the unity of the Empire, and, being in the course of major governmental reforms as well as of sponsoring the consolidation of the Christian church, he was well aware that Rome was an unsatisfactory capital. Rome was too far from the frontiers, and hence from the armies and the imperial courts, and it offered an undesirable playground for disaffected politicians. Yet it had been the capital of the state for over a thousand years, and it might have seemed unthinkable to suggest that the capital be moved to a different location. Nevertheless, Constantine identified the site of Byzantium as the right place: a place where an emperor could sit, readily defended, with easy access to the Danube or the Euphrates frontiers, his court supplied from the rich gardens and sophisticated workshops of Roman Asia, his treasuries filled by the wealthiest provinces of the Empire.
Constantinople was built over six years, and consecrated on 11 May 330. Constantine divided the expanded city, like Rome, into 14 regions, and ornamented it with public works worthy of an imperial metropolis. Yet, at first, Constantine's new Rome did not have all the dignities of old Rome. It possessed a proconsul, rather than an urban prefect. It had no praetors, tribunes, or quaestors. Although it did have senators, they held the title clarus, not clarissimus, like those of Rome. It also lacked the panoply of other administrative offices regulating the food supply, police, statues, temples, sewers, aqueducts, or other public works. The new programme of building was carried out in great haste: columns, marbles, doors, and tiles were taken wholesale from the temples of the empire and moved to the new city. In similar fashion, many of the greatest works of Greek and Roman art were soon to be seen in its squares and streets. The emperor stimulated private building by promising householders gifts of land from the imperial estates in Asiana and Pontica and on 18 May 332 he announced that, as in Rome, free distributions of food would be made to the citizens. At the time, the amount is said to have been 80,000 rations a day, doled out from 117 distribution points around the city.
Constantine laid out a new square at the centre of old Byzantium, naming it the Augustaeum. The new senate-house (or Curia) was housed in a basilica on the east side. On the south side of the great square was erected the Great Palace of the Emperor with its imposing entrance, the Chalke, and its ceremonial suite known as the Palace of Daphne. Nearby was the vast Hippodrome for chariot-races, seating over 80,000 spectators, and the famed Baths of Zeuxippus. At the western entrance to the Augustaeum was the Milion, a vaulted monument from which distances were measured across the Eastern Roman Empire.
From the Augustaeum led a great street, the Mese, lined with colonnades. As it descended the First Hill of the city and climbed the Second Hill, it passed on the left the Praetorium or law-court. Then it passed through the oval Forum of Constantine, where there was a second Senate-house and a high column with a statue of Constantine himself in the guise of Helios, crowned with a halo of seven rays and looking toward the rising sun. From there, the Mese passed on through the Forum Tauri and then the Forum Bovis, and finally up the Seventh Hill (or Xerolophus) and through to the Golden Gate in the Constantinian Wall. After the construction of the Theodosian Walls in the early 5th century, the Mese was extended to the new Golden Gate, reaching a total length of seven Roman miles. Once the Theodosian Walls were built, Constantinople consisted of an area approximately the size of Old Rome within the Aurelian walls, or some 1,400 ha.
337–529: Constantinople during the Barbarian Invasions and the fall of the West
The importance of Constantinople increased, but only gradually. From the death of Constantine in 337 to the accession of Theodosius I, emperors had been resident only in the years 337–338, 347–351, 358–361, and 368–369. Its status as a capital was recognized by the appointment of the first known Urban Prefect of the City, Honoratus, who held office from 11 December 359 until 361. The urban prefects had concurrent jurisdiction over three provinces each in the adjacent dioceses of Thrace (in which the city was located), Pontus and Asia, comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine.
After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march of Constantinople, the city looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425.
Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequent to this, new walls were built to defend the city and the fleet on the Danube was improved.
After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople.
527–565: Constantinople in the Age of Justinian
The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem in either the Church of the Resurrection or the New Church.
Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. In the time of Justinian, public order in Constantinople became a critical political issue.
Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion of 532, known as the "Nika" riots (from the battle-cry of "Conquer!" of those involved).
Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, "O Solomon, I have outdone thee!" Hagia Sophia was served by 600 people including 80 priests, and cost 20,000 pounds of gold to build.
Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmet II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within a certain distance of the sea front, in order to protect the view.
During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian between 541 and 542 AD. It killed perhaps 40% of the city's inhabitants.
Survival, 565–717: Constantinople during the Byzantine Dark Ages
In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result.
While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then from 717 to 718. The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help; he was called the Saviour of Europe.
717–1025: Constantinople during the Macedonian Renaissance
In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire.
Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867.
In 860, an attack was made on the city by a new principality set up a few years earlier at Kiev by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Princes' Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens.
In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kiev: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. However, following the death of an Emperor, they became known also for plunder in the Imperial palaces. Later in the 11th century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England.
The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome.
In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000.
Iconoclast controversy in Constantinople
In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a "fruit store and aviary". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787.
The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches.
1025–1081: Constantinople after Basil II
In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were distracted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople.
1081–1185: Constantinople under the Comneni
Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096, but declining to put itself under Byzantine command set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a hospital for the poor of 50 beds.
With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century. Toward the end of Manuel I Komnenos's reign, the number of foreigners in the city reached about 60,000–80,000 people out of a total population of about 400,000 people. In 1171, Constantinople also contained a small community of 2,500 Jews. In 1182, most Latin (Western European) inhabitants of Constantinople were massacred.
In artistic terms, the 12th century was a very productive period. There was a revival in the mosaic art, for example: Mosaics became more realistic and vivid, with an increased emphasis on depicting three-dimensional forms. There was an increased demand for art, with more people having access to the necessary wealth to commission and pay for such work.
1185–1261: Constantinople during the Imperial Exile
On 25 July 1197, Constantinople was struck by a severe fire which burned the Latin Quarter and the area around the Gate of the Droungarios on the Golden Horn. Nevertheless, the destruction wrought by the 1197 fire paled in comparison with that brought by the Crusaders. In the course of a plot between Philip of Swabia, Boniface of Montferrat and the Doge of Venice, the Fourth Crusade was, despite papal excommunication, diverted in 1203 against Constantinople, ostensibly promoting the claims of Alexios IV Angelos, brother-in-law of Philip and son of the deposed emperor Isaac II Angelos. The reigning emperor Alexios III Angelos had made no preparation. The Crusaders occupied Galata, broke the defensive chain protecting the Golden Horn, and entered the harbour, where on 27 July they breached the sea walls: Alexios III fled. But the new Alexios IV Angelos found the Treasury inadequate and was unable to make good the rewards he had promised to his western allies. Tension between the citizens and the Latin soldiers increased. In January 1204, the protovestiarius Alexios Murzuphlos provoked a riot, presumably to intimidate Alexios IV, but its only result was the destruction of the great statue of Athena Promachos, the work of Phidias, which stood in the principal forum facing west.
In February 1204, the people rose again: Alexios IV was imprisoned and executed, and Murzuphlos took the purple as Alexios V Doukas. He made some attempt to repair the walls and organise the citizenry, but there had been no opportunity to bring in troops from the provinces and the guards were demoralised by the revolution. An attack by the Crusaders on 6 April failed, but a second from the Golden Horn on 12 April succeeded, and the invaders poured in. Alexios V fled. The Senate met in Hagia Sophia and offered the crown to Theodore Lascaris, who had married into the Angelos dynasty, but it was too late. He came out with the Patriarch to the Golden Milestone before the Great Palace and addressed the Varangian Guard. Then the two of them slipped away with many of the nobility and embarked for Asia. By the next day the Doge and the leading Franks were installed in the Great Palace, and the city was given over to pillage for three days.
Sir Steven Runciman, historian of the Crusades, wrote that the sack of Constantinople is "unparalleled in history".
For the next half-century, Constantinople was the seat of the Latin Empire. Under the rulers of the Latin Empire, the city declined, both in population and in the condition of its buildings. Alice-Mary Talbot cites an estimated population for Constantinople of 400,000 inhabitants; after the destruction wrought by the Crusaders on the city, about one third were homeless, and numerous courtiers, nobility, and higher clergy followed various leading personages into exile. "As a result Constantinople became seriously depopulated," Talbot concludes.
The Latins took over at least 20 churches and 13 monasteries, most prominently the Hagia Sophia, which became the cathedral of the Latin Patriarch of Constantinople. It is to these that E.H. Swift attributed the construction of a series of flying buttresses to shore up the walls of the church, which had been weakened over the centuries by earthquake tremors. However, this act of maintenance is an exception: for the most part, the Latin occupiers were too few to maintain all of the buildings, whether secular or sacred, and many became targets for vandalism or dismantling. Bronze and lead were removed from the roofs of abandoned buildings and melted down and sold to provide money to the chronically under-funded Empire for defense and to support the court; Deno John Geanokoplos writes that "it may well be that a division is suggested here: Latin laymen stripped secular buildings, ecclesiastics, the churches." Buildings were not the only targets of officials looking to raise funds for the impoverished Latin Empire: the monumental sculptures which adorned the Hippodrome and fora of the city were pulled down and melted for coinage. "Among the masterpieces destroyed," writes Talbot, "were a Herakles attributed to the fourth-century B.C. sculptor Lysippos, and monumental figures of Hera, Paris, and Helen."
The Nicaean emperor John III Vatatzes reportedly saved several churches from being dismantled for their valuable building materials; by sending money to the Latins "to buy them off" (exonesamenos), he prevented the destruction of several churches. According to Talbot, these included the churches of Blachernae, Rouphinianai, and St. Michael at Anaplous. He also granted funds for the restoration of the Church of the Holy Apostles, which had been seriously damaged in an earthquake.
The Byzantine nobility scattered, many going to Nicaea, where Theodore Lascaris set up an imperial court, or to Epirus, where Theodore Angelus did the same; others fled to Trebizond, where one of the Comneni had already with Georgian support established an independent seat of empire. Nicaea and Epirus both vied for the imperial title, and tried to recover Constantinople. In 1261, Constantinople was captured from its last Latin ruler, Baldwin II, by the forces of the Nicaean emperor Michael VIII Palaiologos under the command of Caesar Alexios Strategopoulos.
1261–1453: Palaiologan Era and the Fall of Constantinople
Although Constantinople was retaken by Michael VIII Palaiologos, the Empire had lost many of its key economic resources and struggled to survive. The palace of Blachernae in the north-west of the city became the main Imperial residence, with the old Great Palace on the shores of the Bosporus going into decline. When Michael VIII captured the city, its population was 35,000 people, but, by the end of his reign, he had succeeded in increasing the population to about 70,000 people. The Emperor achieved this by summoning former residents who had fled the city when the crusaders captured it, and by relocating Greeks from the recently reconquered Peloponnese to the capital. Military defeats, civil wars, earthquakes and natural disasters were joined by the Black Death, which spread to Constantinople in 1347, exacerbating the people's sense that they were doomed by God. In 1453, when the Ottoman Turks captured the city, it contained approximately 50,000 people.
Constantinople was conquered by the Ottoman Empire on 29 May 1453. Mehmed II intended to complete his father's mission and conquer Constantinople for the Ottomans. In 1452 he reached peace treaties with Hungary and Venice. He also began the construction of the Boğazkesen (later called the Rumelihisarı), a fortress at the narrowest point of the Bosphorus Strait, in order to restrict passage between the Black and Mediterranean seas. Mehmed then tasked the Hungarian gunsmith Urban with both arming Rumelihisarı and building cannon powerful enough to bring down the walls of Constantinople. By March 1453 Urban's cannon had been transported from the Ottoman capital of Edirne to the outskirts of Constantinople. In April, having quickly seized Byzantine coastal settlements along the Black Sea and Sea of Marmara, Ottoman regiments in Rumelia and Anatolia assembled outside the Byzantine capital. Their fleet moved from Gallipoli to nearby Diplokionion, and the sultan himself set out to meet his army.
The Ottomans were commanded by 21-year-old Ottoman Sultan Mehmed II. The conquest of Constantinople followed a seven-week siege which had begun on 6 April 1453. The Empire fell on 29 May 1453.
1453–1930: Ottoman and Republican Kostantiniyye
The Christian Orthodox city of Constantinople was now under Ottoman control. When Mehmed II finally entered Constantinople through the Gate of Charisius (today known as Edirnekapı or the Adrianople Gate), he immediately rode his horse to the Hagia Sophia. After its doors were axed down, the thousands of citizens hiding within the sanctuary were raped and enslaved, often with slavers fighting each other to the death over particularly beautiful and valuable slave girls. Symbols of Christianity throughout the city were vandalized or destroyed, including the crucifix of Hagia Sophia, which was paraded through the sultan's camps. Afterwards Mehmed ordered his soldiers to stop hacking at the city's valuable marbles and to 'be satisfied with the booty and captives; as for all the buildings, they belonged to him'. He ordered that an imam meet him at the Hagia Sophia to chant the adhan, transforming the Orthodox cathedral into a Muslim mosque and solidifying Islamic rule in Constantinople.
Mehmed's main concern with Constantinople had to do with consolidating control over the city and rebuilding its defenses. After 45,000 captives were marched from the city, building projects were commenced immediately after the conquest, which included the repair of the walls, construction of the citadel, and building a new palace. Mehmed issued orders across his empire that Muslims, Christians, and Jews should resettle the city, with Christians and Jews required to pay jizya and Muslims to pay zakat; he demanded that five thousand households be transferred to Constantinople by September. From all over the Islamic empire, prisoners of war and deported people were sent to the city: these people were called "Sürgün" in Turkish. Two centuries later, the Ottoman traveler Evliya Çelebi gave a list of groups introduced into the city with their respective origins. Even today, many quarters of Istanbul, such as Aksaray and Çarşamba, bear the names of the places of origin of their inhabitants. However, many people escaped again from the city, and there were several outbreaks of plague, so that in 1459 Mehmed allowed the deported Greeks to come back to the city.
Culture
Constantinople was the largest and richest urban center in the Eastern Mediterranean Sea during the late Eastern Roman Empire, mostly as a result of its strategic position commanding the trade routes between the Aegean Sea and the Black Sea. It would remain the capital of the eastern, Greek-speaking empire for over a thousand years and in some ways is the nexus of Byzantine art production. At its peak, roughly corresponding to the Middle Ages, it was one of the richest and largest cities in Europe. It exerted a powerful cultural pull and dominated much of the economic life in the Mediterranean. Visitors and merchants were especially struck by the beautiful monasteries and churches of the city, in particular the Hagia Sophia, or the Church of Holy Wisdom. According to Russian 14th-century traveler Stephen of Novgorod: "As for Hagia Sophia, the human mind can neither tell it nor make description of it."
It was especially important for preserving in its libraries manuscripts of Greek and Latin authors throughout a period when instability and disorder caused their mass destruction in western Europe and north Africa: on the city's fall, thousands of these were brought by refugees to Italy, and played a key part in stimulating the Renaissance and the transition to the modern world. The cumulative influence of the city on the west, over the many centuries of its existence, is incalculable. In terms of technology, art and culture, as well as sheer size, Constantinople was without parallel anywhere in Europe for a thousand years. Many languages were spoken in Constantinople. A 16th-century Chinese geographical treatise specifically recorded that there were translators living in the city, indicating that it was a multilingual, multicultural, cosmopolitan city.
Women in literature
Constantinople was home to the first known Western Armenian journal published and edited by a woman (Elpis Kesaratsian). Entering circulation in 1862, Kit'arr or Guitar stayed in print for only seven months. Female writers who openly expressed their desires were viewed as immodest, but this changed slowly as journals began to publish more "women's sections". In the 1880s, Matteos Mamurian invited Srpouhi Dussap to submit essays for Arevelian Mamal. According to Zaruhi Galemkearian's autobiography, she was told to write about women's place in the family and home after she published two volumes of poetry in the 1890s. By 1900, several Armenian journals had started to include works by female contributors including the Constantinople-based Tsaghik.
Markets
Even before Constantinople was founded, the markets of Byzantion were mentioned first by Xenophon and then by Theopompus, who wrote that Byzantians "spent their time at the market and the harbour". In Justinian's age the Mese street running across the city from east to west was a daily market. Procopius claimed "more than 500 prostitutes" did business along the market street. Ibn Battuta, who traveled to the city in 1325, wrote of the bazaars of "Astanbul", in which the "majority of the artisans and salespeople in them are women".
Architecture and Coinage
The Byzantine Empire used Roman and Greek architectural models and styles to create its own unique type of architecture. The influence of Byzantine architecture and art can be seen in the copies taken from it throughout Europe. Particular examples include St Mark's Basilica in Venice, the basilicas of Ravenna, and many churches throughout the Slavic East. Also, alone in Europe until the 13th-century Italian florin, the Empire continued to produce sound gold coinage, the solidus of Diocletian becoming the bezant prized throughout the Middle Ages. Its city walls were much imitated (for example, see Caernarfon Castle) and its urban infrastructure was moreover a marvel throughout the Middle Ages, keeping alive the art, skill and technical expertise of the Roman Empire. In the Ottoman period Islamic architecture and symbolism were used.
Great bathhouses were built in Byzantine centers such as Constantinople and Antioch.
Religion
Constantine's foundation gave prestige to the Bishop of Constantinople, who eventually came to be known as the Ecumenical Patriarch, and made it a prime center of Christianity alongside Rome. This contributed to cultural and theological differences between Eastern and Western Christianity eventually leading to the Great Schism that divided Western Catholicism from Eastern Orthodoxy from 1054 onwards. Constantinople is also of great religious importance to Islam, as the conquest of Constantinople is one of the signs of the End time in Islam.
Education
There were many institutions in ancient Constantinople, such as the Imperial University of Constantinople, sometimes known as the University of the Palace Hall of Magnaura, an Eastern Roman educational institution that could trace its corporate origins to 425 AD, when the emperor Theodosius II founded the Pandidacterium.
Media
The Bulgarian newspapers published in the city in the late Ottoman period were Makedoniya, Napredŭk, and Pravo.
International status
The city acted as a defence for the eastern provinces of the old Roman Empire against the barbarian invasions of the 5th century. The 18-meter-tall walls built by Theodosius II were, in essence, impregnable to the barbarians coming from south of the Danube river, who found easier targets to the west rather than the richer provinces to the east in Asia. From the 5th century, the city was also protected by the Anastasian Wall, a 60-kilometer chain of walls across the Thracian peninsula. Many scholars argue that these sophisticated fortifications allowed the east to develop relatively unmolested while Ancient Rome and the west collapsed.
Constantinople's fame was such that it was described even in contemporary Chinese histories, the Old and New Book of Tang, which mentioned its massive walls and gates as well as a purported clepsydra mounted with a golden statue of a man. The Chinese histories even related how the city had been besieged in the 7th century by Muawiyah I and how he exacted tribute in a peace settlement.
See also
People from Constantinople
List of people from Constantinople
Secular buildings and monuments
Augustaion
Column of Justinian
Basilica Cistern
Column of Marcian
Bucoleon Palace
Horses of Saint Mark
Obelisk of Theodosius
Serpent Column
Walled Obelisk
Palace of Lausus
Cistern of Philoxenos
Palace of the Porphyrogenitus
Prison of Anemas
Valens Aqueduct
Churches, monasteries and mosques
Church of Saint Thekla of the Palace of Blachernae
Church of Myrelaion
Chora Church
Church of Saints Sergius and Bacchus
Church of the Holy Apostles
Church of St. Polyeuctus
Monastery of Christ Pantepoptes
Lips Monastery
Monastery of the Christ the Benefactor
Hagia Irene
Saint John the Forerunner by-the-Dome
Church of Theotokos Kyriotissa
Church of Saint Andrew in Krisei
Nea Ekklesia
Pammakaristos Church
Stoudios Monastery
Toklu Dede Mosque
Church of Saint Theodore
Monastery of the Pantokrator
Unnamed Mosque established during Byzantine times for visiting Muslim dignitaries
Miscellaneous
Ahmed Bican Yazıcıoğlu
Byzantine calendar
Byzantine silk
Eparch of Constantinople (List of eparchs)
Sieges of Constantinople
Third Rome
Thracia
Timeline of Istanbul history
Notes
References
Bibliography
Ball, Warwick (2016). Rome in the East: Transformation of an Empire, 2nd edition. London and New York: Routledge.
Emerson, Charles (2013). 1913: In Search of the World Before the Great War. Compares Constantinople to 20 major world cities; pp. 358–80.
Ibrahim, Raymond (2018). Sword and Scimitar, 1st edition. New York.
Klein, Konstantin M.; Wienand, Johannes, eds. (2022). City of Caesar, City of God: Constantinople and Jerusalem in Late Antiquity. Berlin: De Gruyter. ISBN 978-3-11-071720-4.
Korolija Fontana-Giusti, Gordana (2012). "The Urban Language of Early Constantinople: The Changing Roles of the Arts and Architecture in the Formation of the New Capital and the New Consciousness", in Stephanie L. Hathaway and David W. Kim (eds), Intercultural Transmission in the Medieval Mediterranean. London: Continuum, pp. 164–202.
Yule, Henry (1915). Cathay and the Way Thither: Being a Collection of Medieval Notices of China, Vol. I: Preliminary Essay on the Intercourse Between China and the Western Nations Previous to the Discovery of the Cape Route. Edited by Henri Cordier. London: Hakluyt Society. Accessed 21 September 2016.
External links
Constantinople, from History of the Later Roman Empire, by J. B. Bury
History of Constantinople from the "New Advent Catholic Encyclopedia".
Monuments of Byzantium – Pantokrator Monastery of Constantinople
Constantinoupolis on the web: select internet resources on the history and culture
Info on the name change from the Foundation for the Advancement of Sephardic Studies and Culture
Documentation of the monuments of Byzantine Constantinople
Byzantium 1200, a project aimed at creating computer reconstructions of the Byzantine monuments located in Istanbul in 1200 AD.
Constantine and Constantinople: how and why Constantinople was founded
Hagia Sophia Mosaics: the Deesis and other mosaics of Hagia Sophia in Constantinople
320s establishments in the Roman Empire
330 establishments
1453 disestablishments in the Ottoman Empire
15th-century disestablishments in the Byzantine Empire
Capitals of former nations
Constantine the Great
Holy cities
Populated places along the Silk Road
Populated places established in the 4th century
Populated places disestablished in the 15th century
Populated places of the Byzantine Empire
Roman towns and cities in Turkey
Thrace
5648 | https://en.wikipedia.org/wiki/Cornwall | Cornwall | Cornwall is a ceremonial county in South West England. It is recognised as one of the Celtic nations and is the homeland of the Cornish people. The county is bordered by the Atlantic Ocean to the north and west, Devon to the east, and the English Channel to the south. The largest settlement is Falmouth, and the county town is Truro.
The county is rural, with an area of and population of 568,210. The largest settlements are Falmouth (23,061), Newquay (20,342), St Austell (19,958), and Truro (18,766). Most of Cornwall forms a single unitary authority area, and the Isles of Scilly have a unique local authority. The Cornish nationalist movement disputes the constitutional status of Cornwall and seeks greater autonomy within the United Kingdom.
Cornwall is the westernmost part of the South West Peninsula. Its coastline is characterised by steep cliffs and, to the south, several rias, including those at the mouths of the rivers Fal and Fowey. It includes the southernmost point on Great Britain, Lizard Point, and forms a large part of the Cornwall Area of Outstanding Natural Beauty. The AONB also includes Bodmin Moor, an upland outcrop of the Cornubian batholith granite formation. The county contains many short rivers; the longest is the Tamar, which forms the border with Devon.
Cornwall had a minor Roman presence, and later formed part of the Brittonic kingdom of Dumnonia. From the 7th century, the Britons in the South West increasingly came into conflict with the expanding Anglo-Saxon kingdom of Wessex, eventually being pushed west of the Tamar; by the Norman Conquest Cornwall was administered as part of England, though it retained its own culture. The remainder of the Middle Ages and Early Modern Period were relatively settled, with Cornwall developing its tin mining industry and becoming a duchy in 1337. During the Industrial Revolution, the tin and copper mines were expanded and then declined, with china clay extraction becoming a major industry. Railways were built, leading to a growth of tourism in the 20th century. The Cornish language became extinct as a living community language at the end of the 18th century, but is now being revived.
Name
The modern English name Cornwall is a compound of two terms coming from two different language groups:
Corn- originates from the Proto-Celtic "*karnu-" ("horn", presumed in reference to "headland"), and is cognate with the English word "horn" and Latin "cornu" (both deriving from the Proto-Indo-European *ker-). There may also have been an Iron Age group that occupied the Cornish peninsula known as the Cornovii (i.e. "people of the horn or headland").
-wall derives from the Old English exonym "wealh", meaning "foreigner", "slave" or "Brittonic-speaker" (as in Welsh).
In the Cornish language, Cornwall is Kernow which stems from the same Proto-Celtic root.
History
Prehistory, Roman and post-Roman periods
Humans reoccupied Britain after the last Ice Age. The area now known as Cornwall was first inhabited in the Palaeolithic and Mesolithic periods. It continued to be occupied by Neolithic and then by Bronze Age people.
Cornwall in the Late Bronze Age formed part of a maritime trading-networked culture which researchers have dubbed the Atlantic Bronze Age system, and which extended over most of the areas of present-day Ireland, England, Wales, France, Spain, and Portugal.
During the British Iron Age, Cornwall, like all of Britain (modern England, Scotland, Wales, and the Isle of Man), was inhabited by a Celtic-speaking people known as the Britons with distinctive cultural relations to neighbouring Brittany. The Common Brittonic spoken at this time eventually developed into several distinct tongues, including Cornish, Welsh, Breton, Cumbric and Pictish.
The first written account of Cornwall comes from the 1st-century BC Sicilian Greek historian Diodorus Siculus, supposedly quoting or paraphrasing the 4th-century BC geographer Pytheas, who had sailed to Britain.
The identity of the tin traders described by Diodorus is unknown. It has been theorised that they were Phoenicians, but there is no evidence for this. Professor Timothy Champion, discussing Diodorus Siculus's comments on the tin trade, states that "Diodorus never actually says that the Phoenicians sailed to Cornwall. In fact, he says quite the opposite: the production of Cornish tin was in the hands of the natives of Cornwall, and its transport to the Mediterranean was organised by local merchants, by sea and then overland through France, passing through areas well outside Phoenician control." Isotopic evidence suggests that tin ingots found off the coast of Haifa, Israel, may have come from Cornwall. Tin, required for the production of bronze, was a relatively rare and precious commodity in the Bronze Age – hence the interest shown in Devon and Cornwall's tin resources. (For further discussion of tin mining see the section on the economy below.)
In the first four centuries AD, during the time of Roman dominance in Britain, Cornwall was rather remote from the main centres of Romanisation – the nearest being Isca Dumnoniorum, modern-day Exeter. However, the Roman road system extended into Cornwall with four significant Roman sites based on forts: Tregear near Nanstallon was discovered in the early 1970s, two others were found at Restormel Castle, Lostwithiel in 2007, and a third fort near Calstock was also discovered early in 2007. In addition, a Roman-style villa was found at Magor Farm, Illogan in 1935. Ptolemy's Geographike Hyphegesis mentions four towns controlled by the Dumnonii, three of which may have been in Cornwall. However, after 410 AD, Cornwall appears to have reverted to rule by Romano-Celtic chieftains of the Cornovii tribe as part of the Brittonic kingdom of Dumnonia (which also included present-day Devonshire and the Scilly Isles), including the territory of one Marcus Cunomorus, with at least one significant power base at Tintagel in the early 6th century.
"King" Mark of Cornwall is a semi-historical figure known from Welsh literature, from the Matter of Britain, and, in particular, from the later Norman-Breton medieval romance of Tristan and Yseult, where he appears as a close relative of King Arthur, himself usually considered to be born of the Cornish people in folklore traditions derived from Geoffrey of Monmouth's 12th-century Historia Regum Britanniae.
Archaeology supports ecclesiastical, literary and legendary evidence for some relative economic stability and close cultural ties between the sub-Roman Westcountry, South Wales, Brittany, the Channel Islands, and Ireland through the fifth and sixth centuries. In Cornwall, the arrival of Celtic saints such as Nectan, Paul Aurelian, Petroc, Piran, Samson and numerous others reinforced the preexisting Roman Christianity.
Conflict with Wessex
The Battle of Deorham in 577 saw the separation of Dumnonia (and therefore Cornwall) from Wales, following which the Dumnonii often came into conflict with the expanding English kingdom of Wessex. Centwine of Wessex "drove the Britons as far as the sea" in 682, and by 690 St Boniface, then a Saxon boy, was attending an abbey in Exeter, which was in turn ruled by a Saxon abbot. The Carmen Rhythmicum written by Aldhelm contains the earliest literary reference to Cornwall as distinct from Devon. Religious tensions between the Dumnonians (who celebrated Celtic Christian traditions) and Wessex (who were Roman Catholic) are described in Aldhelm's letter to King Geraint. The Annales Cambriae report that in AD 722 the Britons of Cornwall won a battle at "Hehil". It seems likely that the enemy the Cornish fought was a West Saxon force, as evidenced by the naming of King Ine of Wessex and his kinsman Nonna in reference to an earlier Battle of Llongborth in 710.
The Anglo-Saxon Chronicle stated in 815 (adjusted date) "and in this year king Ecgbryht raided in Cornwall from east to west." This has been interpreted to mean a raid from the Tamar to Land's End, and the end of Cornish independence. However, the Anglo-Saxon Chronicle states that in 825 (adjusted date) a battle took place between the Wealas (Cornish) and the Defnas (men of Devon) at Gafulforda. The Cornish giving battle here, and the later battle at Hingston Down, cast doubt on any claims of control Wessex had at this stage.
In 838, the Cornish and their Danish allies were defeated by Egbert in the Battle of Hingston Down at Hengestesdune. In 875, the last recorded king of Cornwall, Dumgarth, is said to have drowned. Around the 880s, Anglo-Saxons from Wessex had established modest land holdings in the north-eastern part of Cornwall, notably Alfred the Great, who had acquired a few estates. William of Malmesbury, writing around 1120, says that King Athelstan of England (924–939) fixed the boundary between English and Cornish people at the east bank of the River Tamar. While elements of William's story, like the burning of Exeter, have been cast in doubt by recent writers, Athelstan did re-establish a separate Cornish bishop, and relations between Wessex and the Cornish elite improved from the time of his rule.
Eventually King Edgar was able to issue charters across the width of Cornwall, and frequently sent emissaries there or visited personally, as seen by his appearances in the Bodmin Manumissions.
Breton–Norman period
One interpretation of the Domesday Book is that by this time the native Cornish landowning class had been almost completely dispossessed and replaced by English landowners, particularly Harold Godwinson himself. However, the Bodmin manumissions show that two leading Cornish figures nominally had Saxon names, but these were both glossed with native Cornish names. In 1068, Brian of Brittany may have been created Earl of Cornwall, and naming evidence cited by medievalist Edith Ditmas suggests that many other post-Conquest landowners in Cornwall were Breton allies of the Normans, the Bretons being descended from Britons who had fled to what is today Brittany during the early years of the Anglo-Saxon conquest. She also proposed this period for the early composition of the Tristan and Iseult cycle by poets such as Béroul from a pre-existing shared Brittonic oral tradition.
Soon after the Norman conquest most of the land was transferred to the new Breton–Norman aristocracy, with the lion's share going to Robert, Count of Mortain, half-brother of King William and the largest landholder in England after the king with his stronghold at Trematon Castle near the mouth of the Tamar.
Later medieval administration and society
Subsequently, however, Norman absentee landlords became replaced by a new Cornish-Norman ruling class including scholars such as Richard Rufus of Cornwall. These families eventually became the new rulers of Cornwall, typically speaking Norman French, Breton-Cornish, Latin, and eventually English, with many becoming involved in the operation of the Stannary Parliament system, the Earldom and eventually the Duchy of Cornwall. The Cornish language continued to be spoken and acquired a number of characteristics establishing its identity as a separate language from Breton.
Stannary parliaments
The stannary parliaments and stannary courts were legislative and legal institutions in Cornwall and in Devon (in the Dartmoor area). The stannary courts administered equity for the region's tin-miners and tin mining interests, and they were also courts of record for the towns dependent on the mines. The separate and powerful government institutions available to the tin miners reflected the enormous importance of the tin industry to the English economy during the Middle Ages. Special laws for tin miners pre-date written legal codes in Britain, and ancient traditions exempted everyone connected with tin mining in Cornwall and Devon from any jurisdiction other than the stannary courts in all but the most exceptional circumstances.
Piracy and smuggling
Cornish piracy was active during the Elizabethan era on the west coast of Britain. Cornwall is well known for its wreckers who preyed on ships passing Cornwall's rocky coastline. During the 17th and 18th centuries Cornwall was a major smuggling area.
Heraldry
In later times, Cornwall was known to the Anglo-Saxons as "West Wales" to distinguish it from "North Wales" (the modern nation of Wales). The name appears in the Anglo-Saxon Chronicle in 891 as On Corn walum. In the Domesday Book it was referred to as Cornualia and in c. 1198 as Cornwal. Other names for the county include a latinisation of the name as Cornubia (first appears in a mid-9th-century deed purporting to be a copy of one dating from c. 705), and as Cornugallia in 1086.
Physical geography
Cornwall forms the tip of the south-west peninsula of the island of Great Britain, and is therefore exposed to the full force of the prevailing winds that blow in from the Atlantic Ocean. The coastline is composed mainly of resistant rocks that give rise in many places to tall cliffs. Cornwall has a border with only one other county, Devon, which is formed almost entirely by the River Tamar, and the remainder (to the north) by the Marsland Valley.
Coastal areas
The north and south coasts have different characteristics. The north coast on the Celtic Sea, part of the Atlantic Ocean, is more exposed and therefore has a wilder nature. The prosaically named High Cliff, between Boscastle and St Gennys, is the highest sheer-drop cliff in Cornwall at . However, there are also many extensive stretches of fine golden sand which form the beaches important to the tourist industry, such as those at Bude, Polzeath, Watergate Bay, Perranporth, Porthtowan, Fistral Beach, Newquay, St Agnes, St Ives, and on the south coast Gyllyngvase beach in Falmouth and the large beach at Praa Sands further to the south-west. There are two river estuaries on the north coast: Hayle Estuary and the estuary of the River Camel, which provides Padstow and Rock with a safe harbour. The seaside town of Newlyn is a popular holiday destination, as it is one of the last remaining traditional Cornish fishing ports, with views reaching over Mount's Bay.
The south coast, dubbed the "Cornish Riviera", is more sheltered and there are several broad estuaries offering safe anchorages, such as at Falmouth and Fowey. Beaches on the south coast usually consist of coarser sand and shingle, interspersed with rocky sections of wave-cut platform. Also on the south coast, the picturesque fishing village of Polperro, at the mouth of the Pol River, and the fishing port of Looe on the River Looe are both popular with tourists.
Inland areas
The interior of the county consists of a roughly east–west spine of infertile and exposed upland, with a series of granite intrusions, such as Bodmin Moor, which contains the highest land within Cornwall. From east to west, and with approximately descending altitude, these are Bodmin Moor, Hensbarrow north of St Austell, Carnmenellis to the south of Camborne, and the Penwith or Land's End peninsula. These intrusions are the central part of the granite outcrops that form the exposed parts of the Cornubian batholith of south-west Britain, which also includes Dartmoor to the east in Devon and the Isles of Scilly to the west, the latter now being partially submerged.
The intrusion of the granite into the surrounding sedimentary rocks gave rise to extensive metamorphism and mineralisation, and this led to Cornwall being one of the most important mining areas in Europe until the early 20th century. It is thought tin was mined here as early as the Bronze Age, and copper, lead, zinc and silver have all been mined in Cornwall. Alteration of the granite also gave rise to extensive deposits of China Clay, especially in the area to the north of St Austell, and the extraction of this remains an important industry.
The uplands are surrounded by more fertile, mainly pastoral farmland. Near the south coast, deep wooded valleys provide sheltered conditions for flora that like shade and a moist, mild climate. These areas lie mainly on Devonian sandstone and slate. The north east of Cornwall lies on Carboniferous rocks known as the Culm Measures. In places these have been subjected to severe folding, as can be seen on the north coast near Crackington Haven and in several other locations.
Lizard Peninsula
The geology of the Lizard peninsula is unusual, in that it is mainland Britain's only example of an ophiolite, a section of oceanic crust now found on land. Much of the peninsula consists of the dark green and red Precambrian serpentinite, which forms spectacular cliffs, notably at Kynance Cove, and carved and polished serpentine ornaments are sold in local gift shops. This ultramafic rock also forms a very infertile soil which covers the flat and marshy heaths of the interior of the peninsula. This is home to rare plants, such as the Cornish Heath, which has been adopted as the county flower.
Hills and high points
Settlements and transport
Cornwall's only city, and the home of the council headquarters, is Truro. Nearby Falmouth is notable as a port. St Just in Penwith is the westernmost town in England, though the same claim has been made for Penzance, which is larger. St Ives and Padstow are today small-vessel ports with a major tourism and leisure sector in their economies. Newquay on the north coast is another major urban settlement which is known for its beaches and is a popular surfing destination, as is Bude further north, but Newquay is now also becoming important for its aviation-related industries. Camborne is the county's largest town and more populous than the capital Truro. Together with the neighbouring town of Redruth, it forms the largest urban area in Cornwall, and both towns were significant as centres of the global tin mining industry in the 19th century; nearby copper mines were also very productive during that period. St Austell is also larger than Truro and was the centre of the china clay industry in Cornwall. Until four new parishes were created for the St Austell area on 1 April 2009, St Austell was the largest settlement in Cornwall.
Cornwall borders the county of Devon at the River Tamar. Major roads between Cornwall and the rest of Great Britain are the A38 which crosses the Tamar at Plymouth via the Tamar Bridge and the town of Saltash, the A39 road (Atlantic Highway) from Barnstaple, passing through North Cornwall to end in Falmouth, and the A30 which connects Cornwall to the M5 motorway at Exeter, crosses the border south of Launceston, crosses Bodmin Moor and connects Bodmin, Truro, Redruth, Camborne, Hayle and Penzance. Torpoint Ferry links Plymouth with Torpoint on the opposite side of the Hamoaze. A rail bridge, the Royal Albert Bridge built by Isambard Kingdom Brunel (1859), provides the other main land transport link. The city of Plymouth, a large urban centre in south west Devon, is an important location for services such as hospitals, department stores, road and rail transport, and cultural venues, particularly for people living in east Cornwall.
Cardiff and Swansea, across the Bristol Channel, have at some times in the past been connected to Cornwall by ferry, but these do not operate now.
The Isles of Scilly are served by ferry (from Penzance) and by aeroplane, having their own airport: St Mary's Airport. There are regular flights between St Mary's and Land's End Airport, near St Just, and Newquay Airport; during the summer season, a service is also provided between St Mary's and Exeter Airport, in Devon.
Ecology
Flora and fauna
Cornwall has varied habitats including terrestrial and marine ecosystems. One noted species in decline locally is the reindeer lichen, which has been made a priority for protection under the UK Biodiversity Action Plan.
Botanists divide Cornwall and Scilly into two vice-counties: West (1) and East (2). The standard flora is F. H. Davey's Flora of Cornwall (1909). Davey was assisted by A. O. Hume, whom he thanks as his companion on excursions in Cornwall and Devon and for help in compiling the Flora, the publication of which Hume financed.
Climate
Cornwall has a temperate Oceanic climate (Köppen climate classification: Cfb), with mild winters and cool summers. Cornwall has the mildest and one of the sunniest climates of the United Kingdom, as a result of its oceanic setting and the influence of the Gulf Stream. The average annual temperature in Cornwall ranges from on the Isles of Scilly to in the central uplands. Winters are among the warmest in the country due to the moderating effects of the warm ocean currents, and frost and snow are very rare at the coast and are also rare in the central upland areas. Summers are, however, not as warm as in other parts of southern England. The surrounding sea and its southwesterly position mean that Cornwall's weather can be relatively changeable.
Cornwall is one of the sunniest areas in the UK. It has more than 1,541 hours of sunshine per year, with the highest average of 7.6 hours of sunshine per day in July. The moist, mild air coming from the southwest brings higher amounts of rainfall than in eastern Great Britain, at per year. However, this is not as much as in more northern areas of the west coast. The Isles of Scilly, for example, where there are on average fewer than two days of air frost per year, is the only area in the UK to be in the Hardiness zone 10. The islands have, on average, less than one day of air temperature exceeding 30 °C per year and are in the AHS Heat Zone 1. Extreme temperatures in Cornwall are particularly rare; however, extreme weather in the form of storms and floods is common. Due to climate change Cornwall faces more heatwaves and severe droughts, faster coastal erosion, stronger storms and higher wind speeds as well as the possibility of more high impact flooding.
Culture
Language
Cornish language
Cornish, a member of the Brythonic branch of the Celtic language family, is a revived language that died out as a first language in the late 18th century. It is closely related to the other Brythonic languages, Breton and Welsh, and less so to the Goidelic languages. Cornish has no legal status in the UK.
There has been a revival of the language by academics and optimistic enthusiasts since the mid-19th century that gained momentum from the publication in 1904 of Henry Jenner's Handbook of the Cornish Language. It is a language sustained by a social network of learners and enthusiasts rather than by a traditional, geographically based speech community. Cornwall Council encourages and facilitates language classes within the county, in schools and within the wider community.
In 2002, Cornish was named as a UK regional language in the European Charter for Regional or Minority Languages. As a result, in 2005 its promoters received limited government funding. Several words originating in Cornish are used in the mining terminology of English, such as costean, gossan, gunnies, kibbal, kieve and vug.
English dialect
The Cornish language and culture influenced the emergence of particular pronunciations and grammar not used elsewhere in England. The Cornish dialect is spoken to varying degrees; however, someone speaking in broad Cornish may be practically unintelligible to one not accustomed to it. Cornish dialect has generally declined, as in most places it is now little more than a regional accent and grammatical differences have been eroded over time. Marked differences in vocabulary and usage still exist between the eastern and western parts of Cornwall.
Flag
Saint Piran's Flag is the national flag and ancient banner of Cornwall, and an emblem of the Cornish people. The banner of Saint Piran is a white cross on a black background (in terms of heraldry 'sable, a cross argent'). According to legend Saint Piran adopted these colours from seeing the white tin in the black coals and ashes during his discovery of tin. The Cornish flag is an exact reverse of the former Breton black cross national flag and is known by the same name "Kroaz Du".
Arts and media
Since the 19th century, Cornwall, with its unspoilt maritime scenery and strong light, has sustained a vibrant visual art scene of international renown. Artistic activity within Cornwall was initially centred on the art-colony of Newlyn, most active at the turn of the 20th century. This Newlyn School is associated with the names of Stanhope Forbes, Elizabeth Forbes, Norman Garstin and Lamorna Birch. Modernist writers such as D. H. Lawrence and Virginia Woolf lived in Cornwall between the wars, and Ben Nicholson, the painter, having visited in the 1920s came to live in St Ives with his then wife, the sculptor Barbara Hepworth, at the outbreak of the Second World War. They were later joined by the Russian emigrant Naum Gabo, and other artists. These included Peter Lanyon, Terry Frost, Patrick Heron, Bryan Wynter and Roger Hilton. St Ives also houses the Leach Pottery, where Bernard Leach, and his followers championed Japanese inspired studio pottery. Much of this modernist work can be seen in Tate St Ives. The Newlyn Society and Penwith Society of Arts continue to be active, and contemporary visual art is documented in a dedicated online journal.
Local television programmes are provided by BBC South West and ITV West Country. Radio programmes are produced by BBC Radio Cornwall in Truro for the entire county, Heart West, Source FM for the Falmouth and Penryn areas, Coast FM for west Cornwall, Radio St Austell Bay for the St Austell area, NCB Radio for north Cornwall, and Pirate FM.
Music
Cornwall has a folk music tradition that has survived into the present and is well known for its unusual folk survivals such as Mummers Plays, the Furry Dance in Helston played by the famous Helston Town Band, and Obby Oss in Padstow.
Newlyn is home to a food and music festival that hosts live music, cooking demonstrations, and displays of locally caught fish.
As in other former mining districts of Britain, male voice choirs and brass bands, such as Brass on the Grass concerts during the summer at Constantine, are still very popular in Cornwall. Cornwall also has around 40 brass bands, including the six-times National Champions of Great Britain, Camborne Youth Band, and the bands of Lanner and St Dennis.
Cornish players are regular participants in inter-Celtic festivals, and Cornwall itself has several inter-Celtic festivals such as Perranporth's Lowender Peran folk festival.
Contemporary musician Richard D. James (also known as Aphex Twin) grew up in Cornwall, as did Luke Vibert and Alex Parks, winner of Fame Academy 2003. Roger Taylor, the drummer from the band Queen was also raised in the county, and currently lives not far from Falmouth. The American singer-songwriter Tori Amos now resides predominantly in North Cornwall not far from Bude with her family. The lutenist, composer and festival director Ben Salfield lives in Truro. Mick Fleetwood of Fleetwood Mac was born in Redruth.
Literature
Cornwall's rich heritage and dramatic landscape have inspired numerous writers.
Fiction
Sir Arthur Quiller-Couch, author of many novels and works of literary criticism, lived in Fowey: his novels are mainly set in Cornwall. Daphne du Maurier lived at Menabilly near Fowey and many of her novels had Cornish settings: The Loving Spirit, Jamaica Inn, Rebecca, Frenchman's Creek, The King's General (partially), My Cousin Rachel, The House on the Strand and Rule Britannia. She is also noted for writing Vanishing Cornwall. Cornwall provided the inspiration for The Birds, one of her terrifying series of short stories, made famous as a film by Alfred Hitchcock.
Conan Doyle's The Adventure of the Devil's Foot featuring Sherlock Holmes is set in Cornwall. Winston Graham's series Poldark, Kate Tremayne's Adam Loveday series, Susan Cooper's novels Over Sea, Under Stone and Greenwitch, and Mary Wesley's The Camomile Lawn are all set in Cornwall. Writing under the pseudonym of Alexander Kent, Douglas Reeman sets parts of his Richard Bolitho and Adam Bolitho series in the Cornwall of the late 18th and the early 19th centuries, particularly in Falmouth. Gilbert K. Chesterton placed the action of many of his stories there.
Medieval Cornwall is the setting of the trilogy by Monica Furlong, Wise Child, Juniper and Colman, as well as part of Charles Kingsley's Hereward the Wake.
Hammond Innes's novel, The Killer Mine; Charles de Lint's novel The Little Country; and Chapters 24–25 of J. K. Rowling's Harry Potter and the Deathly Hallows take place in Cornwall (Shell Cottage, on the beach outside the fictional village of Tinworth).
David Cornwell, who wrote espionage novels under the name John le Carré, lived and worked in Cornwall. Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there.
St. Michael's Mount in Cornwall (under the fictional name of Mount Polbearne) is the setting of the Little Beach Street Bakery series by Jenny Colgan, who spent holidays in Cornwall as a child. The book series includes Little Beach Street Bakery (2014), Summer at Little Beach Street Bakery (2015), Christmas at Little Beach Street Bakery (2016), and Sunrise by the Sea (2021).
In the Paddington Bear novels by Michael Bond, the title character is said to have landed at an unspecified port in Cornwall, having travelled in a lifeboat aboard a cargo ship from darkest Peru. From here he travels to London by train and eventually arrives at Paddington Station.
Enid Blyton's 1953 novel Five Go Down to the Sea (the twelfth book in The Famous Five series) is set in Cornwall, near the fictional coastal village of Tremannon.
Poetry
The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick.
Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; The Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986.
The poet Laurence Binyon wrote "For the Fallen" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription "FOR THE FALLEN / Composed on these cliffs, 1914". The plaque also bears below this the fourth stanza (sometimes referred to as "The Ode") of the poem:
They shall grow not old, as we that are left grow old
Age shall not weary them, nor the years condemn
At the going down of the sun and in the morning
We will remember them
Other literary works
Cornwall produced a substantial number of passion plays, such as the Ordinalia, during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language. See also Cornish literature.
Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian.
Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story; the second act of Richard Wagner's opera Tristan und Isolde takes place in Cornwall, as do Gilbert and Sullivan's operettas The Pirates of Penzance and Ruddigore.
Clara Vyvyan was the author of various books about many aspects of Cornish life such as Our Cornwall. She once wrote: "The Loneliness of Cornwall is a loneliness unchanged by the presence of men, its freedoms a freedom inexpressible by description or epitaph. You cannot say Cornwall is this or that. You cannot describe it in a word or visualise it in a second. You may know the country from east to west and sea to sea, but if you close your eyes and think about it no clear-cut image rises before you. In this quality of changefulness have we possibly surprised the secret of Cornwall's wild spirit—in this intimacy the essence of its charm? Cornwall!".
A level of Tomb Raider: Legend, a game dealing with Arthurian Legend, takes place in Cornwall at a museum above King Arthur's tomb. The adventure game The Lost Crown is set in the fictional town of Saxton, which uses the Cornish settlements of Polperro, Talland and Looe as its model.
The fairy tale Jack the Giant Killer takes place in Cornwall.
The Mousehole Cat, a children's book written by Antonia Barber and illustrated by Nicola Bayley, is set in the Cornish village Mousehole and based on the legend of Tom Bawcock and the continuing tradition of Tom Bawcock's Eve.
Sports
The main sports played in Cornwall are rugby, football and cricket. Athletes from Truro have done well in Olympic and Commonwealth Games fencing, winning several medals. Surfing is popular, particularly with tourists, thousands of whom take to the water throughout the summer months. Some towns and villages have bowling clubs, and a wide variety of British sports are played throughout Cornwall. Cornwall is also one of the few places in England where shinty is played; the English Shinty Association is based in Penryn.
The Cornwall County Cricket Club plays as one of the minor counties of English cricket.
Truro, and all of the towns and some villages have football clubs belonging to the Cornwall County Football Association, and some clubs have teams competing higher within the English football league pyramid. Of these, the highest ranked — by two flights — is Truro City F.C., who will be playing in the National League South in the 2023–24 season. Other notable Cornish teams include Mousehole A.F.C., Helston Athletic F.C., and Falmouth Town F.C.
Rugby football
Viewed as an "important identifier of ethnic affiliation", rugby union has become a sport strongly tied to notions of Cornishness. and since the 20th century, rugby union has emerged as one of the most popular spectator and team sports in Cornwall (perhaps the most popular), with professional Cornish rugby footballers being described as a "formidable force", "naturally independent, both in thought and deed, yet paradoxically staunch English patriots whose top players have represented England with pride and passion".
In 1985, sports journalist Alan Gibson made a direct connection between the love of rugby in Cornwall and the ancient parish games of hurling and wrestling that existed for centuries before rugby officially began. Among Cornwall's native sports are a distinctive form of Celtic wrestling related to Breton wrestling, and Cornish hurling, a kind of mediaeval football played with a silver ball (distinct from Irish hurling). Cornish wrestling is Cornwall's oldest sport and, as the county's native tradition, it has travelled the world to places like Victoria, Australia, and Grass Valley, California, following the miners and gold rushes. Cornish hurling now takes place at St. Columb Major, St Ives, and less frequently at Bodmin.
In rugby league, Cornwall R.L.F.C., founded in 2021, will represent the county in the professional league system. The semi-pro club will start in the third tier RFL League 1. At an amateur level, the county is represented by Cornish Rebels.
Surfing and watersports
Due to its long coastline, various maritime sports are popular in Cornwall, notably sailing and surfing. International events in both are held in Cornwall. Cornwall hosted the Inter-Celtic Watersports Festival in 2006. Surfing in particular is very popular, as locations such as Bude and Newquay offer some of the best surf in the UK. Pilot gig rowing has been popular for many years and the World championships takes place annually on the Isles of Scilly. On 2 September 2007, 300 surfers at Polzeath beach set a new world record for the highest number of surfers riding the same wave as part of the Global Surf Challenge and part of a project called Earthwave to raise awareness about global warming.
Fencing
As its population is comparatively small, and largely rural, Cornwall's contribution to British national sport in the United Kingdom has been limited; the county's greatest successes have come in fencing. In 2014, half of the men's GB team fenced for Truro Fencing Club, and 3 Truro fencers appeared at the 2012 Olympics.
Cuisine
Cornwall has a strong culinary heritage. Surrounded on three sides by the sea amid fertile fishing grounds, Cornwall naturally has fresh seafood readily available; Newlyn is the largest fishing port in the UK by value of fish landed, and is known for its wide range of restaurants. Television chef Rick Stein has long operated a fish restaurant in Padstow for this reason, and Jamie Oliver chose to open his second restaurant, Fifteen, in Watergate Bay near Newquay. MasterChef host and founder of Smiths of Smithfield, John Torode, in 2007 purchased Seiners in Perranporth. One famous local fish dish is Stargazy pie, a fish-based pie in which the heads of the fish stick through the piecrust, as though "star-gazing". The pie is cooked as part of traditional celebrations for Tom Bawcock's Eve, but is not generally eaten at any other time.
Cornwall is perhaps best known though for its pasties, a savoury dish made with pastry. Today's pasties usually contain a filling of beef steak, onion, potato and swede with salt and white pepper, but historically pasties had a variety of different fillings. "Turmut, 'tates and mate" (i.e. "Turnip, potatoes and meat", turnip being the Cornish and Scottish term for swede, itself an abbreviation of 'Swedish Turnip', the British term for rutabaga) describes a filling once very common. For instance, the licky pasty contained mostly leeks, and the herb pasty contained watercress, parsley, and shallots. Pasties are often locally referred to as oggies. Historically, pasties were also often made with sweet fillings such as jam, apple and blackberry, plums or cherries.
The wet climate and relatively poor soil of Cornwall make it unsuitable for growing many arable crops. However, it is ideal for growing the rich grass required for dairying, leading to the production of Cornwall's other famous export, clotted cream. This forms the basis for many local specialities including Cornish fudge and Cornish ice cream. Cornish clotted cream has Protected Geographical Status under EU law, and cannot be made anywhere else. Its principal manufacturer is A. E. Rodda & Son of Scorrier.
Local cakes and desserts include Saffron cake, Cornish heavy (hevva) cake, Cornish fairings biscuits, figgy 'obbin, Cream tea and whortleberry pie.
There are also many types of beers brewed in Cornwall—those produced by Sharp's Brewery, Skinner's Brewery, Keltek Brewery and St Austell Brewery are the best known—including stouts, ales and other beer types. There is some small scale production of wine, mead and cider.
Politics and administration
Cornish national identity
Cornwall is recognised by Cornish and Celtic political groups as one of six Celtic nations, alongside Brittany, Ireland, the Isle of Man, Scotland and Wales. (The Isle of Man Government and the Welsh Government also recognise Asturias and Galicia.) Cornwall is represented, as one of the Celtic nations, at the Festival Interceltique de Lorient, an annual celebration of Celtic culture held in Brittany.
Cornwall Council consider Cornwall's unique cultural heritage and distinctiveness to be one of the area's major assets. They see Cornwall's language, landscape, Celtic identity, political history, patterns of settlement, maritime tradition, industrial heritage, and non-conformist tradition, to be among the features making up its "distinctive" culture. However, it is uncertain exactly how many of the people living in Cornwall consider themselves to be Cornish; results from different surveys (including the national census) have varied. In the 2001 census, 7 per cent of people in Cornwall identified themselves as Cornish, rather than British or English. However, activists have argued that this underestimated the true number as there was no explicit "Cornish" option included in the official census form. Subsequent surveys have suggested that as many as 44 per cent identify as Cornish. Many people in Cornwall say that this issue would be resolved if a Cornish option became available on the census. The question and content recommendations for the 2011 census provided an explanation of the process of selecting an ethnic identity which is relevant to the understanding of the often quoted figure of 37,000 who claimed Cornish identity. The 2021 census found that 17% of people in Cornwall identified as being Cornish (89,000), with 14% of people in Cornwall identifying as Cornish-only (80,000). Again there was no tick-box provided, and "Cornish" had to be written-in as "Other".
On 24 April 2014 it was announced that Cornish people have been granted minority status under the European Framework Convention for the Protection of National Minorities.
Local politics
Cornwall forms two local government districts; Cornwall and the Isles of Scilly. The district of Cornwall is governed by Cornwall Council, a unitary authority based at Lys Kernow in Truro, and the Council of the Isles of Scilly governs the archipelago from Hugh Town. The Crown Court is based at the Courts of Justice in Truro. Magistrates' Courts are found in Truro (but at a different location to the Crown Court) and at Bodmin.
The Isles of Scilly form part of the ceremonial county of Cornwall, and have, at times, been served by the same county administration. Since 1890 they have been administered by their own unitary authority, the Council of the Isles of Scilly. They are grouped with Cornwall for other administrative purposes, such as the National Health Service and Devon and Cornwall Police.
Before reorganisation on 1 April 2009, council functions throughout the rest of Cornwall were organised in two tiers, with Cornwall County Council and district councils for its six districts, Caradon, Carrick, Kerrier, North Cornwall, Penwith, and Restormel. While projected to streamline services, cut red tape and save around £17 million a year, the reorganisation was met with wide opposition, with a poll in 2008 showing 89% disapproval from Cornish residents.
The first elections for the unitary authority were held on 4 June 2009. The council has 123 seats; the largest party (in 2017) is the Conservatives, with 46 seats. The Liberal Democrats are the second-largest party, with 37 seats, with the Independents the third-largest grouping with 30.
Before the creation of the unitary council, the former county council had 82 seats, the majority of which were held by the Liberal Democrats, elected at the 2005 county council elections. The six former districts had a total of 249 council seats, and the groups with greatest numbers of councillors were Liberal Democrats, Conservatives and Independents.
Parliament and national politics
Following a review by the Boundary Commission for England taking effect at the 2010 general election, Cornwall is divided into six county constituencies to elect MPs to the House of Commons of the United Kingdom.
Before the 2010 boundary changes Cornwall had five constituencies, all of which were won by Liberal Democrats at the 2005 general election. In the 2010 general election Liberal Democrat candidates won three constituencies and Conservative candidates won three other constituencies. At the 2015 general election all six Cornish seats were won by Conservative candidates; all these Conservative MPs retained their seats at the 2017 general election, and the Conservatives won all six constituencies again at the 2019 general election.
Until 1832, Cornwall had 44 MPs—more than any other county—reflecting the importance of tin to the Crown. Most of the increase in numbers of MPs came between 1529 and 1584 after which there was no change until 1832.
Although Cornwall does not have a designated government department, in 2007 the then Leader of the Opposition, David Cameron, created a Shadow Secretary of State for Cornwall. The position was not made a formal UK Cabinet post when Cameron entered government following the 2010 United Kingdom general election.
Devolution movement
Cornish nationalists have organised into two political parties: Mebyon Kernow, formed in 1951, and the Cornish Nationalist Party. In addition to the political parties, there are various interest groups such as the Revived Cornish Stannary Parliament and the Celtic League. The Cornish Constitutional Convention was formed in 2000 as a cross-party organisation including representatives from the private, public and voluntary sectors to campaign for the creation of a Cornish Assembly, along the lines of the National Assembly for Wales, Northern Ireland Assembly and the Scottish Parliament. Between 5 March 2000 and December 2001, the campaign collected the signatures of 41,650 Cornish residents endorsing the call for a devolved assembly, along with 8,896 signatories from outside Cornwall. The resulting petition was presented to the Prime Minister, Tony Blair.
Emergency services
Devon and Cornwall Police
Cornwall Fire and Rescue Service
South Western Ambulance Service
Cornwall Air Ambulance
HM Coastguard
Cornwall Search & Rescue Team
British Transport Police
Economy
Cornwall is one of the poorest parts of the United Kingdom in terms of per capita GDP and average household incomes. At the same time, parts of the county, especially on the coast, have high house prices, driven up by demand from relatively wealthy retired people and second-home owners. The GVA per head was 65% of the UK average for 2004. The GDP per head for Cornwall and the Isles of Scilly was 79.2% of the EU-27 average for 2004, while the UK average was 123.0%. In 2011, the latest year for which figures are available, the wealth of Cornwall and the Isles of Scilly was 64% of the European average per capita.
Historically mining of tin (and later also of copper) was important in the Cornish economy. The first reference to this appears to be by Pytheas: see above. Julius Caesar was the last classical writer to mention the tin trade, which appears to have declined during the Roman occupation. The tin trade revived in the Middle Ages and its importance to the Kings of England resulted in certain privileges being granted to the tinners; the Cornish rebellion of 1497 is attributed to grievances of the tin miners. In the mid-19th century, however, the tin trade again fell into decline. Other primary sector industries that have declined since the 1960s include china clay production, fishing and farming.
Today, the Cornish economy depends heavily on its tourist industry, which makes up around a quarter of the economy. The official measures of deprivation and poverty at district and 'sub-ward' level show that there is great variation in poverty and prosperity in Cornwall with some areas among the poorest in England and others among the top half in prosperity. For example, the ranking of 32,482 sub-wards in England in the index of multiple deprivation (2006) ranged from 819th (part of Penzance East) to 30,899th (part of Saltash Burraton in Caradon), where the lower number represents the greater deprivation.
Cornwall was one of two UK areas designated as 'less developed regions' by the European Union, which, prior to Brexit, meant the area qualified for EU Cohesion Policy grants. It was granted Objective 1 status by the European Commission for 2000 to 2006, followed by further rounds of funding known as 'Convergence Funding' from 2007 to 2013 and 'Growth Programme' for 2014 to 2020.
Tourism
Cornwall has a tourism-based seasonal economy which is estimated to contribute up to 24% of Cornwall's gross domestic product. In 2011 tourism brought £1.85 billion into the Cornish economy. Cornwall's unique culture, spectacular landscape and mild climate make it a popular tourist destination, despite being somewhat distant from the United Kingdom's main centres of population. Surrounded on three sides by the English Channel and Celtic Sea, Cornwall has many miles of beaches and cliffs; the South West Coast Path follows a complete circuit of both coasts. Other tourist attractions include moorland, country gardens, museums, historic and prehistoric sites, and wooded valleys. Five million tourists visit Cornwall each year, mostly drawn from within the UK. Visitors to Cornwall are served by the airport at Newquay, while private jets, charters and helicopters also use Perranporth airfield; night sleeper and daily rail services run between Cornwall, London and other regions of the UK.
Newquay and Porthtowan are popular destinations for surfers. In recent years, the Eden Project near St Austell has been a major financial success, drawing one in eight of Cornwall's visitors in 2004.
In the summer of 2018, due to the recognition of its beaches and weather through social media and the marketing of travel companies, Cornwall received about 20 per cent more visitors than the usual 4.5 million. The sudden surge in tourism caused multiple traffic and safety issues in coastal areas.
In October 2021, Cornwall was longlisted for the UK City of Culture 2025, but failed to make the March 2022 shortlist.
Fishing
Other industries include fishing, although this has been significantly restructured by EU fishing policies (the Southwest Handline Fishermen's Association has started to revive the fishing industry).
Agriculture
Agriculture, once an important part of the Cornish economy, has declined significantly relative to other industries. However, there is still a strong dairy industry, with products such as Cornish clotted cream.
Mining
Mining of tin and copper was also an industry, but today the derelict mine workings survive only as a World Heritage Site. However, the Camborne School of Mines, which was relocated to Penryn in 2004, is still a world centre of excellence in the field of mining and applied geology, and the grant of World Heritage status has attracted funding for conservation and heritage tourism. China clay extraction has also been an important industry in the St Austell area, but the sector has been in decline, and this, coupled with increased mechanisation, has led to a decrease in employment; the industry nevertheless still employs around 2,133 people in Cornwall and contributes over £80 million to the local economy.
In March 2016, a Canadian company, Strongbow Exploration, had acquired, from administration, a 100% interest in the South Crofty tin mine and the associated mineral rights in Cornwall with the aim of reopening the mine and bringing it back to full production. Work is currently ongoing to build a water filtration plant in order to dewater the mine.
Internet
Cornwall is the landing point for twenty-two of the world's fastest high-speed undersea and transatlantic fibre optic cables, making Cornwall an important hub within Europe's Internet infrastructure. The Superfast Cornwall project was completed in 2015 and saw 95% of Cornish houses and businesses connected to a fibre-based broadband network, with over 90% of properties able to connect at speeds above 24 Mbit/s.
Aerospace
The county's newest industry is aviation: Newquay Airport is home to a growing business park with Enterprise Zone status, known as Aerohub. A space launch facility, Spaceport Cornwall, has also been established at Newquay, in partnership with the Goonhilly satellite tracking station near Helston in south Cornwall.
Demographics
Cornwall's population was 537,400 in the 2011 census, with a population density of 144 people per square kilometre, ranking it 40th and 41st, respectively, among the 47 counties of England. Cornwall's population was 95.7% White British and has a relatively high rate of population growth. At 11.2% in the 1980s and 5.3% in the 1990s, it had the fifth-highest population growth rate of the counties of England. The natural change has been a small population decline, and the population increase is due to inward migration into Cornwall. According to the 1991 census, the population was 469,800.
Cornwall has a relatively high retired population, with 22.9% of pensionable age, compared with 20.3% for the United Kingdom as a whole. This may be due partly to Cornwall's rural and coastal geography increasing its popularity as a retirement location, and partly to outward migration of younger residents to more economically diverse areas.
Education
Over 10,000 students attend Cornwall's two universities, Falmouth University and the University of Exeter (including Camborne School of Mines). Falmouth University is a specialist public university for the creative industries and arts, while the University of Exeter has two campuses in Cornwall, Truro and Penryn, the latter shared with Falmouth. The Penryn campus is home to educational departments such as the rapidly growing Centre for Ecology and Conservation (CEC), the Environment and Sustainability Institute (ESI), and the Institute of Cornish Studies.
Cornwall has a comprehensive education system, with 31 state and eight independent secondary schools. There are three further education colleges: Truro and Penwith College, Cornwall College and Callywith College, which opened in September 2017. The Isles of Scilly has only one school, while the former Restormel district has the highest school population; school year sizes are around 200, with none above 270. Before the introduction of comprehensive schools there were a number of grammar schools and secondary modern schools, e.g. the schools that later became Sir James Smith's School and Wadebridge School. There are also primary schools in many villages and towns, e.g. St Mabyn Church of England Primary School.
See also
Christianity in Cornwall
Index of Cornwall-related articles
Outline of Cornwall – overview of the wide range of topics covered by this subject
Tamar Valley AONB
Duchy of Cornwall
External links
Cornwall Council
The History of Parliament: the House of Commons – Cornwall, County, 1386 to 1831
Images of daily life in late 19th century Cornwall
Images of Cornwall at the English Heritage Archive
Celtic nations
English unitary authorities created in 2009
Local government districts of South West England
NUTS 2 statistical regions of the United Kingdom
Peninsulas of England
Unitary authority districts of England
Counties in South West England
Counties of England established in antiquity
Former kingdoms |
5649 | https://en.wikipedia.org/wiki/Constitutional%20monarchy | Constitutional monarchy | Constitutional monarchy, also known as limited monarchy, parliamentary monarchy or democratic monarchy, is a form of monarchy in which the monarch exercises their authority in accordance with a constitution and is not alone in making decisions. Constitutional monarchies differ from absolute monarchies (in which a monarch is the only decision-maker) in that they are bound to exercise powers and authorities within limits prescribed by an established legal framework.
Constitutional monarchies range from countries such as Liechtenstein, Monaco, Morocco, Jordan, Kuwait, Bahrain and Bhutan, where the constitution grants substantial discretionary powers to the sovereign, to countries such as the United Kingdom and other Commonwealth realms, the Netherlands, Spain, Belgium, Norway, Sweden, Lesotho, Malaysia, Thailand, Cambodia, and Japan, where the monarch retains significantly less, if any, personal discretion in the exercise of their authority. On the surface, this distinction may be hard to establish, with numerous liberal democracies restraining monarchic power in practice rather than in written law, e.g., the constitution of the United Kingdom, which affords the monarch substantial, if limited, legislative and executive powers.
Constitutional monarchy may refer to a system in which the monarch acts as a non-party political head of state under the constitution, whether codified or uncodified. While most monarchs may hold formal authority and the government may legally operate in the monarch's name, in the form typical in Europe the monarch no longer personally sets public policy or chooses political leaders. Political scientist Vernon Bogdanor, paraphrasing Thomas Macaulay, has defined a constitutional monarch as "A sovereign who reigns but does not rule".
In addition to acting as a visible symbol of national unity, a constitutional monarch may hold formal powers such as dissolving parliament or giving royal assent to legislation. However, such powers generally may only be exercised strictly in accordance with either written constitutional principles or unwritten constitutional conventions, rather than any personal political preferences of the sovereign. In The English Constitution, British political theorist Walter Bagehot identified three main political rights which a constitutional monarch may freely exercise: the right to be consulted, the right to encourage, and the right to warn. Many constitutional monarchies still retain significant authorities or political influence, however, such as through certain reserve powers, and may also play an important political role.
The Commonwealth realms share the same person as hereditary monarch under the Westminster system of constitutional governance. Two constitutional monarchies – Malaysia and Cambodia – are elective monarchies, in which the ruler is periodically selected by a small electoral college.
The concept of a semi-constitutional monarchy identifies constitutional monarchies where the monarch retains substantial powers, on a par with a president in a presidential or semi-presidential system. As a result, constitutional monarchies where the monarch has a largely ceremonial role may also be referred to as 'parliamentary monarchies' to differentiate them from semi-constitutional monarchies. Strongly limited constitutional monarchies, such as those of the United Kingdom and Australia, have been referred to as crowned republics by writers H. G. Wells and Glenn Patmore.
History
The oldest constitutional monarchy dating back to ancient times was that of the Hittites. They were an ancient Anatolian people that lived during the Bronze Age, whose king had to share his authority with an assembly, called the Panku, which was the equivalent of a modern-day deliberative assembly or legislature. Members of the Panku came from scattered noble families who acted as representatives of their subjects in a subordinate, federal-type structure.
Constitutional and absolute monarchy
England, Scotland and the United Kingdom
In the Kingdom of England, the Glorious Revolution of 1688 furthered the constitutional monarchy, restricted by laws such as the Bill of Rights 1689 and the Act of Settlement 1701, although the first form of constitution was enacted with the Magna Carta of 1215. At the same time, in Scotland, the Convention of Estates enacted the Claim of Right Act 1689, which placed similar limits on the Scottish monarchy.
Queen Anne was the last monarch to veto an Act of Parliament when, on 11 March 1708, she blocked the Scottish Militia Bill. However, Hanoverian monarchs continued to selectively dictate government policies. For instance, King George III constantly blocked Catholic Emancipation, eventually precipitating the resignation of William Pitt the Younger as prime minister in 1801. The sovereign's influence on the choice of prime minister gradually declined over this period. King William IV was the last monarch to dismiss a prime minister, when in 1834 he removed Lord Melbourne as a result of Melbourne's choice of Lord John Russell as Leader of the House of Commons. Queen Victoria was the last monarch to exercise real personal power, but this diminished over the course of her reign. In 1839, she became the last sovereign to keep a prime minister in power against the will of Parliament when the Bedchamber crisis resulted in the retention of Lord Melbourne's administration. By the end of her reign, however, she could do nothing to block the unacceptable (to her) premierships of William Gladstone, although she still exercised power in appointments to the Cabinet. For example, in 1886 she vetoed Gladstone's choice of Hugh Childers as War Secretary in favour of Sir Henry Campbell-Bannerman.
Today, the role of the British monarch is by convention effectively ceremonial. The British Parliament and the Government – chiefly in the office of Prime Minister of the United Kingdom – exercise their powers under "Royal (or Crown) Prerogative": on behalf of the monarch and through powers still formally possessed by the monarch.
No person may accept significant public office without swearing an oath of allegiance to the King. With few exceptions, the monarch is bound by constitutional convention to act on the advice of the Government.
Continental Europe
Poland developed the first constitution for a monarchy in continental Europe, with the Constitution of 3 May 1791; it was the second single-document constitution in the world just after the first republican Constitution of the United States. Constitutional monarchy also occurred briefly in the early years of the French Revolution, but much more widely afterwards. Napoleon Bonaparte is considered the first monarch proclaiming himself as an embodiment of the nation, rather than as a divinely appointed ruler; this interpretation of monarchy is germane to continental constitutional monarchies. German philosopher Georg Wilhelm Friedrich Hegel, in his work Elements of the Philosophy of Right (1820), gave the concept a philosophical justification that concurred with evolving contemporary political theory and the Protestant Christian view of natural law. Hegel's forecast of a constitutional monarch with very limited powers whose function is to embody the national character and provide constitutional continuity in times of emergency was reflected in the development of constitutional monarchies in Europe and Japan.
Executive monarchy versus ceremonial monarchy
There exist at least two different types of constitutional monarchies in the modern world – executive and ceremonial. In executive monarchies, the monarch wields significant (though not absolute) power. The monarchy under this system of government is a powerful political (and social) institution. By contrast, in ceremonial monarchies, the monarch holds little or no actual power or direct political influence, though they frequently have a great deal of social and cultural influence.
Ceremonial and executive monarchy should not be confused with democratic and non-democratic monarchical systems. For example, in Liechtenstein and Monaco, the ruling monarchs wield significant executive power. However, while they are theoretically very powerful within their small states, they are not absolute monarchs and have very limited de facto power compared to the Islamic monarchs, which is why their countries are generally considered to be liberal democracies. For instance, when Hereditary Prince Alois of Liechtenstein threatened to veto a referendum to legalize abortion in 2011, it came as a surprise because the prince had not vetoed any law for over 30 years (in the end, this referendum failed to make it to a vote).
Modern constitutional monarchy
As originally conceived, a constitutional monarch was head of the executive branch and quite a powerful figure even though their power was limited by the constitution and the elected parliament. Some of the framers of the U.S. Constitution may have envisioned the president as an elected constitutional monarch, as the term was then understood, following Montesquieu's account of the separation of powers.
The present-day concept of a constitutional monarchy developed in the United Kingdom, where the democratically elected parliaments, and their leader, the prime minister, exercise power, with the monarchs having ceded power and remaining as a titular position. In many cases the monarchs, while still at the very top of the political and social hierarchy, were given the status of "servants of the people" to reflect the new, egalitarian position. In the course of France's July Monarchy, Louis-Philippe I was styled "King of the French" rather than "King of France".
Following the unification of Germany, Otto von Bismarck rejected the British model. In the constitutional monarchy established under the Constitution of the German Empire which Bismarck inspired, the Kaiser retained considerable actual executive power, while the Imperial Chancellor needed no parliamentary vote of confidence and ruled solely by the imperial mandate. However, this model of constitutional monarchy was discredited and abolished following Germany's defeat in the First World War. Later, Fascist Italy could also be considered a constitutional monarchy, in that there was a king as the titular head of state while actual power was held by Benito Mussolini under a constitution. This eventually discredited the Italian monarchy and led to its abolition in 1946. After the Second World War, surviving European monarchies almost invariably adopted some variant of the constitutional monarchy model originally developed in Britain.
Nowadays a parliamentary democracy that is a constitutional monarchy is considered to differ from one that is a republic only in detail rather than in substance. In both cases, the titular head of state (monarch or president) serves the traditional role of embodying and representing the nation, while the government is carried on by a cabinet composed predominantly of elected Members of Parliament.
However, three important factors distinguish monarchies such as the United Kingdom from systems where greater power might otherwise rest with Parliament. These are:
The Royal Prerogative, under which the monarch may exercise power under certain very limited circumstances
Sovereign Immunity, under which the monarch may do no wrong under the law because the responsible government is instead deemed accountable
The immunity of the monarch from some taxation or restrictions on property use
Other privileges may be nominal or ceremonial (e.g., where the executive, judiciary, police or armed forces act on the authority of or owe allegiance to the Crown).
Today slightly more than a quarter of constitutional monarchies are Western European countries, including the United Kingdom, Spain, the Netherlands, Belgium, Norway, Denmark, Luxembourg, Monaco, Liechtenstein and Sweden. However, the two most populous constitutional monarchies in the world are in Asia: Japan and Thailand. In these countries, the prime minister holds the day-to-day powers of governance, while the monarch retains residual (but not always insignificant) powers. The powers of the monarch differ between countries. In Denmark and in Belgium, for example, the monarch formally appoints a representative to preside over the creation of a coalition government following a parliamentary election, while in Norway the King chairs special meetings of the cabinet.
In nearly all cases, the monarch is still the nominal chief executive, but is bound by convention to act on the advice of the Cabinet. Only a few monarchies (most notably Japan and Sweden) have amended their constitutions so that the monarch is no longer even the nominal chief executive.
There are fifteen constitutional monarchies under King Charles III, which are known as Commonwealth realms. Unlike some of their continental European counterparts, the Monarch and his Governors-General in the Commonwealth realms hold significant "reserve" or "prerogative" powers, to be wielded in times of extreme emergency or constitutional crises, usually to uphold parliamentary government. For example, during the 1975 Australian constitutional crisis, the Governor-General dismissed the Australian Prime Minister Gough Whitlam. The Australian Senate had threatened to block the Government's budget by refusing to pass the necessary appropriation bills. On 11 November 1975, Whitlam intended to call a half-Senate election to try to break the deadlock. When he sought the Governor-General's approval of the election, the Governor-General instead dismissed him as Prime Minister. Shortly after that, he installed leader of the opposition Malcolm Fraser in his place. Acting quickly before all parliamentarians became aware of the government change, Fraser and his allies secured passage of the appropriation bills, and the Governor-General dissolved Parliament for a double dissolution election. Fraser and his government were returned with a massive majority. This led to much speculation among Whitlam's supporters as to whether this use of the Governor-General's reserve powers was appropriate, and whether Australia should become a republic. Among supporters of constitutional monarchy, however, the event confirmed the monarchy's value as a source of checks and balances against elected politicians who might seek powers in excess of those conferred by the constitution, and ultimately as a safeguard against dictatorship.
In Thailand's constitutional monarchy, the monarch is recognized as the Head of State, Head of the Armed Forces, Upholder of the Buddhist Religion, and Defender of the Faith. The immediate former King, Bhumibol Adulyadej, was the longest-reigning monarch in the world and in all of Thailand's history, before passing away on 13 October 2016. Bhumibol reigned through several political changes in the Thai government. He played an influential role in each incident, often acting as mediator between disputing political opponents. (See Bhumibol's role in Thai Politics.) Among the powers retained by the Thai monarch under the constitution, lèse majesté protects the image of the monarch and enables him to play a role in politics. It carries strict criminal penalties for violators. Generally, the Thai people were reverent of Bhumibol. Much of his social influence arose from this reverence and from the socioeconomic improvement efforts undertaken by the royal family.
In the United Kingdom, a frequent debate centres on when it is appropriate for a British monarch to act. When a monarch does act, political controversy can often ensue, partially because the neutrality of the crown is seen to be compromised in favour of a partisan goal, while some political scientists champion the idea of an "interventionist monarch" as a check against possible illegal action by politicians. For instance, the monarch of the United Kingdom can theoretically exercise an absolute veto over legislation by withholding royal assent. However, no monarch has done so since 1708, and it is widely believed that this and many of the monarch's other political powers are lapsed powers.
List of current constitutional monarchies
There are currently 43 monarchies worldwide.
Ceremonial constitutional monarchies
Executive constitutional monarchies
Former constitutional monarchies
The Kingdom of Afghanistan was a constitutional monarchy under Mohammad Zahir Shah from 1964 to 1973.
The Kingdom of Albania was a constitutional monarchy from 1928 until 1939, ruled by King Zog I of the House of Zogu.
The Anglo-Corsican Kingdom was a brief period in the history of Corsica (1794–1796) when the island broke with Revolutionary France and sought military protection from Great Britain. Corsica became an independent kingdom under George III of the United Kingdom, but with its own elected parliament and a written constitution guaranteeing local autonomy and democratic rights.
Barbados from gaining its independence in 1966 until 2021, was a constitutional monarchy in the Commonwealth of Nations with a Governor-General representing the Monarchy of Barbados. After an extensive history of republican movements, a republic was declared on 30 November 2021.
Brazil from 1822, with the proclamation of independence and rise of the Empire of Brazil by Pedro I of Brazil to 1889, when Pedro II was deposed by a military coup.
Kingdom of Bulgaria until 1946 when Tsar Simeon was deposed by the communist assembly.
Many republics in the Commonwealth of Nations were constitutional monarchies for some period after their independence, including South Africa (1910–1961), Ceylon from 1948 to 1972 (now Sri Lanka), Fiji (1970–1987), Gambia (1965–1970), Ghana (1957–1960), Guyana (1966–1970), Trinidad and Tobago (1962–1976), and Barbados (1966–2021).
Egypt was a constitutional monarchy starting from the later part of the Khedivate, with parliamentary structures and a responsible khedival ministry developing in the 1860s and 1870s. The constitutional system continued through the Khedivate period and developed during the Sultanate and then Kingdom of Egypt, which established an essentially democratic liberal constitutional regime under the Egyptian Constitution of 1923. This system persisted until the declaration of a republic after the Free Officers Movement coup in 1952. For most of this period, however, Egypt was occupied by the United Kingdom, and overall political control was in the hands of British colonial officials nominally accredited as diplomats to the Egyptian royal court but actually able to overrule any decision of the monarch or elected government.
The Grand Principality of Finland was a constitutional monarchy though its ruler, Alexander I, was simultaneously an autocrat and absolute ruler in Russia.
France, several times from 1789 through the 19th century. The transformation of the Estates General of 1789 into the National Assembly initiated an ad-hoc transition from the absolute monarchy of the Ancien Régime to a new constitutional system. France formally became an executive constitutional monarchy with the promulgation of the French Constitution of 1791, which took effect on 1 October of that year. This first French constitutional monarchy was short-lived, ending with the overthrow of the monarchy and establishment of the French First Republic after the Insurrection of 10 August 1792. Several years later, in 1804, Napoleon Bonaparte proclaimed himself Emperor of the French in what was ostensibly a constitutional monarchy, though modern historians often describe his reign as an absolute monarchy. The Bourbon Restoration (under Louis XVIII and Charles X), the July Monarchy (under Louis-Philippe), and the Second Empire (under Napoleon III) were also constitutional monarchies, although the power of the monarch varied considerably between them and sometimes within them.
The German Empire from 1871 to 1918, (as well as earlier confederations, and the monarchies it consisted of) was also a constitutional monarchy—see Constitution of the German Empire.
Greece until 1973, when Constantine II was deposed by the military government. The decision was formalized by a plebiscite on 8 December 1974.
Hawaii, which was an absolute monarchy from its founding in 1810, transitioned to a constitutional monarchy in 1840 when King Kamehameha III promulgated the kingdom's first constitution. This constitutional form of government continued until the monarchy was overthrown in an 1893 coup.
The Kingdom of Hungary. In 1848–1849 and 1867–1918 as part of Austria-Hungary. In the interwar period (1920–1944) Hungary remained a constitutional monarchy without a reigning monarch.
Iceland. The Act of Union, a 1 December 1918 agreement with Denmark, established Iceland as a sovereign kingdom united with Denmark under a common king. Iceland abolished the monarchy and became a republic on 17 June 1944 after the Icelandic constitutional referendum, 24 May 1944.
India was a constitutional monarchy, with George VI as head of state and the Earl Mountbatten as governor-general, for a brief period between gaining its independence from the British on 15 August 1947 and becoming a republic when it adopted its constitution on 26 January 1950, henceforth celebrated as Republic Day.
Pahlavi Iran under Mohammad Reza Shah Pahlavi was a constitutional monarchy, which had been originally established during the Persian Constitutional Revolution in 1906.
Italy until 2 June 1946, when a referendum proclaimed the end of the Kingdom and the beginning of the Republic.
The Kingdom of Laos was a constitutional monarchy until 1975, when Sisavang Vatthana was forced to abdicate by the communist Pathet Lao.
Malta was a constitutional monarchy with Elizabeth II as Queen of Malta, represented by a Governor-General appointed by her, for the first ten years of independence from 21 September 1964 to the declaration of the Republic of Malta on 13 December 1974.
Mexico was twice an Empire. The First Mexican Empire lasted from 19 May 1822 to 19 March 1823, with Agustin I elected as emperor. Then, the Mexican monarchists and conservatives, with the help of the Austrian and Spanish crowns and Napoleon III of France, elected Maximilian of Austria as Emperor of Mexico. This constitutional monarchy lasted three years, from 1864 to 1867.
Montenegro until 1918 when it merged with Serbia and other areas to form Yugoslavia.
Nepal until 28 May 2008, when King Gyanendra was deposed, and the Federal Democratic Republic of Nepal was declared.
Ottoman Empire from 1876 until 1878 and again from 1908 until the dissolution of the empire in 1922.
Pakistan was a constitutional monarchy for a brief period between gaining its independence from the British on 14 August 1947 and becoming a republic when it adopted the first Constitution of Pakistan on 23 March 1956. The Dominion of Pakistan had a total of two monarchs (George VI and Elizabeth II) and four Governor-Generals (Muhammad Ali Jinnah being the first). Republic Day (or Pakistan Day) is celebrated every year on 23 March to commemorate the adoption of its Constitution and the transition of the Dominion of Pakistan to the Islamic Republic of Pakistan.
The Polish–Lithuanian Commonwealth, formed after the Union of Lublin in 1569 and lasting until the final partition of the state in 1795, operated much like many modern European constitutional monarchies (into which it was officially changed by the establishment of the Constitution of 3 May 1791, which historian Norman Davies calls "the first constitution of its kind in Europe"). The legislators of the unified state truly did not see it as a monarchy at all, but as a republic under the presidency of the King. Poland–Lithuania also followed the principle of , had a bicameral parliament, and a collection of entrenched legal documents amounting to a constitution along the lines of the modern United Kingdom. The King was elected and had the duty of maintaining the people's rights.
Portugal had been a monarchy since 1139 and was a constitutional monarchy from 1822 to 1828, and again from 1834 until 1910, when Manuel II was overthrown by a military coup. From 1815 to 1825 it was part of the United Kingdom of Portugal, Brazil and the Algarves, which was a constitutional monarchy for the years 1820–23.
Kingdom of Romania from its establishment in 1881 until 1947 when Michael I was forced to abdicate by the communists.
Kingdom of Serbia from 1882 until 1918, when it merged with the State of Slovenes, Croats and Serbs into the unitary Yugoslav Kingdom, that was led by the Serbian Karadjordjevic dynasty.
Trinidad and Tobago was a constitutional monarchy with Elizabeth II as Queen of Trinidad and Tobago, represented by a Governor-General appointed by her, for the first fourteen years of independence from 31 August 1962 to the declaration of the Republic of Trinidad and Tobago on 1 August 1976. Republic Day is celebrated every year on 24 September.
Yugoslavia from 1918 (as the Kingdom of Serbs, Croats and Slovenes) until 1929 and from 1931 (as the Kingdom of Yugoslavia) until 1944, when Peter II, under pressure from the Allies, recognized the communist government.
Unusual constitutional monarchies
Andorra is a diarchy, being headed by two co-princes: the bishop of Urgell and the president of France.
Andorra, Monaco and Liechtenstein are the only countries with reigning princes.
Belgium is the only remaining explicit popular monarchy: the formal title of its king is King of the Belgians rather than King of Belgium. Historically, several defunct constitutional monarchies followed this model; the Belgian formulation is recognized to have been modelled on the title "King of the French" granted by the Charter of 1830 to monarch of the July Monarchy.
Japan is the only country remaining with an emperor.
Luxembourg is the only country remaining with a grand duke.
Malaysia is a federal country with an elective monarchy: the Yang di-Pertuan Agong is selected from among nine state rulers who are also constitutional monarchs themselves.
Papua New Guinea. Unlike in most other Commonwealth realms, sovereignty is constitutionally vested in the citizenry of Papua New Guinea and the preamble to the constitution states "that all power belongs to the people—acting through their duly elected representatives". The monarch has been, according to section 82 of the constitution, "requested by the people of Papua New Guinea, through their Constituent Assembly, to become [monarch] and Head of State of Papua New Guinea" and thus acts in that capacity.
Spain. The Constitution of Spain does not even recognize the monarch as sovereign, but just as the head of state (Article 56). Article 1, Section 2, states that "the national sovereignty is vested in the Spanish people".
United Arab Emirates is a federal country with an elective monarchy, the President or Ra'is, being selected from among the rulers of the seven emirates, each of whom is a hereditary absolute monarch in their own emirate.
See also
Australian Monarchist League
Criticism of monarchy
Monarchism
Figurehead
Parliamentary republic
Reserve power
Monarchy
Constitutional state types |
5654 | https://en.wikipedia.org/wiki/Caspar%20David%20Friedrich | Caspar David Friedrich | Caspar David Friedrich (5 September 1774 – 7 May 1840) was a German Romantic landscape painter, generally considered the most important German artist of his generation. He is best known for his allegorical landscapes, which typically feature contemplative figures silhouetted against night skies, morning mists, barren trees or Gothic ruins. His primary interest was the contemplation of nature, and his often symbolic and anti-classical work seeks to convey a subjective, emotional response to the natural world. Friedrich's paintings characteristically set a human presence in diminished perspective amid expansive landscapes, reducing the figures to a scale that, according to the art historian Christopher John Murray, directs "the viewer's gaze towards their metaphysical dimension".
Friedrich was born in the town of Greifswald on the Baltic Sea in what was at the time Swedish Pomerania. He studied in Copenhagen until 1798, before settling in Dresden. He came of age during a period when, across Europe, a growing disillusionment with materialistic society was giving rise to a new appreciation of spirituality. This shift in ideals was often expressed through a reevaluation of the natural world, as artists such as Friedrich, J. M. W. Turner and John Constable sought to depict nature as a "divine creation, to be set against the artifice of human civilization".
Friedrich's work brought him renown early in his career. Contemporaries such as the French sculptor David d'Angers spoke of him as having discovered "the tragedy of landscape". His work nevertheless fell from favour during his later years, and he died in obscurity. As Germany moved towards modernisation in the late 19th century, a new sense of urgency characterised its art, and Friedrich's contemplative depictions of stillness came to be seen as products of a bygone age.
The early 20th century brought a renewed appreciation of his art, beginning in 1906 with an exhibition of thirty-two of his paintings in Berlin. His work influenced Expressionist artists and later Surrealists and Existentialists. The rise of Nazism in the early 1930s saw a resurgence in Friedrich's popularity, but this was followed by a sharp decline as his paintings were, by association with the Nazi movement, seen as promoting German nationalism. In the late 1970s Friedrich regained his reputation as an icon of the German Romantic movement and a painter of international importance.
Life
Early years and family
Caspar David Friedrich was born on 5 September 1774, in Greifswald, Swedish Pomerania, on the Baltic coast of Germany. The sixth of ten children, he was raised in the strict Lutheran creed of his father Adolf Gottlieb Friedrich, a candle-maker and soap boiler. Records of the family's financial circumstances are contradictory; while some sources indicate the children were privately tutored, others record that they were raised in relative poverty. He became familiar with death from an early age. His mother, Sophie, died in 1781 when he was seven. A year later, his sister Elisabeth died, and a second sister, Maria, succumbed to typhus in 1791. Arguably the greatest tragedy of his childhood happened in 1787 when his brother Johann Christoffer died: at the age of thirteen, Caspar David witnessed his younger brother fall through the ice of a frozen lake, and drown. Some accounts suggest that Johann Christoffer perished while trying to rescue Caspar David, who was also in danger on the ice.
Friedrich began his formal study of art in 1790 as a private student of artist Johann Gottfried Quistorp at the University of Greifswald in his home city, at which the art department is now named Caspar-David-Friedrich-Institut in his honour. Quistorp took his students on outdoor drawing excursions; as a result, Friedrich was encouraged to sketch from life at an early age. Through Quistorp, Friedrich met and was subsequently influenced by the theologian Ludwig Gotthard Kosegarten, who taught that nature was a revelation of God. Quistorp introduced Friedrich to the work of the German 17th-century artist Adam Elsheimer, whose works often included religious subjects dominated by landscape, and nocturnal subjects. During this period he also studied literature and aesthetics with Swedish professor Thomas Thorild. Four years later Friedrich entered the prestigious Academy of Copenhagen, where he began his education by making copies of casts from antique sculptures before proceeding to drawing from life.
Living in Copenhagen afforded the young painter access to the Royal Picture Gallery's collection of 17th-century Dutch landscape painting. At the Academy he studied under teachers such as Christian August Lorentzen and the landscape painter Jens Juel. These artists were inspired by the Sturm und Drang movement and represented a midpoint between the dramatic intensity and expressive manner of the budding Romantic aesthetic and the waning neo-classical ideal. Mood was paramount, and influence was drawn from such sources as the Icelandic legend of Edda, the poems of Ossian and Norse mythology.
Move to Dresden
Friedrich settled permanently in Dresden in 1798. During this early period, he experimented in printmaking with etchings and designs for woodcuts which his furniture-maker brother cut. By 1804 he had produced 18 etchings and four woodcuts; they were apparently made in small numbers and only distributed to friends. Despite these forays into other media, he gravitated toward working primarily with ink, watercolour and sepias. With the exception of a few early pieces, such as Landscape with Temple in Ruins (1797), he did not work extensively with oils until his reputation was more established.
Landscapes were his preferred subject, inspired by frequent trips, beginning in 1801, to the Baltic coast, Bohemia, the Krkonoše and the Harz Mountains. Mostly based on the landscapes of northern Germany, his paintings depict woods, hills, harbors, morning mists and other light effects based on a close observation of nature. These works were modeled on sketches and studies of scenic spots, such as the cliffs on Rügen, the surroundings of Dresden and the river Elbe. He executed his studies almost exclusively in pencil, even providing topographical information, yet the subtle atmospheric effects characteristic of Friedrich's mid-period paintings were rendered from memory. These effects took their strength from the depiction of light, and of the illumination of sun and moon on clouds and water: optical phenomena peculiar to the Baltic coast that had never before been painted with such an emphasis.
His reputation as an artist was established when he won a prize in 1805 at the Weimar competition organised by Johann Wolfgang von Goethe. At the time, the Weimar competition tended to draw mediocre and now-forgotten artists presenting derivative mixtures of neo-classical and pseudo-Greek styles. The poor quality of the entries began to prove damaging to Goethe's reputation, so when Friedrich entered two sepia drawings—Procession at Dawn and Fisher-Folk by the Sea—the poet responded enthusiastically and wrote, "We must praise the artist's resourcefulness in this picture fairly. The drawing is well done, the procession is ingenious and appropriate ... his treatment combines a great deal of firmness, diligence and neatness ... the ingenious watercolour ... is also worthy of praise."
Friedrich completed the first of his major paintings in 1808, at the age of 34. Cross in the Mountains, today known as the Tetschen Altar, is an altarpiece panel said to have been commissioned for a family chapel in Tetschen, Bohemia. The panel depicts a cross in profile at the top of a mountain, alone, and surrounded by pine trees.
Although the altarpiece was generally coldly received, it was Friedrich's first painting to receive wide publicity. The artist's friends publicly defended the work, while art critic Basilius von Ramdohr published a long article challenging Friedrich's use of landscape in a religious context. He rejected the idea that landscape painting could convey explicit meaning, writing that it would be "a veritable presumption, if landscape painting were to sneak into the church and creep onto the altar". Friedrich responded with a programme describing his intentions in 1809, comparing the rays of the evening sun to the light of the Holy Father. This statement marked the only time Friedrich recorded a detailed interpretation of his own work, and the painting was among the few commissions the artist ever received.
Following the purchase of two of his paintings by the Prussian Crown Prince, Friedrich was elected a member of the Berlin Academy in 1810. Yet in 1816, he sought to distance himself from Prussian authority and applied that June for Saxon citizenship. The move was not expected; the Saxon government was pro-French, while Friedrich's paintings were seen as generally patriotic and distinctly anti-French. Nevertheless, with the aid of his Dresden-based friend Graf Vitzthum von Eckstädt, Friedrich attained citizenship, and in 1818, membership in the Saxon Academy with a yearly dividend of 150 thalers. Although he had hoped to receive a full professorship, it was never awarded him as, according to the German Library of Information, "it was felt that his painting was too personal, his point of view too individual to serve as a fruitful example to students." Politics too may have played a role in stalling his career: Friedrich's decidedly Germanic subjects and costuming frequently clashed with the era's prevailing pro-French attitudes.
Marriage
On 21 January 1818, Friedrich married Caroline Bommer, the twenty-five-year-old daughter of a dyer from Dresden. The couple had three children, with their first, Emma, arriving in 1820. Physiologist and painter Carl Gustav Carus notes in his biographical essays that marriage did not impact significantly on either Friedrich's life or personality, yet his canvasses from this period, including Chalk Cliffs on Rügen—painted after his honeymoon—display a new sense of levity, while his palette is brighter and less austere. Human figures appear with increasing frequency in the paintings of this period, which Siegel interprets as a reflection that "the importance of human life, particularly his family, now occupies his thoughts more and more, and his friends, his wife, and his townspeople appear as frequent subjects in his art."
Around this time, he found support from two sources in Russia. In 1820, the Grand Duke Nikolai Pavlovich, at the behest of his wife Alexandra Feodorovna, visited Friedrich's studio and returned to Saint Petersburg with a number of his paintings, an exchange that began a patronage that continued for many years. Not long thereafter, the poet Vasily Zhukovsky, tutor to the Grand Duke's son (later Tsar Alexander II), met Friedrich in 1821 and found in him a kindred spirit. For decades Zhukovsky helped Friedrich both by purchasing his work himself and by recommending his art to the royal family; his assistance toward the end of Friedrich's career proved invaluable to the ailing and impoverished artist. Zhukovsky remarked that his friend's paintings "please us by their precision, each of them awakening a memory in our mind."
Friedrich was acquainted with Philipp Otto Runge, another leading German painter of the Romantic period. He was also a friend of Georg Friedrich Kersting, and painted him at work in his unadorned studio, and of the Norwegian painter Johan Christian Clausen Dahl (1788–1857). Dahl was close to Friedrich during the artist's final years, and he expressed dismay that to the art-buying public, Friedrich's pictures were only "curiosities". While the poet Zhukovsky appreciated Friedrich's psychological themes, Dahl praised the descriptive quality of Friedrich's landscapes, commenting that "artists and connoisseurs saw in Friedrich's art only a kind of mystic, because they themselves were only looking out for the mystic ... They did not see Friedrich's faithful and conscientious study of nature in everything he represented".
During this period Friedrich frequently sketched memorial monuments and sculptures for mausoleums, reflecting his obsession with death and the afterlife; he even created designs for some of the funerary art in Dresden's cemeteries. Some of these works were lost in the fire that destroyed Munich's Glass Palace (1931) and later in the 1945 bombing of Dresden.
Later life
Friedrich's reputation steadily declined over the final fifteen years of his life. As the ideals of early Romanticism passed from fashion, he came to be viewed as an eccentric and melancholy character, out of touch with the times. Gradually his patrons fell away. By 1820, he was living as a recluse and was described by friends as the "most solitary of the solitary". Towards the end of his life he lived in relative poverty. He became isolated and spent long periods of the day and night walking alone through woods and fields, often beginning his strolls before sunrise.
He suffered his first stroke in June 1835, which left him with minor limb paralysis and greatly reduced his ability to paint. As a result, he was unable to work in oil; instead he was limited to watercolour, sepia and reworking older compositions. Although his vision remained strong, he had lost the full strength of his hand. Yet he was able to produce a final 'black painting', Seashore by Moonlight (1835–1836), described by Vaughan as the "darkest of all his shorelines, in which richness of tonality compensates for the lack of his former finesse". Symbols of death appeared in his work from this period. Soon after his stroke, the Russian royal family purchased a number of his earlier works, and the proceeds allowed him to travel to Teplitz—in today's Czech Republic—to recover.
During the mid-1830s, Friedrich began a series of portraits and he returned to observing himself in nature. As the art historian William Vaughan observed, however, "He can see himself as a man greatly changed. He is no longer the upright, supportive figure that appeared in Two Men Contemplating the Moon in 1819. He is old and stiff ... he moves with a stoop". By 1838, he was capable of working only in a small format. He and his family were living in poverty and grew increasingly dependent for support on the charity of friends.
Death
Friedrich died in Dresden on 7 May 1840, and was buried in Dresden's Trinitatis-Friedhof (Trinity Cemetery) east of the city centre (the entrance to which he had painted some 15 years earlier). His simple flat gravestone lies north-west of the central roundel within the main avenue.
By this time his reputation and fame had waned, and his passing was little noticed within the artistic community. His artwork had certainly been acknowledged during his lifetime, but not widely. While the close study of landscape and an emphasis on the spiritual elements of nature were commonplace in contemporary art, his interpretations were highly original and personal. By 1838, his work no longer sold or received attention from critics; the Romantic movement had moved away from the early idealism that the artist had helped found.
Carl Gustav Carus later wrote a series of articles which paid tribute to Friedrich's transformation of the conventions of landscape painting. However, Carus' articles placed Friedrich firmly in his time, and did not place the artist within a continuing tradition. Only one of his paintings had been reproduced as a print, and that was produced in very few copies.
Themes
Landscape and the sublime
The visualisation and portrayal of landscape in an entirely new manner was Friedrich's key innovation. He sought not just to explore the blissful enjoyment of a beautiful view, as in the classic conception, but rather to examine an instant of sublimity, a reunion with the spiritual self through the contemplation of nature. Friedrich was instrumental in transforming landscape in art from a backdrop subordinated to human drama to a self-contained emotive subject. Friedrich's paintings commonly employed the Rückenfigur—a person seen from behind, contemplating the view. The viewer is encouraged to place himself in the position of the Rückenfigur, by which means he experiences the sublime potential of nature, understanding that the scene is as perceived and idealised by a human.
Friedrich created the idea of a landscape full of romantic feeling—die romantische Stimmungslandschaft. His art details a wide range of geographical features, such as rock coasts, forests and mountain scenes, and often used landscape to express religious themes. During his time, most of the best-known paintings were viewed as expressions of a religious mysticism. He wrote: "The artist should paint not only what he sees before him, but also what he sees within him. If, however, he sees nothing within him, then he should also refrain from painting that which he sees before him. Otherwise, his pictures will be like those folding screens behind which one expects to find only the sick or the dead." Expansive skies, storms, mist, forests, ruins and crosses bearing witness to the presence of God are frequent elements in Friedrich's landscapes. Though death finds symbolic expression in boats that move away from shore—a Charon-like motif—and in the poplar tree, it is referenced more directly in paintings like The Abbey in the Oakwood (1808–1810), in which monks carry a coffin past an open grave, toward a cross, and through the portal of a church in ruins.
He was one of the first artists to portray winter landscapes in which the land is rendered as stark and dead. Friedrich's winter scenes are solemn and still—according to the art historian Hermann Beenken, Friedrich painted winter scenes in which "no man has yet set his foot. The theme of nearly all the older winter pictures had been less winter itself than life in winter. In the 16th and 17th centuries, it was thought impossible to leave out such motifs as the crowd of skaters, the wanderer ... It was Friedrich who first felt the wholly detached and distinctive features of a natural life. Instead of many tones, he sought the one; and so, in his landscape, he subordinated the composite chord into one single basic note".
Bare oak trees and tree stumps, such as those in Raven Tree (), Man and Woman Contemplating the Moon (), and Willow Bush under a Setting Sun (), are recurring elements of his paintings, and usually symbolise death. Countering the sense of despair are Friedrich's symbols for redemption: the cross and the clearing sky promise eternal life, and the slender moon suggests hope and the growing closeness of Christ. In his paintings of the sea, anchors often appear on the shore, also indicating a spiritual hope. In The Abbey in the Oakwood, the movement of the monks away from the open grave and toward the cross and the horizon imparts Friedrich's message that the final destination of man's life lies beyond the grave.
With dawn and dusk constituting prominent themes of his landscapes, Friedrich's own later years were characterised by a growing pessimism. His work becomes darker, revealing a fearsome monumentality. The Wreck of the Hope—also known as The Polar Sea or The Sea of Ice (1823–1824)—perhaps best summarises Friedrich's ideas and aims at this point, though in such a radical way that the painting was not well received. Completed in 1824, it depicted a grim subject, a shipwreck in the Arctic Ocean; "the image he produced, with its grinding slabs of travertine-colored floe ice chewing up a wooden ship, goes beyond documentary into allegory: the frail bark of human aspiration crushed by the world's immense and glacial indifference."
Friedrich's written commentary on aesthetics was limited to a collection of aphorisms set down in 1830, in which he explained the need for the artist to match natural observation with an introspective scrutiny of his own personality. His best-known remark advises the artist to "close your bodily eye so that you may see your picture first with the spiritual eye. Then bring to the light of day that which you have seen in the darkness so that it may react upon others from the outside inwards."
Loneliness and death
Both Friedrich's life and art have at times been perceived as marked by an overwhelming sense of loneliness. Art historians and some of his contemporaries attributed such interpretations to the losses he suffered during his youth and to the bleak outlook of his adulthood, while Friedrich's pale and withdrawn appearance helped reinforce the popular notion of the "taciturn man from the North".
Friedrich suffered depressive episodes in 1799, 1803–1805, c. 1813, in 1816 and between 1824 and 1826. There are noticeable thematic shifts in the works he produced during these episodes, which see the emergence of such motifs and symbols as vultures, owls, graveyards and ruins. From 1826 these motifs became a permanent feature of his output, while his use of color became darker and more muted. Carus wrote in 1829 that Friedrich "is surrounded by a thick, gloomy cloud of spiritual uncertainty", though the noted art historian and curator Hubertus Gassner disagrees with such notions, seeing in Friedrich's work a positive and life-affirming subtext inspired by Freemasonry and religion.
Germanic folklore
Reflecting Friedrich's patriotism and resentment during the 1813 French occupation of the dominion of Pomerania, motifs from German folklore became increasingly prominent in his work. An anti-French German nationalist, Friedrich used motifs from his native landscape to celebrate Germanic culture, customs and mythology. He was impressed by the anti-Napoleonic poetry of Ernst Moritz Arndt and Theodor Körner, and the patriotic literature of Adam Müller and Heinrich von Kleist. Moved by the deaths of three friends killed in battle against France, as well as by Kleist's 1808 drama Die Hermannsschlacht, Friedrich undertook a number of paintings in which he intended to convey political symbols solely by means of the landscape—a first in the history of art.
In Old Heroes' Graves (1812), a dilapidated monument inscribed "Arminius" invokes the Germanic chieftain, a symbol of nationalism, while the four tombs of fallen heroes are slightly ajar, freeing their spirits for eternity. Two French soldiers appear as small figures before a cave, lower and deep in a grotto surrounded by rock, as if farther from heaven. A second political painting, Fir Forest with the French Dragoon and the Raven (c. 1813), depicts a lost French soldier dwarfed by a dense forest, while on a tree stump a raven is perched—a prophet of doom, symbolizing the anticipated defeat of France.
Legacy
Influence
Alongside other Romantic painters, Friedrich helped position landscape painting as a major genre within Western art. Of his contemporaries, Friedrich's style most influenced the painting of Johan Christian Dahl (1788–1857). Among later generations, Arnold Böcklin (1827–1901) was strongly influenced by his work, and the substantial presence of Friedrich's works in Russian collections influenced many Russian painters, in particular Arkhip Kuindzhi (c. 1842–1910) and Ivan Shishkin (1832–1898). Friedrich's spirituality anticipated American painters such as Albert Pinkham Ryder (1847–1917), Ralph Blakelock (1847–1919), the painters of the Hudson River School and the New England Luminists.
At the turn of the 20th century, Friedrich was rediscovered by the Norwegian art historian Andreas Aubert (1851–1913), whose writing initiated modern Friedrich scholarship, and by the Symbolist painters, who valued his visionary and allegorical landscapes. The Norwegian Symbolist Edvard Munch (1863–1944) would have seen Friedrich's work during a visit to Berlin in the 1880s. Munch's 1899 print The Lonely Ones echoes Friedrich's Rückenfigur (back figure), although in Munch's work the focus has shifted away from the broad landscape and toward the sense of dislocation between the two melancholy figures in the foreground.
Friedrich's modern revival gained momentum in 1906, when thirty-two of his works were featured in an exhibition in Berlin of Romantic-era art. His landscapes exercised a strong influence on the work of German artist Max Ernst (1891–1976), and as a result other Surrealists came to view Friedrich as a precursor to their movement. In 1934, the Belgian painter René Magritte (1898–1967) paid tribute in his work The Human Condition, which directly echoes motifs from Friedrich's art in its questioning of perception and the role of the viewer.
A few years later, the Surrealist journal Minotaure included Friedrich in a 1939 article by the critic Marie Landsberger, thereby exposing his work to a far wider circle of artists. The influence of The Wreck of Hope (or The Sea of Ice) is evident in the 1940–41 painting Totes Meer by Paul Nash (1889–1946), a fervent admirer of Ernst. Friedrich's work has been cited as an inspiration by other major 20th-century artists, including Mark Rothko (1903–1970), Gerhard Richter (b. 1932), Gotthard Graubner and Anselm Kiefer (b. 1945). Friedrich's Romantic paintings have also been singled out by writer Samuel Beckett (1906–89), who, standing before Man and Woman Contemplating the Moon, said "This was the source of Waiting for Godot, you know."
In his 1961 article "The Abstract Sublime", originally published in ARTnews, the art historian Robert Rosenblum drew comparisons between the Romantic landscape paintings of Friedrich and Turner and the Abstract Expressionist paintings of Mark Rothko. Rosenblum specifically describes Friedrich's 1809 painting The Monk by the Sea, Turner's The Evening Star and Rothko's 1954 Light, Earth and Blue as revealing affinities of vision and feeling. According to Rosenblum, "Rothko, like Friedrich and Turner, places us on the threshold of those shapeless infinities discussed by the aestheticians of the Sublime. The tiny monk in the Friedrich and the fisher in the Turner establish a poignant contrast between the infinite vastness of a pantheistic God and the infinite smallness of His creatures. In the abstract language of Rothko, such literal detail—a bridge of empathy between the real spectator and the presentation of a transcendental landscape—is no longer necessary; we ourselves are the monk before the sea, standing silently and contemplatively before these huge and soundless pictures as if we were looking at a sunset or a moonlit night."
Critical opinion
Friedrich's work lay in near-oblivion for decades, especially after the deaths of his friends. By 1890, however, the symbolism in his work had begun to resonate with the artistic mood of the day, especially in central Europe. Despite this renewed interest and an acknowledgment of his originality, his lack of regard for "painterly effect" and his thinly rendered surfaces jarred with the theories of the time.
During the 1930s, Friedrich's work was used in the promotion of Nazi ideology, which attempted to fit the Romantic artist within the nationalistic Blut und Boden. It took decades for Friedrich's reputation to recover from this association with Nazism. His reliance on symbolism and the fact that his work fell outside the narrow definitions of modernism contributed to his fall from favour. In 1949, art historian Kenneth Clark wrote that Friedrich "worked in the frigid technique of his time, which could hardly inspire a school of modern painting", and suggested that the artist was trying to express in painting what is best left to poetry. Clark's dismissal of Friedrich reflected the damage the artist's reputation sustained during the late 1930s.
Friedrich's reputation suffered further damage when his imagery was adopted by a number of Hollywood directors, including Walt Disney, who built on the work of such German cinema masters as Fritz Lang and F. W. Murnau within the horror and fantasy genres. His rehabilitation was slow, but was aided by the writings of critics and scholars such as Werner Hofmann, Helmut Börsch-Supan and Sigrid Hinz, who successfully rebutted the political associations ascribed to his work, developed a catalogue raisonné, and placed Friedrich within a purely art-historical context.
By the 1970s, he was again being exhibited in major international galleries and found favour with a new generation of critics and art historians. Today, his international reputation is well established. He is a national icon in his native Germany, and highly regarded by art historians and connoisseurs across the Western World. He is generally viewed as a figure of great psychological complexity, and according to Vaughan, "a believer who struggled with doubt, a celebrator of beauty haunted by darkness. In the end, he transcends interpretation, reaching across cultures through the compelling appeal of his imagery. He has truly emerged as a butterfly—hopefully one that will never again disappear from our sight".
Work
Friedrich was a prolific artist who produced more than 500 attributed works. In line with the Romantic ideals of his time, he intended his paintings to function as pure aesthetic statements, so he was cautious that the titles given to his work were not overly descriptive or evocative. It is likely that some of today's more literal titles, such as The Stages of Life, were not given by the artist himself, but were instead adopted during one of the revivals of interest in Friedrich. Complications arise when dating Friedrich's work, in part because he often did not directly name or date his canvases. He kept a carefully detailed notebook on his output, however, which has been used by scholars to tie paintings to their completion dates.
External links
Hermitage Museum Archive
CasparDavidFriedrich.org – 89 paintings by Caspar David Friedrich
Biographical timeline, Hamburg Kunsthalle
Caspar David Friedrich and the German romantic landscape
German masters of the nineteenth century: paintings and drawings from the Federal Republic of Germany, full text exhibition catalog from The Metropolitan Museum of Art, which contains material on Caspar David Friedrich (no. 29-36)
Courtney Love
Courtney Michelle Love (née Harrison; born July 9, 1964) is an American singer, guitarist, songwriter, and actress. A figure in the alternative and grunge scenes of the 1990s, she has had a career spanning four decades. She rose to prominence as the lead vocalist and rhythm guitarist of the alternative rock band Hole, which she formed in 1989. Love has drawn public attention for her uninhibited live performances and confrontational lyrics, as well as her highly publicized personal life following her marriage to Nirvana frontman Kurt Cobain. In 2020, NME named her one of the most influential singers in alternative culture of the last 30 years.
Love had an itinerant childhood, but was primarily raised in Portland, Oregon, where she played in a series of short-lived bands and was active in the local punk scene. After a brief stay in juvenile hall, she spent a year living in Dublin and Liverpool before returning to the United States and pursuing an acting career. She appeared in supporting roles in the Alex Cox films Sid and Nancy (1986) and Straight to Hell (1987) before forming the band Hole in Los Angeles with guitarist Eric Erlandson. The group received critical acclaim from the underground rock press for their 1991 debut album, produced by Kim Gordon, while their second release, Live Through This (1994), was met with critical accolades and multi-platinum sales. In 1995, Love returned to acting, earning a Golden Globe Award nomination for her performance as Althea Leasure in Miloš Forman's The People vs. Larry Flynt (1996), which established her as a mainstream actress. In 1998, Hole released their third album, Celebrity Skin, which was nominated for three Grammy Awards.
Love continued to work as an actress into the early 2000s, appearing in big-budget pictures such as Man on the Moon (1999) and Trapped (2002), before releasing her first solo album, America's Sweetheart, in 2004. The subsequent years were marred by publicity surrounding Love's legal troubles and a drug relapse, which resulted in a mandatory lockdown rehabilitation sentence in 2005 while she was writing a second solo album. That project became Nobody's Daughter, released in 2010 as a Hole album but without the former Hole lineup. Between 2014 and 2015, Love released two solo singles and returned to acting in the network series Sons of Anarchy and Empire. In 2020, she confirmed she was writing new music. Love has also been active as a writer; she co-created and co-wrote three volumes of a manga, Princess Ai, between 2004 and 2006, and wrote a memoir, Dirty Blonde (2006).
Life and career
1964–1982: Childhood and education
Courtney Michelle Harrison was born July 9, 1964, at Saint Francis Memorial Hospital in San Francisco, California, the first child of psychotherapist Linda Carroll (née Risi; born 1944) and Hank Harrison (1941–2022), a publisher and road manager for the Grateful Dead. Her parents met at a party held for Dizzy Gillespie in 1963, and the two married in Reno, Nevada after Carroll discovered she was pregnant. Carroll, who was adopted at birth, is the biological daughter of novelist Paula Fox. Love's matrilineal great-grandmother was Elsie Fox (née de Sola), a Cuban writer who co-wrote the film The Last Train from Madrid with Love's great-grandfather, Paul Hervey Fox, cousin of writer Faith Baldwin and actor Douglas Fairbanks. Phil Lesh, the founding bassist of the Grateful Dead, is Love's godfather. According to Love, she was named after Courtney Farrell, the protagonist of Pamela Moore's 1956 novel Chocolates for Breakfast. Love is of Cuban, English, German, Irish, and Welsh descent. Through her mother's subsequent marriages, Love has two younger half-sisters, three younger half-brothers (one of whom died in infancy), and one adopted brother.
Love spent her early years in Haight-Ashbury, San Francisco, until her parents divorced in 1970. In a custody hearing, her mother, as well as one of her father's girlfriends, testified that Hank had dosed Courtney with LSD when she was a toddler. Carroll also alleged that Hank threatened to abduct his daughter and flee with her to a foreign country. Though Hank denied these allegations, his custody was revoked. In 1970, Carroll relocated with Love to the rural community of Marcola, Oregon, where they lived along the Mohawk River while Carroll completed her psychology degree at the University of Oregon. There, Carroll remarried to schoolteacher Frank Rodríguez, who legally adopted Love. Though Love was baptized a Roman Catholic, her mother maintained an unorthodox home; according to Love, "There were hairy, wangly-ass hippies running around naked [doing] Gestalt therapy", and her mother raised her in a gender-free household with "no dresses, no patent leather shoes, no canopy beds, nothing". Love attended a Montessori school in Eugene, Oregon, where she struggled academically and socially. She has said that she began seeing psychiatrists at "like, [age] three. Observational therapy. TM for tots. You name it, I've been there." When Love was nine, a psychologist noted that she exhibited signs of autism, among them tactile defensiveness. Love commented in 1995: "When I talk about being introverted, I was diagnosed autistic. At an early age, I would not speak. Then I simply bloomed."
In 1972, Love's mother divorced Rodríguez, remarried to sportswriter David Menely, and moved the family to Nelson, New Zealand. Love was enrolled at Nelson College for Girls, but was soon expelled for misbehavior. In 1973, Carroll sent Love back to Portland, Oregon, to be raised by her former stepfather and other family friends. At age 14, Love was arrested for shoplifting from a Portland department store and remanded to Hillcrest Correctional Facility, a juvenile hall in Salem, Oregon. While at Hillcrest, she became acquainted with records by Patti Smith, the Runaways, and the Pretenders, who later inspired her to start a band. She was intermittently placed in foster care from late 1979 until becoming legally emancipated in 1980, after which she remained staunchly estranged from her mother. Shortly after her emancipation, Love spent two months in Japan working as a topless dancer, but was deported after her passport was confiscated. She returned to Portland and began working at the strip club Mary's Club, adopting the name Love to conceal her identity and later keeping it as her surname. She worked odd jobs, including as a DJ at a gay disco. Love said she lacked social skills, and learned them while frequenting gay clubs and spending time with drag queens. During this period, she enrolled at Portland State University, studying English and philosophy. She later commented that, had she not found a passion for music, she would have sought a career working with children.
In 1981, Love was granted a small trust fund that had been left by her maternal grandparents, which she used to travel to Dublin, Ireland, where her biological father was living. She audited courses at Trinity College, studying theology for two semesters. She later received honorary patronage from Trinity's University Philosophical Society in 2010. While in Dublin, Love met musician Julian Cope of the Teardrop Explodes at one of the band's concerts. Cope took a liking to Love and offered to let her stay at his Liverpool home in his absence. She traveled to London, where she was met by her friend and future bandmate, Robin Barbur, from Portland. Recalling Cope's offer, Love and Barbur moved into Cope's home with him and several other artists, including Pete de Freitas of Echo & the Bunnymen. De Freitas was initially hesitant to allow the girls to stay, but acquiesced as they were "alarmingly young and obviously had nowhere else to go". Love recalled: "They kind of took me in. I was sort of a mascot; I would get them coffee or tea during rehearsals." Cope writes of Love frequently in his 1994 autobiography, Head-On, in which he refers to her as "the adolescent".
In July 1982, Love returned to the United States. In late 1982, she attended a Faith No More concert in San Francisco and convinced the members to let her join as a singer. The group recorded material with Love as a vocalist, but fired her; according to keyboardist Roddy Bottum, who remained Love's friend in the years after, the band wanted a "male energy". Love returned to working abroad as an erotic dancer, briefly in Taiwan, and then at a taxi dance hall in Hong Kong. By Love's account, she first used heroin while working at the Hong Kong dance hall, having mistaken it for cocaine. While still inebriated from the drug, Love was pursued by a wealthy male client who requested that she return with him to the Philippines, and gave her money to purchase new clothes. She used the money to purchase an airfare back to the United States.
1983–1987: Early music projects and film
At age 19, through her then-boyfriend's mother, film costume designer Bernadene Mann, Love took a job at Paramount Studios cleaning out the wardrobe department of vintage pieces that had suffered dry rot or other damage. During this time, Love became interested in vintage fashion. She subsequently returned to Portland, where she formed short-lived musical projects with her friends Ursula Wehr and Robin Barbur (namely Sugar Babylon, later known as Sugar Babydoll). After Love met Kat Bjelland at the Satyricon nightclub in 1984, the two formed the group the Pagan Babies. Love asked Bjelland to start the band with her as a guitarist, and the two moved to San Francisco in June 1985, where they recruited bassist Jennifer Finch and drummer Janis Tanaka. According to Bjelland, "[Courtney] didn't play an instrument at the time" aside from keyboards, so Bjelland would transcribe Love's musical ideas on guitar for her. The group played several house shows and recorded one 4-track demo before disbanding in late 1985. After Pagan Babies, Love moved to Minneapolis, where Bjelland had formed the group Babes in Toyland, and briefly worked as a concert promoter before returning to California.
Deciding to shift her focus to acting, Love enrolled at the San Francisco Art Institute and studied film under experimental director George Kuchar, featuring in one of his short films, Club Vatican. She also took experimental theater courses in Oakland taught by Whoopi Goldberg. In 1985, Love submitted an audition tape for the role of Nancy Spungen in the Sid Vicious biopic Sid and Nancy (1986) and was given a minor supporting role by director Alex Cox. After filming Sid and Nancy in New York City, she worked at a peep show in Times Square and squatted at the ABC No Rio social center and Pyramid Club in the East Village. That year, Cox cast her in a leading role in his film Straight to Hell (1987), a Spaghetti Western starring Joe Strummer, Dennis Hopper, and Grace Jones, shot in Spain in 1986. The film was poorly reviewed by critics, but it caught the attention of Andy Warhol, who featured Love in an episode of Andy Warhol's Fifteen Minutes. She also had a part in the 1988 Ramones music video for "I Wanna Be Sedated", appearing as a bride among dozens of party guests.
Displeased by the "celebutante" fame she had attained, Love abandoned her acting career in 1988 and resumed work as a stripper in Oregon, where she was recognized by customers at a bar in the small town of McMinnville. This prompted Love to go into isolation and relocate to Anchorage, Alaska, where she lived for three months to "gather her thoughts", supporting herself by working at a strip club frequented by local fishermen. "I decided to move to Alaska because I needed to get my shit together and learn how to work", she said in retrospect. "So I went on this sort of vision quest. I got rid of all my earthly possessions. I had my bad little strip clothes and some big sweaters, and I moved into a trailer with a bunch of other strippers."
1988–1991: Beginnings of Hole
At the end of 1988, Love taught herself to play guitar and relocated to Los Angeles, where she placed an ad in a local music zine: "I want to start a band. My influences are Big Black, Sonic Youth, and Fleetwood Mac." By 1989, Love had recruited guitarist Eric Erlandson; bassist Lisa Roberts, her neighbor; and drummer Caroline Rue, whom she met at a Gwar concert. Love named the band Hole after a line from Euripides' Medea ("There is a hole that pierces right through me") and a conversation in which her mother told her that she could not live her life "with a hole running through her". On July 23, 1989, Love married Leaving Trains vocalist James Moreland in Las Vegas; the marriage was annulled the same year. She later said that Moreland was a transvestite and that they had married "as a joke". After forming Hole, Love and Erlandson had a romantic relationship that lasted over a year.
In Hole's formative stages, Love continued to work at strip clubs in Hollywood (including Jumbo's Clown Room and the Seventh Veil), saving money to purchase backline equipment and a touring van, while rehearsing at a Hollywood studio loaned to her by the Red Hot Chili Peppers. Hole played their first show in November 1989 at Raji's, a rock club in central Hollywood. Their debut single, "Retard Girl", was issued in April 1990 through the Long Beach indie label Sympathy for the Record Industry and was played by Rodney Bingenheimer on local rock station KROQ. Hole appeared on the cover of Flipside, a Los Angeles-based punk fanzine. In early 1991, they released their second single, "Dicknail", through Sub Pop Records.
With no wave, noise rock, and grindcore bands being major influences on Love, Hole's first studio album, Pretty on the Inside, captured an abrasive sound and contained disturbing, graphic lyrics, described by Q as "confrontational [and] genuinely uninhibited". The record was released in September 1991 on Caroline Records, produced by Kim Gordon of Sonic Youth with assistant production from Gumball's Don Fleming; Love and Gordon had met when Hole opened for Sonic Youth during their promotional tour for Goo at the Whisky a Go Go in November 1990. In early 1991, Love sent Gordon a personal letter asking her to produce the record for the band, to which she agreed.
Pretty on the Inside received generally positive critical reception from indie and punk rock critics and was named one of the 20 best albums of the year by Spin. It gained a following in the United Kingdom, charting at 59 on the UK Albums Chart, and its lead single, "Teenage Whore", entered the UK Indie Chart at number one. The album's feminist slant led many to tag the band as part of the riot grrrl movement, a movement with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City.
During the tour, Love briefly dated Smashing Pumpkins frontman Billy Corgan and then the Nirvana frontman Kurt Cobain. The journalist Michael Azerrad states that Love and Cobain met in 1989 at the Satyricon nightclub in Portland, Oregon. However, the Cobain biographer Charles Cross gives the date as February 12, 1990; Cross said that Cobain playfully wrestled Love to the floor after she said that he looked like Dave Pirner of Soul Asylum. According to Love, she met Cobain at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson said that he and Love were introduced to Cobain in a parking lot after a concert at the Hollywood Palladium on May 17, 1991. In late 1991, Love and Cobain became re-acquainted through Jennifer Finch, one of Love's friends and former bandmates. Love and Cobain were a couple by 1992.
1992–1995: Marriage to Kurt Cobain, Live Through This and breakthrough
Shortly after completing the tour for Pretty on the Inside, Love married Cobain on Waikiki Beach in Honolulu, Hawaii, on February 24, 1992. She wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. During Love's pregnancy, Hole recorded a cover of "Over the Edge" for a Wipers tribute album, and recorded their fourth single, "Beautiful Son", which was released in April 1993. On August 18, the couple's only child, a daughter, Frances Bean Cobain, was born in Los Angeles. They relocated to Carnation, Washington, and then Seattle.
Love's first major media exposure came in a September 1992 profile with Cobain for Vanity Fair by Lynn Hirschberg, entitled "Strange Love". Cobain had become a major public figure following the surprise success of Nirvana's album Nevermind. Love was urged by her manager to participate in the cover story. During the prior year, Love and Cobain had developed a heroin addiction; the profile painted them in an unflattering light, suggesting that Love had been addicted to heroin during her pregnancy. The Los Angeles Department of Children and Family Services investigated, and custody of Frances was temporarily awarded to Love's sister Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit heroin during her first trimester after she discovered she was pregnant. Love later said the article had serious implications for her marriage and Cobain's mental state, suggesting it was a factor in his suicide two years later.
On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of "Pennyroyal Tea" and "Where Did You Sleep Last Night". Love also performed electric versions of two new Hole songs, "Doll Parts" and "Miss World", both written for their upcoming second album. In October 1993, Hole recorded their second album, Live Through This, in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel.
In April 1994, Cobain killed himself in the Seattle home he shared with Love, who was in rehab in Los Angeles at the time. In the following months, Love was rarely seen in public, staying at her home with friends and family. Cobain's remains were cremated and his ashes divided into portions by Love, who kept some in a teddy bear and some in an urn. In June, she traveled to the Namgyal Buddhist Monastery in Ithaca, New York and had Cobain's ashes ceremonially blessed by Buddhist monks. Another portion was mixed into clay and made into memorial sculptures.
Live Through This was released one week after Cobain's death on Geffen's subsidiary label DGC. On June 16, Pfaff died of a heroin overdose in Seattle. For Hole's impending tour, Love recruited the Canadian bassist Melissa Auf der Maur. Hole's performance on August 26, 1994, at the Reading Festival—Love's first public performance following Cobain's death—was described by MTV as "by turns macabre, frightening and inspirational". John Peel wrote in The Guardian that Love's disheveled appearance "would have drawn whistles of astonishment in Bedlam", and that her performance "verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... The band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage."
Live Through This was certified platinum in April 1995 and received numerous accolades. The success combined with Cobain's suicide produced publicity for Love, and she was featured on Barbara Walters' 10 Most Fascinating People in 1995. Her erratic onstage behavior and various legal troubles during Hole's tour compounded the media coverage of her. Hole performed a series of riotous concerts over the following year, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist reported that at the band's show in Boston in December 1994: "Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her."
In January 1995, Love was arrested in Melbourne for disrupting a Qantas flight after getting into an argument with a stewardess. On July 4, 1995, at the Lollapalooza Festival in George, Washington, Love threw a lit cigarette at musician Kathleen Hanna before punching her in the face, alleging that she had made a joke about her daughter. She pleaded guilty to an assault charge and was sentenced to anger management classes. In November 1995, two male teenagers sued Love for allegedly punching them during a Hole concert in Orlando, Florida in March 1995. The judge dismissed the case on grounds that the teens "weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert". Love later said she had little memory of 1994 and 1995, as she had been using large quantities of heroin and Rohypnol at the time.
1996–2002: Acting success and Celebrity Skin
After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic Basquiat and the drama Feeling Minnesota (1996), and then a starring role as Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film The People vs. Larry Flynt. Love went through rehabilitation and quit using heroin at the insistence of Forman; she was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming, and passed all of them. Despite Columbia Pictures' initial reluctance to hire Love due to her troubled past, her performance received acclaim, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film "quite a performance; Love proves she is not a rock star pretending to act, but a true actress." She won several other awards from various film critic associations for the film. During this time, Love maintained what the media noted as a more decorous public image, and she appeared in ad campaigns for Versace and in a Vogue Italia spread. Following the release of The People vs. Larry Flynt, she dated her co-star Edward Norton, with whom she remained until 1999.
In late 1997, Hole released the compilations My Body, the Hand Grenade and The First Session, both of which featured previously recorded material. Love attracted media attention in May 1998 after punching journalist Belissa Cohen at a party; the suit was settled out of court for an undisclosed sum. In September 1998, Hole released their third studio album, Celebrity Skin, which featured a stark power pop sound that contrasted with their earlier punk influences. Love divulged her ambition of making an album where "art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision." She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs. Celebrity Skin was well received by critics; Rolling Stone called it "accessible, fiery and intimate—often at the same time ... a basic guitar record that's anything but basic." Celebrity Skin went multi-platinum, and topped "Best of Year" lists at Spin and The Village Voice. It garnered Hole's only number-one single on the Modern Rock Tracks chart with "Celebrity Skin". Hole promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were nominated for three Grammy Awards at the 41st Grammy Awards ceremony.
Before the release of Celebrity Skin, Love and Fender designed a low-priced Squier brand guitar, the Vista Venus. The instrument's shape drew on guitars by Mercury, a little-known independent manufacturer, as well as on the Stratocaster and Rickenbacker's solid-body guitars. It had a single-coil and a humbucker pickup and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: "I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch."
Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine performances; Love and Manson disagreed over production costs, and Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later said Hole also abandoned the tour due to Manson and Korn's (whom they also toured with in Australia) sexualized treatment of teenage female audience members. Love told interviewers at 99X.FM in Atlanta: "What I really don't like—there are certain girls that like us, or like me, who are really messed up ... they're very young, and they do not need to be taken and raped, or filmed having enema contests ... [they were] going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then [I had] to see them in the morning ... it's just uncool."
In 1999, Love was awarded an Orville H. Gibson award for Best Female Rock Guitarist. During this time, she starred opposite Jim Carrey as his partner Lynne Margulies in the Andy Kaufman biopic Man on the Moon (1999), followed by a role as William S. Burroughs's wife Joan Vollmer in Beat (2000) alongside Kiefer Sutherland. Love was cast as the lead in John Carpenter's sci-fi horror film Ghosts of Mars, but backed out after injuring her foot. She sued the ex-wife of her then-boyfriend, James Barber, whom Love alleged had caused the injury by running over her foot with her Volvo. The following year, she returned to film opposite Lili Taylor in Julie Johnson (2001), in which she played a woman who has a lesbian relationship; Love won an Outstanding Actress award at L.A.'s Outfest. She was then cast in the thriller Trapped (2002), alongside Kevin Bacon and Charlize Theron. The film was a box-office flop.
In the interim, Hole had become dormant. In March 2001, Love began a "punk rock femme supergroup", Bastard, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. Post recalled: "[Love] was like, 'Listen, you guys: I've been in my Malibu, manicure, movie-star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh?' We were being so improvisational, and singing together, and with a trust developing between us. It was the shit." The group recorded a demo tape, but by September 2001, Post and Crosley had left, with Post citing "unhealthy and unprofessional working conditions". In May 2002, Hole announced their breakup amid continuing litigation with Universal Music Group over their record contract.
In 1997, Love and former Nirvana members Krist Novoselic and Dave Grohl formed a limited liability company, Nirvana LLC, to manage Nirvana's business dealings. In June 2001, Love filed a lawsuit to dissolve it, blocking the release of unreleased Nirvana material and delaying the release of the Nirvana compilation With the Lights Out. Grohl and Novoselic sued Love, calling her "irrational, mercurial, self-centered, unmanageable, inconsistent and unpredictable". She responded with a letter stating that "Kurt Cobain was Nirvana" and that she and his family were the "rightful heirs" to the Nirvana legacy.
2003–2008: Solo work and legal troubles
In February 2003, Love was arrested at Heathrow Airport for disrupting a flight and was banned from Virgin Airlines. In October, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter.
After the breakup of Hole, Love began composing material with songwriter Linda Perry, and in July 2003 signed a contract with Virgin Records. She began recording her debut solo album, America's Sweetheart, in France shortly after. Virgin Records released America's Sweetheart in February 2004; it received mixed reviews. Charles Aaron of Spin called it a "jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's Live Through This" and awarded it eight out of ten, while Amy Phillips of The Village Voice wrote: "[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?" The album sold fewer than 100,000 copies. Love later expressed regret over the record, blaming her drug problems at the time. Shortly after it was released, she told Kurt Loder on TRL: "I cannot exist as a solo artist. It's a joke."
On March 17, 2004, Love appeared on the Late Show with David Letterman to promote America's Sweetheart. Her appearance drew media coverage when she lifted her shirt multiple times, flashed Letterman, and stood on his desk. The New York Times wrote: "The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting." Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a fan with a microphone stand during a small concert in the East Village. She was released within hours and performed a scheduled concert the following evening at the Bowery Ballroom. Four days later, she called in multiple times to The Howard Stern Show, claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest.
On July 9, 2004, her 40th birthday, Love was arrested for failing to make a court appearance for the March 2004 charges, and taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential danger to herself, but deemed mentally sound and released to a rehab facility two days later. Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, "Courtney Deserves Better from Feminists", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the incident in East Village.
Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love "acted as if she belonged in an institution". Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Soka Gakkai Buddhist practice (which she began in 1988) as integral to her sobriety.
In the midst of her legal troubles, Love had endeavors in writing and publishing. She co-wrote a semi-autobiographical manga, Princess Ai (Japanese: プリンセス·アイ物語), with Stu Levy, illustrated by Misaho Kujiradou and Ai Yazawa; it was released in three volumes in the United States and Japan between 2004 and 2006. In 2006, Love published a memoir, Dirty Blonde, and began recording her second solo album, How Dirty Girls Get Clean, collaborating again with Perry and Billy Corgan. Love had written several songs, including an anti-cocaine song titled "Loser Dust", during her time in rehab in 2005. She told Billboard: "My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ... I never thought I would work again." Tracks and demos for the album leaked online in 2006, and a documentary, The Return of Courtney Love, detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of "Never Go Hungry Again", recorded during an interview for The Times in November, was also released. Incomplete audio clips of the song "Samantha", originating from an interview with NPR, were distributed on the internet in 2007.
2009–2012: Hole revival and visual art
In March 2009, fashion designer Dawn Simorangkir brought a libel suit against Love concerning a defamatory post Love made on her Twitter account, which was eventually settled for $450,000. Several months later, in June 2009, NME published an article detailing Love's plan to reunite Hole and release a new album, Nobody's Daughter. In response, former Hole guitarist Eric Erlandson stated in Spin magazine that contractually no reunion could take place without his involvement; therefore Nobody's Daughter would remain Love's solo record, as opposed to a "Hole" record. Love responded to Erlandson's comments in a Twitter post, claiming "he's out of his mind, Hole is my band, my name, and my Trademark". Nobody's Daughter was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, Shawn Dailey (bass guitar), and Stu Fisher (drums, percussion). Nobody's Daughter featured material written and recorded for Love's unfinished solo album, How Dirty Girls Get Clean, including "Pacific Coast Highway", "Letter to God", "Samantha", and "Never Go Hungry", although they were re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter was largely centered on Love's tumultuous life between 2003 and 2007, and featured a polished folk rock sound, and more acoustic guitar work than previous Hole albums.
The first single from Nobody's Daughter was "Skinny Little Bitch", released to promote the album in March 2010. The album received mixed reviews. Robert Sheffield of Rolling Stone gave the album three out of five, saying Love "worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, America's Sweetheart". Sal Cinquemani of Slant Magazine also gave the album three out of five: "It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim." Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. In 2011, Love participated in Hit So Hard, a documentary chronicling bandmate Schemel's time in Hole.
In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled "And She's Not Even Pretty", which contained over 40 drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors. Later in the year, she collaborated with Michael Stipe on the track "Rio Grande" for Johnny Depp's sea shanty album Son of Rogues Gallery, and in 2013, co-wrote and contributed vocals on "Rat A Tat" from Fall Out Boy's album Save Rock and Roll, also appearing in the song's music video.
2013–2015: Return to acting; libel lawsuits
After dropping the Hole name and performing as a solo artist in late 2012, Love appeared in spring 2013 advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a "greatest hits" tour, and featured songs from Love's and Hole's back catalogue. Love told Billboard at the time that she had recorded eight songs in the studio.
Love was the subject of a second landmark libel lawsuit, brought against her in January 2014 by her former attorney Rhonda Holmes, who accused Love of online defamation and sought $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial. The jury, however, found in Love's favor. A subsequent defamation lawsuit filed by fashion designer Simorangkir in February 2014 resulted in Love being ordered to pay a further $350,000 in recompense.
On April 22, 2014, Love debuted the song "You Know My Name" on BBC Radio 6 to promote her tour of the United Kingdom. It was released as a double A-side single with the song "Wedding Day" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn, and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled, and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with Pitchfork, Love commented further on the possibility of Hole reuniting, saying:
"I'm not going to commit to it happening, because we want an element of surprise. There's a lot of is to be dotted and ts to be crossed."
Love was cast in several television series in supporting parts throughout 2014, including the FX series Sons of Anarchy, Revenge, and Lee Daniels' network series Empire in a recurring guest role as Elle Dallas. The track "Walk Out on Me", featuring Love, was included on the Empire: Original Soundtrack from Season 1 album, which debuted at number 1 on the Billboard 200. Alexis Petridis of The Guardian praised the track, saying: "The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... The reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up."
In January 2015, Love starred in a New York City stage production, Kansas City Choir Boy, a "pop opera" conceived by and co-starring Todd Almond. Charles Isherwood of The New York Times praised her performance, noting a "soft-edged and bewitching" stage presence, and wrote: "Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained note can seem to simultaneously contain a plea, a wound and a threat." The show toured later in the year, with performances in Boston and Los Angeles. In April 2015, the journalist Anthony Bozza sued Love, alleging a contractual violation regarding his co-writing of her memoir. Love performed as the opening act for Lana Del Rey on her Endless Summer Tour for eight West Coast shows in May and June 2015. During her tenure, Love debuted the single "Miss Narcissist", released on Wavves' independent label Ghost Ramp. She was also cast in a supporting role in James Franco's film The Long Home, based on the novel by William Gay, her first film role in over ten years; as of 2022, it remains unreleased.
2016–present: Fashion and forthcoming music
In January 2016, Love released a clothing line in collaboration with Sophia Amoruso, "Love, Courtney", featuring 18 pieces reflecting her personal style. In November 2016, she began filming the pilot for A Midsummer's Nightmare, a Shakespeare anthology series adapted for Lifetime. She starred as Kitty Menéndez in Menendez: Blood Brothers, a biopic television film based on the lives of Lyle and Erik Menéndez, which premiered on Lifetime in June 2017.
In October 2017, shortly after the Harvey Weinstein scandal made news, a 2005 video of Love warning young actresses about Weinstein went viral. In the footage, while on the red carpet for the Comedy Central Roast of Pamela Anderson, Love was asked by Natasha Leggero if she had any advice for "a young girl moving to Hollywood"; she responded, "If Harvey Weinstein invites you to a private party in the Four Seasons [hotel], don't go." She later tweeted, "Although I wasn't one of his victims, I was eternally banned by [Creative Artists Agency] for speaking out."
In the same year, Love was cast in Justin Kelly's biopic JT LeRoy, portraying a film producer opposite Laura Dern. In March 2018, she appeared in the music video for Marilyn Manson's "Tattooed in Reverse", and in April she appeared as a guest judge on RuPaul's Drag Race. In December, Love was awarded a restraining order against Sam Lutfi, who had acted as her manager for the previous six years, alleging verbal abuse and harassment. Her daughter, Frances, and sister, Jaimee, were also awarded restraining orders against Lutfi. In January 2019, a Los Angeles County judge extended the three-year order to five years, citing Lutfi's tendency to "prey upon people".
On August 18, 2019, Love performed a solo set at the Yola Día festival in Los Angeles, which also featured performances by Cat Power and Lykke Li. On September 9, Love garnered press attention when she publicly criticized Joss Sackler, an heiress to the Sackler family OxyContin fortune, after she allegedly offered Love $100,000 to attend her fashion show during New York Fashion Week. In the same statement, Love indicated that she had relapsed into opioid addiction in 2018, stating that she had recently celebrated a year of sobriety. In October 2019, Love relocated from Los Angeles to London.
On November 21, 2019, Love recorded the song "Mother", written and produced by Lawrence Rothman, as part of the soundtrack for the horror film The Turning (2020). In January 2020, she received the Icon Award at the NME Awards; NME described her as "one of the most influential singers in alternative culture of the last 30 years". The following month, she confirmed she was writing a new record which she described as "really sad ... [I'm] writing in minor chords, and that appeals to my sadness." In March 2021, Love said she had been hospitalized with acute anemia in August 2020, which had nearly killed her and caused severe weight loss; she made a full recovery.
In August 2022, Love revealed the completion of her memoir, The Girl with the Most Cake, after a nearly ten-year period of writing.
It was announced on May 15, 2023, that Love had been cast in Assassination, a biographical film about the assassination of John F. Kennedy, directed by David Mamet and co-starring Viggo Mortensen, Shia LaBeouf, Al Pacino, and John Travolta.
Artistry
Influences
Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall as a young teenager. As a child, her first exposure to music was records that her parents received each month through Columbia Record Club. The first record Love owned was Leonard Cohen's Songs of Leonard Cohen (1967), which she obtained from her mother: "He was so lyric-conscious and morbid, and I was a pretty morbid kid", she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as being a pivotal influence: "I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll", she recalled. "[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fuckedmeup." Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him.
Though often associated with punk music, Love has noted that her most significant musical influences have been post-punk and new wave artists, a point she reiterated when discussing her influences in 2021. Over the years, Love has also named several other new wave and post-punk bands as influences, including The Smiths, Siouxsie and the Banshees, Television, and Bauhaus.
Love's diverse genre interests were illustrated in a 1991 interview with Flipside, in which she stated: "There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band." Discussing the abrasive sound of Hole's debut album, she said she felt she had to "catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths." She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing Celebrity Skin, she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: "The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences."
Literature and poetry have often been a major influence on her songwriting; Love said she had "always wanted to be a poet, but there was no money in it." She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics.
Musical style and lyrics
Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. Spin's October 1991 review of Hole's first album noted that Love's layering of harsh and abrasive riffs buried more sophisticated musical arrangements. In 1998, she stated that Hole had "always been a pop band. We always had a subtext of pop. I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework."
Love's lyrical content is composed from a female's point of view, and her lyrics have been described as "literate and mordant" and noted by scholars for "articulating a third-wave feminist consciousness." Simon Reynolds, in reviewing Hole's debut album, noted: "Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism." Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a "dark genius" and likened her work to that of Anne Sexton.
Love has remarked that lyrics have always been the most important component of songwriting for her: "The important thing for me ... is it has to look good on the page. I mean, you can love Led Zeppelin and not love their lyrics ... but I made a big effort in my career to have what's on the page mean something." Common themes present in Love's lyrics during her early career included body image, rape, suicide, conformity, pregnancy, prostitution, and death. In a 1991 interview with Everett True, she said: "I try to place [beautiful imagery] next to fucked up imagery, because that's how I view things ... I sometimes feel that no one's taken the time to write about certain things in rock, that there's a certain female point of view that's never been given space."
Critics have noted that Love's later musical work is more lyrically introspective. Celebrity Skin and America's Sweetheart are lyrically centered on celebrity life, Hollywood, and drug addiction, while continuing Love's interest in vanity and body image. Nobody's Daughter was lyrically reflective of Love's past relationships and her struggle for sobriety, with the majority of its lyrics written while she was in rehab in 2006.
Performance
Love has a contralto vocal range. According to Love, she never wanted to be a singer, but rather aspired to be a skilled guitarist: "I'm such a lazy bastard though that I never did that", she said. "I was always the only person with the nerve to sing, and so I got stuck with it." She has been regularly noted by critics for her husky vocals as well as her "banshee [-like]" screaming abilities. Her vocals have been compared to those of Johnny Rotten, and David Fricke of Rolling Stone described them as "lung-busting" and "a corrosive, lunatic wail". Upon the release of Hole's 2010 album, Nobody's Daughter, Amanda Petrusich of Pitchfork compared Love's raspy, unpolished vocals to those of Bob Dylan. In 2023, Rolling Stone ranked Love at number 130 on its list of the 200 Greatest Singers of All Time.
She has played a variety of Fender guitars throughout her career, including a Jaguar and a vintage 1965 Jazzmaster; the latter was purchased by the Hard Rock Cafe and is on display in New York City. Between 1989 and 1991, Love primarily played a Rickenbacker 425 because she "preferred the 3/4 neck", but she destroyed the guitar onstage at a 1991 concert opening for the Smashing Pumpkins. In the mid-1990s, she often played a guitar made by Mercury, an obscure company that manufactured custom guitars, as well as a Univox Hi-Flier. Fender's Vista Venus, designed by Love in 1998, was partially inspired by Rickenbacker guitars as well as her Mercury. During tours after the release of Nobody's Daughter (post-2010), Love has played a Rickenbacker 360 onstage. Her setup has included Fender tube gear, Matchless, Ampeg, Silvertone and a solid-state 1976 Randall Commander.
Love has referred to herself as "a shit guitar player", further commenting in a 2014 interview: "I can still write a song, but [the guitar playing] sounds like shit ... I used to be a good rhythm player but I am no longer dependable." Throughout her career, she has also garnered a reputation for unpredictable live shows. In the 1990s, her performances with Hole were characterized by confrontational behavior, with Love stage diving, smashing guitars or throwing them into the audience, wandering into the crowd at the end of sets, and engaging in sometimes incoherent rants. Critics and journalists have noted Love for her comical, often stream-of-consciousness-like stage banter. Music journalist Robert Hilburn wrote in 1993 that, "rather than simply scripted patter, Love's comments between songs [have] the natural feel of someone who is sharing her immediate feelings." In a review of a live performance published in 2010, it was noted that Love's onstage "one-liners [were] worthy of the Comedy Store."
Philanthropy
In 1993, Love and husband Kurt Cobain performed an acoustic set together at the Rock Against Rape benefit in Los Angeles, which raised awareness and provided resources for victims of sexual abuse. In 2000, Love publicly advocated for reform of the record industry in a personal letter published by Salon. In the letter, Love said: "It's not piracy when kids swap music over the Internet using Napster or Gnutella or Freenet or iMesh or beaming their CDs into a My.MP3.com or MyPlay.com music locker. It's piracy when those guys that run those companies make side deals with the cartel lawyers and label heads so that they can be 'the label's friend', and not the artists'." In a subsequent interview with Carrie Fisher, she said that she was interested in starting a union for recording artists, and also discussed race relations in the music industry, advocating for record companies to "put money back into the black community [whom] white people have been stealing from for years."
Love has been a long-standing supporter of LGBT causes. She has frequently collaborated with the Los Angeles Gay and Lesbian Center, taking part in the center's "An Evening with Women" events. Proceeds from the events help provide food and shelter for homeless youth; services for seniors; legal assistance; domestic violence services; health and mental health services; and cultural arts programs. Love participated with Linda Perry for the event in 2012, and performed alongside Aimee Mann and comedian Wanda Sykes. Speaking on her collaboration on the event, Love said: "Seven thousand kids in Los Angeles a year go out on the street, and forty percent of those kids are gay, lesbian, or transgender. They come out to their parents, and become homeless ... for whatever reason, I don't really know why, but gay men have a lot of foundations—I've played many of them—but the lesbian side of it doesn't have as much money and/or donors, so we're excited that this has grown to cover women and women's affairs."
She has also contributed to AIDS organizations, partaking in benefits for amfAR and the RED Campaign. In May 2011, she donated six of her husband Cobain's personal vinyl records for auction at Mariska Hargitay's Joyful Heart Foundation event for victims of child abuse, rape, and domestic violence. She has also supported the Sophie Lancaster Foundation.
Influence
Love has had an impact on female-fronted alternative acts and performers. She has been cited as influential on young female instrumentalists in particular, having once infamously proclaimed: "I want every girl in the world to pick up a guitar and start screaming ... I strap on that motherfucking guitar and you cannot fuck with me. That's my feeling." Her influence on young female guitarists is also discussed in The Electric Guitar: A History of an American Icon.
With over 3 million records sold in the United States alone, Hole became one of the most successful rock bands of all time fronted by a woman. VH1 ranked Love at number 69 on its list of The 100 Greatest Women in Music History in 2012. In 2015, the Phoenix New Times declared Love the number one greatest female rock star of all time, writing: "To build a perfect rock star, there are several crucial ingredients: musical talent, physical attractiveness, tumultuous relationships, substance abuse, and public meltdowns, just to name a few. These days, Love seems to have rebounded from her epic tailspin and has leveled out in a slightly more normal manner, but there's no doubt that her life to date is the type of story people wouldn't believe in a novel or a movie."
Among the alternative musicians who have cited Love as an influence are Scout Niblett; Brody Dalle of The Distillers; Dee Dee Penny of Dum Dum Girls; Victoria Legrand of Beach House; Annie Hardy of Giant Drag; and Nine Black Alps. Contemporary female pop artists Lana Del Rey, Avril Lavigne, Tove Lo, and Sky Ferreira have also cited Love as an influence. Love has frequently been recognized as the most high-profile contributor of feminist music during the 1990s, and for "subverting [the] mainstream expectations of how a woman should look, act, and sound." According to music journalist Maria Raha, "Hole was the highest-profile female-fronted band of the '90s to openly and directly sing about feminism." Patti Smith, a major influence of Love's, also praised her, saying: "I hate genderizing things ... [but] when I heard Hole, I was amazed to hear a girl sing like that. Janis Joplin was her own thing; she was into Big Mama Thornton and Bessie Smith. But what Courtney Love does, I'd never heard a girl do that."
She has also been a gay icon since the mid-1990s, and has jokingly referred to her fanbase as consisting of "females, gay guys, and a few advanced, evolved heterosexual men." Love's aesthetic image, particularly in the early 1990s, also became influential and was dubbed "kinderwhore" by critics and media. The subversive fashion mainly consisted of vintage babydoll dresses accompanied by smeared makeup and red lipstick. MTV reporter Kurt Loder described Love as looking like "a debauched rag doll" onstage. Love later said she had been influenced by the fashion of Chrissy Amphlett of the Divinyls. Interviewed in 1994, Love commented "I would like to think–in my heart of hearts–that I'm changing some psychosexual aspects of rock music. Not that I'm so desirable. I didn't do the kinder-whore thing because I thought I was so hot. When I see the look used to make one more appealing, it pisses me off. When I started, it was a What Ever Happened to Baby Jane? thing. My angle was irony."
Discography
Hole discography
Pretty on the Inside (1991)
Live Through This (1994)
Celebrity Skin (1998)
Nobody's Daughter (2010)
Solo discography
America's Sweetheart (2004)
Filmography
Sid and Nancy (1986)
Straight to Hell (1987)
The People vs. Larry Flynt (1996)
200 Cigarettes (1999)
Man on the Moon (1999)
Julie Johnson (2001)
Trapped (2002)
Human cannibalism
Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practices cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe an individual of a species consuming all or part of another individual of the same species as food, including sexual cannibalism.
Neanderthals are believed to have practised cannibalism, and Neanderthals may have been eaten by anatomically modern humans. Cannibalism was also practised in ancient Egypt, Roman Egypt and during famines in Egypt such as the great famine of 1199–1202. The Island Carib people of the Lesser Antilles, from whom the word "cannibalism" is derived, acquired a long-standing reputation as cannibals after their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture.
Cannibalism has been well documented in much of the world, including Fiji, the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia. Fiji was once known as the "Cannibal Isles".
Cannibalism has recently been both practised and fiercely condemned in several wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons and in ritual as well as in war in various Melanesian tribes. Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, at any time in history, but such views have been largely rejected as irreconcilable with the actual evidence.
A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. This practice was at its height during the 17th century, although as late as the second half of the 19th century some peasants attending an execution are recorded to have "rushed forward and scraped the ground with their hands that they might collect some of the bloody earth, which they subsequently crammed in their mouth, in hope that they might thus get rid of their disease."
Cannibalism has occasionally been practised as a last resort by people suffering from famine. Famous examples include the ill-fated Donner Party (1846–1847) and, more recently, the crash of Uruguayan Air Force Flight 571 (1972), after which some survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Jeffrey Dahmer, Armin Meiwes, Issei Sagawa, and Albert Fish. There is resistance to formally labelling cannibalism a mental disorder.
Etymology
The word "cannibal" is derived from Spanish caníbal or caríbal, originally used as a name for the Caribs, a people from the West Indies said to have eaten human flesh. The older term anthropophagy, meaning "eating humans", is also used for human cannibalism.
Reasons and types
Cannibalism has been practised under a variety of circumstances and for various motives. To adequately express this diversity, Shirley Lindenbaum suggests that "it might be better to talk about 'cannibalisms'" in the plural.
Institutionalized, survival, and pathological cannibalism
One major distinction is whether cannibal acts are accepted by the culture in which they occur – institutionalized cannibalism – or whether they are merely practised under starvation conditions to ensure one's immediate survival – survival cannibalism – or by isolated individuals considered criminal and often pathological by society at large – cannibalism as psychopathology or "aberrant behavior".
Institutionalized cannibalism, sometimes also called "learned cannibalism", is the consumption of human body parts as "an institutionalized practice" generally accepted in the culture where it occurs.
By contrast, survival cannibalism means "the consumption of others under conditions of starvation such as shipwreck, military siege, and famine, in which persons normally averse to the idea are driven [to it] by the will to live". Also known as famine cannibalism, such forms of cannibalism resorted to only in situations of extreme necessity have occurred in many cultures where cannibalism is otherwise clearly rejected. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party. Such cases often involve only necro-cannibalism (eating the corpse of someone already dead) as opposed to homicidal cannibalism (killing someone for food). In modern English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. This decision outlawed and effectively ended the practice of shipwrecked sailors drawing lots in order to determine who would be killed and eaten to prevent the others from starving, a time-honoured practice formerly known as a "custom of the sea".
In other cases, cannibalism is an expression of a psychopathology or mental disorder, condemned by the society in which it occurs and "considered to be an indicator of [a] severe personality disorder or psychosis". Well-known cases include Albert Fish, Issei Sagawa, and Armin Meiwes.
Exo-, endo-, and autocannibalism
Within institutionalized cannibalism, exocannibalism is often distinguished from endocannibalism. Endocannibalism refers to the consumption of a person from the same community. Often it is a part of a funerary ceremony, similar to burial or cremation in other cultures. The consumption of the recently deceased in such rites can be considered "an act of affection" and a major part of the grieving process. It has also been explained as a way of guiding the souls of the dead into the bodies of living descendants.
In contrast, exocannibalism is the consumption of a person from outside the community. It is frequently "an act of aggression, often in the context of warfare", where the flesh of killed or captured enemies may be eaten to celebrate one's victory over them.
Both types of cannibalism can also be fuelled by the belief that eating a person's flesh or internal organs will endow the cannibal with some of the characteristics of the deceased. However, several authors investigating exocannibalism in New Zealand, New Guinea, and the Congo Basin observe that such beliefs were absent in these regions.
A further type, different from both exo- and endocannibalism, is autocannibalism (also called autophagy or self-cannibalism), "the act of eating parts of oneself". It does not ever seem to have been an institutionalized practice, but occasionally occurs as pathological behaviour, or due to other reasons such as curiosity. Also on record are instances of forced autocannibalism committed as acts of aggression, where individuals are forced to eat parts of their own bodies as a form of torture.
Additional motives and explanations
Exocannibalism is thus often associated with the consumption of enemies as an act of aggression, a practice also known as war cannibalism. Endocannibalism is often associated with the consumption of deceased relatives in funerary rites driven by affection – a practice known as funerary or mortuary cannibalism. But acts of institutionalized cannibalism can also be driven by various other motives, for which additional names have been coined.
Medicinal cannibalism (also called medical cannibalism) means "the ingestion of human tissue ... as a supposed medicine or tonic". In contrast to other forms of cannibalism, which Europeans generally frowned upon, the "medicinal ingestion" of various "human body parts was widely practiced throughout Europe from the sixteenth to the eighteenth centuries", with early records of the practice going back to the first century CE. It was also frequently practised in China.
Sacrificial cannibalism refers to the consumption of the flesh of victims of human sacrifice, for example among the Aztecs. Human and animal remains excavated in Knossos, Crete, have been interpreted as evidence of a ritual in which children and sheep were sacrificed and eaten together during the Bronze Age. According to Ancient Roman reports, the Celts in Britain practised sacrificial cannibalism, and archaeological evidence backing these claims has by now been found.
Human predation is the hunting of people from unrelated and possibly hostile groups in order to eat them. In parts of the Southern New Guinea lowland rain forests, hunting people "was an opportunistic extension of seasonal foraging or pillaging strategies", with human bodies just as welcome as those of animals as sources of protein, according to the anthropologist Bruce M. Knauft. As populations living near coasts and rivers were usually better nourished and hence often physically larger and stronger than those living inland, they "raided inland 'bush' peoples with impunity and often with little fear of retaliation". Cases of human predation are also on record for the neighbouring Bismarck Archipelago and for Australia. In the Congo Basin, there lived groups such as the Zappo Zaps who hunted humans for food even when game was plentiful.
The term gastronomic cannibalism has been suggested for cases where human flesh is eaten to "provide a supplement to the regular diet" – thus essentially for its nutritional value – or, in an alternative definition, for cases where it is "eaten without ceremony (other than culinary), in the same manner as the flesh of any other animal". While the term has been criticized as being too vague to clearly identify a specific type of cannibalism, various records indicate that nutritional or culinary concerns could indeed play a role in such acts even outside of periods of starvation. Referring to the Congo Basin, where many of the eaten were butchered slaves rather than enemies killed in war, the anthropologist Emil Torday notes that "the most common [reason for cannibalism] was simply gastronomic: the natives loved 'the flesh that speaks' [as human flesh was commonly called] and paid for it". The historian Key Ray Chong observes that, throughout Chinese history, "learned cannibalism was often practiced ... for culinary appreciation".
In his popular book Guns, Germs and Steel, Jared Diamond suggests that "protein starvation is probably also the ultimate reason why cannibalism was widespread in traditional New Guinea highland societies", and both in New Zealand and Fiji, cannibals explained their acts as due to a lack of animal meat. In Liberia, a former cannibal argued that it would have been wasteful to let the flesh of killed enemies spoil, and eaters of human flesh in the Bismarck Archipelago expressed the same sentiment. In many cases, human flesh was also described as particularly delicious, especially when it came from women, children, or both. Such statements are on record for various regions and peoples, including the Aztecs, today's Liberia and Nigeria, the Fang people in west-central Africa, the Congo Basin, 12th to 14th-century China, Sumatra, Australia, New Zealand, and Fiji.
There is a debate among anthropologists on how important functionalist reasons are for the understanding of institutionalized cannibalism. Diamond is not alone in suggesting "that the consumption of human flesh was of nutritional benefit for some populations in New Guinea" and the same case has been made for other "tropical peoples ... exploiting a diverse range of animal foods", including human flesh. The materialist anthropologist Marvin Harris argued that a "shortage of animal protein" was also the underlying reason for Aztec cannibalism. The cultural anthropologist Marshall Sahlins, on the other hand, rejected such explanations as overly simplistic, stressing that cannibal customs must be regarded as "complex phenomen[a]" with "myriad attributes" which can only be understood if one considers "symbolism, ritual, and cosmology" in addition to their "practical function".
While not a motive, the term innocent cannibalism has been suggested for cases of people eating human flesh without knowing what they are eating. It is a subject of myths, such as the myth of Thyestes who unknowingly ate the flesh of his own sons. There are also actual cases on record, for example from the Congo Basin, where cannibalism had been quite widespread and where even in the 1950s travellers were sometimes served a meat dish, learning only afterwards that the meat had been of human origin.
In pre-modern medicine, an explanation given by the now-discredited theory of humorism for cannibalism was that it was caused by a black acrimonious humor, which, being lodged in the linings of the ventricles of the heart, produced a voracity for human flesh. On the other hand, the French philosopher Michel de Montaigne understood war cannibalism as a way of expressing vengeance and hatred towards one's enemies and celebrating one's victory over them, thus giving an interpretation that is close to modern explanations. He also pointed out that some acts of Europeans in his own time could be considered as equally barbarous, making his essay "Of Cannibals" () a precursor to later ideas of cultural relativism.
Medical aspects
A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite.
In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, claiming to have found a data collection bias that led to an erroneous conclusion: some of the incidents of cannibalism used in the analysis were attributable not to local cultural practices but to explorers, stranded seafarers, or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions.
Myths, legends and folklore
Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology and the witch Baba Yaga of Slavic folklore.
A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others.
The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh.
The wechuge is a demonic cannibalistic creature that seeks out human flesh appearing in the mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms.
Scepticism
William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various "classic" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his "brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism". Shirley Lindenbaum notes that, while after "Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data", the outcome was an improved and "more nuanced" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: "Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously."
Lindenbaum and others point out that Arens displays a "strong ethnocentrism". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea "that cannibalism is the worst thing of all" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this "a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world."
She observes that, contrary to European values and expectations, "in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side." And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and "neither logic nor historical evidence justifies" this viewpoint, as Christian Siefkes commented.
Accusations of cannibalism could be used to characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were "castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered" for consumption), but he nevertheless notes "that these people are more civilized than the other islanders" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and "gross exaggerations", but others (by Chanca, Columbus himself, and other early travellers) show "genuine interest and respect for the natives" and include "numerous cases of sincere praise".
Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them "splendid" and "the finest people" and not rarely, like Chanca, actually considering them as "far in advance of" and "intellectually and morally superior" to the non-cannibals around them. Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as "particularly ferocious and repulsive", noting instead that many cannibals he met were "no more ferocious than" others and "indeed ... very nice people".
Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals became exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media "exaggerated the aborigines' violent nature", in some cases by wrongly accusing them of cannibalism.
This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed and even critics such as Margaret Mutu grant that cannibalism was "definitely" practised and that it was "part of our [Māori] culture."
History
Among modern humans, cannibalism has been practised by various groups. It was practised by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Maori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States as well (at Cowboy Wash in Colorado).
Prehistory
There is evidence, both archaeological and genetic, that cannibalism has been practised for hundreds of thousands of years by early Homo sapiens and archaic hominins. Human bones that have been "de-fleshed" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites.
It seems likely that not all instances of prehistoric cannibalism were due to the same reason, just as cannibalistic acts known from the historical record have been motivated by a variety of reasons. One suggested reason for cannibalism in the Lower and Middle Paleolithic is food shortage. It has also been suggested that removing dead bodies through ritual (funerary) cannibalism was a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that after major epidemics, when human corpses are easily accessible to predators, there are more cases of man-eating leopards; removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might therefore have been a practical way for hominids and early humans to limit predation.
The oldest archaeological evidence of hominid cannibalism comes from the Gran Dolina cave in northern Spain. The remains of at least eleven individuals, who died about 800,000 years ago and may have belonged to the species Homo antecessor, show unmistakable signs of having been butchered and consumed in the same way as animals whose bones were also found at the site. All of the individuals were young, ranging from infancy to their late teens. A study of this case considers it an instance of "nutritional" cannibalism, where individuals belonging to hostile or unrelated groups were hunted, killed, and eaten much like animals. Based on the placement and processing of human and animal remains, the authors conclude that cannibalism was likely a "repetitive behavior over time as part of a culinary tradition", not caused by starvation or other exceptional circumstances. They suggest that young individuals (more than half of whom were children under ten) were targeted because they "posed a lower risk for hunters" and because this was an effective means for limiting the growth of competing groups.
Several sites in Croatia, France, and Spain yield evidence that the Neanderthals sometimes practised cannibalism, though the interpretation of some of the finds remains controversial.
Neanderthals could also fall victim to cannibalism by anatomically modern humans. Evidence found in southwestern France indicates that the latter butchered and ate a Neanderthal child about 30,000 years ago; it is unknown whether the child was killed by them or died of other reasons. The find has been considered as strengthening the conjecture that modern humans might have hunted Neanderthals and in this way contributed to their extinction.
In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels.
The archaeological site of Herxheim in southwestern Germany was a ritual center and a mass grave formed by people of the Linear Pottery culture in Neolithic Europe. It contained the scattered remains of more than 1000 individuals from different, in some cases faraway regions, who died around 5000 BCE. Whether they were war captives or human sacrifices is unclear, but the evidence indicates that their corpses were spit-roasted whole and then consumed.
At Fontbrégoua Cave in southeastern France, the remains of six people who lived about 7,000 years ago were found (two children, one adolescent, and three adults), in addition to animal bones. The patterns of cut marks indicate that both humans and animals were skinned and processed in similar ways. Since the human victims were all processed at the same time, the main excavator, Paola Villa, suspects that they all belonged to the same family or extended family and were killed and butchered together, probably during some kind of violent conflict. Others have argued that the traces were caused by defleshing rituals preceding a secondary burial, but the fact that both humans and wild and domestic animals were processed in the same way makes this unlikely; moreover, Villa argues that the observed traces better fit a typical butchering process than a secondary burial.
Researchers have also found physical evidence of cannibalism from more recent times, including from Prehistoric Britain. In 2001, archaeologists at the University of Bristol found evidence of cannibalism practised around 2000 years ago in Gloucestershire, South West England. This is in agreement with Ancient Roman reports that the Celts in Britain practised human sacrifice, killing and eating captured enemies as well as convicted criminals.
Early history
Cannibalism is mentioned many times in early history and literature. The oldest written reference may be from the tomb of the ancient Egyptian king Unas (24th century BCE). It contained a hymn in praise of the king portraying him as a cannibal who eats both "men" and "gods", thus indicating an attitude towards cannibalism quite different from the modern one.
Herodotus claimed in his Histories (5th century BCE) that after eleven days' voyage up the Borysthenes (Dnieper River) one reached a desolated land that extended for a long way, followed by a country of man-eaters (other than the Scythians), and beyond it by another desolated and uninhabited area.
The Stoic philosopher Chrysippus approved of eating one's dead relatives in a funerary ritual, noting that such rituals were common among many peoples.
Cassius Dio recorded cannibalism practised by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and consumed two Roman officers in a ritualistic fashion, swearing an oath over their entrails.
According to Appian, during the Roman siege of Numantia in the 2nd century BCE, the population of Numantia (in today's Spain) was reduced to cannibalism and suicide. Cannibalism was also reported by Josephus during the siege of Jerusalem in 70 CE.
Jerome, in his letter Against Jovinianus (written 393 CE), discusses how people came to their present condition as a result of their heritage, and lists several examples of peoples and their customs. In the list, he mentions that he has heard that the Attacotti (in Britain) eat human flesh and that the Massagetae and Derbices (two Central Asian peoples) kill and eat old people, considering this a more desirable fate than dying of old age and illness.
Middle Ages
The Americas
There is universal agreement that some Mesoamerican people practised human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread. At one extreme, the anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era accept that there was ritual cannibalism related to human sacrifices, they often reject suggestions that human flesh could have been a significant portion of the Aztec diet. Cannibalism was also associated with acts of warfare, and has been interpreted as an element of blood revenge in war.
West Africa
When the Moroccan explorer Ibn Battuta visited the Mali Empire in the 1350s, he was surprised to see sultan Sulayman give "a slave girl as part of his reception-gift" to a group of warriors from a cannibal region who had come to visit his court. "They slaughtered her and ate her and smeared their faces and hands with her blood and came in gratitude to the sultan." He was told that the sultan did so every time he received the cannibal guests. Though a Muslim like Ibn Battuta himself, he apparently considered catering to his visitors' preferences more important than whatever reservations he may have had about the practice. Other Muslim authors writing around that time also report that cannibalism was practised in some West African regions and that slave girls were sometimes slaughtered for food, since "their flesh is the best thing we have to eat."
Europe and Europeans
Cases of cannibalism were recorded during the First Crusade, as there are various accounts of crusaders consuming the bodies of their dead opponents following the sieges of Antioch and of Ma'arra in 1097–1098. While the Christian sources all explain these acts as due to hunger, Amin Maalouf is sceptical of this justification, arguing that the crusaders' behaviour indicates they might have been driven by "fanaticism" rather than, or in addition to, "necessity". Thomas Asbridge states that, while the "cannibalism at Marrat is among the most infamous of all the atrocities perpetrated by the First Crusaders", it nevertheless had "some positive effects on the crusaders' short-term prospects", since reports of their brutality convinced many Muslim commanders to accept truces rather than trying to fight them.
During Europe's Great Famine of 1315–1317, there were various reports of cannibalism among starving people.
Western Asia
Charges of cannibalism were levelled against the Qizilbash of the Safavid ruler Ismail I.
China
Cannibalism has been repeatedly recorded throughout China's well-documented history. The sinologist Bengt Pettersson found references to more than three hundred different episodes of cannibalism in the Official Dynastic Histories alone. Most episodes occurred in the context of famine or war, or were otherwise motivated by vengeance or medical reasons. More than half of the episodes recorded in the Official Histories describe cases motivated by food scarcity during famines or in times of war. Pettersson observes that the records of such events "neither encouraged nor condemned" the consumption of human flesh under such circumstances, rather accepting it as an unavoidable way of "coping with a life-threatening situation".
In other cases, cannibalism was an element of vengeance or punishment – eating the hearts and livers, or sometimes the whole bodies, of killed enemies was a way of further humiliating them and sweetening the revenge. Both private individuals and state officials engaged in such acts, especially from the 4th to the 10th century CE, but in some cases right until the end of Imperial China (in 1912). More than 70 cases are listed in the Official Histories alone. In warfare, human flesh could be eaten out of a lack of other provisions, but also out of hatred against the enemy or to celebrate one's victory. Not just enemy fighters, but also their "servants and concubines were all steamed and eaten", according to one account.
At least since the Tang dynasty (618–907), the consumption of human flesh was considered a highly effective medical treatment, recommended in the Bencao Shiyi, an influential medical reference book published in the early 8th century, as well as in similar later manuals. Together with the ethical ideal of filial piety, according to which young people were supposed to do everything in their power to support their parents and parents-in-law, this idea led to a unique form of voluntary cannibalism, in which a young person cut some of the flesh out of their body and gave it to an ill parent or parent-in-law for consumption. The majority of the donors were women, frequently daughters-in-law of the patient.
The Official Histories describe more than 110 cases of such voluntary offerings that took place between the early 7th and the early 20th century. While these acts were (at least nominally) voluntary and the donors usually (though not always) survived them, several sources also tell of children and adolescents who were killed so that their flesh could be eaten for medical purposes.
During the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighbouring areas for victims to eat), and (on a large scale) by both soldiers and civilians during the siege of Suiyang, a decisive episode of the An Lushan Rebellion. Eating an enemy's heart and liver was also repeatedly mentioned as a feature of both official punishments and private vengeance. The final decades of the dynasty were marked by large-scale rebellions, during which both rebels and regular soldiers butchered prisoners for food and killed and ate civilians. Sometimes "the rebels captured by government troops were [even] sold as food", according to several of the Official Histories, while warlords likewise relied on the sale of human flesh to finance their rebellions. An Arab traveller visiting China during this time noted with surprise: "cannibalism [is] permissible for them according to their legal code, for they trade in human flesh in their markets."
References to cannibalizing the enemy also appear in poetry written in the subsequent Song dynasty (960–1279) – for example, in Man Jiang Hong – although they are perhaps meant symbolically, expressing hatred towards the enemy. The Official Histories covering this period record various cases of rebels and bandits eating the flesh of their victims.
The flesh of executed criminals was sometimes cut off and sold for consumption. During the Tang dynasty a law was enacted that forbade this practice, but whether the law was effectively enforced is unclear. The sale of human flesh is also repeatedly mentioned during famines, in accounts ranging from the 6th to the 15th century. Several of these accounts mention that animal flesh was still available, but had become so expensive that few could afford it. Dog meat was five times as expensive as human flesh, according to one such report. Sometimes, poor men sold their own wives or children to butchers who slaughtered them and sold their flesh. Cannibalism in famine situations seems to have been generally tolerated by the authorities, who did not intervene when such acts occurred.
A number of accounts suggest that human flesh was occasionally eaten for culinary reasons. An anecdote told about Duke Huan of Qi (7th century BCE) claims that he was curious about the taste of "steamed child", having already eaten everything else. His cook supposedly killed his own son to prepare the dish, and Duke Huan judged it to be "the best food of all". In later times, wealthy men, among them a son of the 4th-century emperor Shi Hu and an "open and high-spirited" man who lived in the 7th century CE, served the flesh of purchased women or children during lavish feasts. One sinologist observes that while such acts were not common, they do not seem to have been rare exceptions, and the hosts apparently did not have to face ostracism or legal prosecution. Key Ray Chong even concludes that "learned cannibalism was often practiced ... for culinary appreciation, and exotic dishes [of human flesh] were prepared for jaded upper-class palates".
The Official Histories mention 10th-century officials who liked to eat the flesh of babies and children, and during the Jin dynasty (1115–1234), human flesh seems to have been readily available at the home of a general, who supposedly served it to one of his guests as a practical joke. Accounts from the 12th to 14th centuries indicate that both soldiers and writers praised this flesh as particularly delicious, considering especially children's flesh as unsurpassable in taste.
Pettersson observes that people generally seem to have had fewer reservations about the consumption of human flesh than one might expect today. While survival cannibalism during famines was regarded as a lamentable necessity, accounts explaining the practice as due to other reasons, such as vengeance or filial piety, were often even positive.
Early modern and colonial era
The Americas
European explorers and colonizers brought home many stories of cannibalism practised by the native peoples they encountered. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, unless they were "guilty" of cannibalism. The accusation of cannibalism became a pretext for attacks on indigenous groups and a justification for the Spanish conquest. In Yucatán, the shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported having witnessed fellow Spaniards being sacrificed and eaten, and escaped from the captivity in which he was being fattened for sacrifice himself. The Florentine Codex (1576), compiled by the Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses, contains questionable evidence of Mexica (Aztec) cannibalism. The Franciscan friar Diego de Landa reported instances in Yucatán.
In early Brazil, there are reports of cannibalism among the Tupinamba. Of the natives of the captaincy of Sergipe in Brazil, it was recorded: "They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both." (see human placentophagy).
The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology), claims that North American natives practising cannibalism included "... the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters.'" The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior. From another source, according to Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart.
As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of "savages". The historian Patrick Brantlinger suggests that Indigenous peoples that were colonized were being dehumanized as part of the justification for the atrocities.
Among settlers, sailors, and explorers
This period was also rife with instances of explorers and seafarers resorting to cannibalism for survival. There is archaeological and written evidence of cannibalism among English settlers in the Jamestown Colony in 1609 under famine conditions, during a period that became known as the Starving Time.
Sailors shipwrecked or lost at sea repeatedly resorted to cannibalism to stave off starvation. The survivors of the sinking of the French ship Méduse in 1816 resorted to cannibalism after four days adrift on a raft. Their plight was made famous by Théodore Géricault's painting The Raft of the Medusa. After a whale sank the Essex of Nantucket on November 20, 1820, the survivors, in three small boats, resorted, by common consent, to cannibalism in order for some to survive. This event became an important source of inspiration for Herman Melville's Moby-Dick.
The case of R v Dudley and Stephens (1884) is an English criminal case which dealt with four crew members of an English yacht, the Mignonette, who were cast away in a storm some distance from the Cape of Good Hope. After several days, one of the crew, a seventeen-year-old cabin boy, fell unconscious due to a combination of starvation and drinking seawater. The others (one possibly objecting) decided to kill him and eat him. They were picked up four days later. Two of the three survivors were found guilty of murder. A significant outcome of this case was that necessity in English criminal law was determined to be no defence against a charge of murder. This was a break with the traditional understanding among sailors, which had been that selecting a victim for killing and consumption was acceptable in a starvation situation as long as lots were drawn so that all faced an equal risk of being killed.
On land, travellers through sparsely inhabited regions and explorers of unknown areas sometimes ate human flesh after running out of other provisions. In a famous example from the 1840s, the members of the Donner Party found themselves stranded by snow in the Donner Pass, a high mountain pass in California, without adequate supplies during the Mexican–American War, leading to several instances of cannibalism, including the murder of two young Native American men for food. Sir John Franklin's lost polar expedition, which took place at approximately the same time, is another example of cannibalism out of desperation.
In frontier situations where there was no strong authority, some individuals got used to killing and eating others even in situations where other food would have been available. One notorious case was the mountain man Boone Helm, who became known as "The Kentucky Cannibal" for eating several of his fellow travellers from 1850 until his eventual hanging in 1864.
West Africa
The Leopard Society was a cannibalistic secret society that existed until the mid-1900s and was active mostly in regions that today belong to Sierra Leone, Liberia and Ivory Coast. The Leopard men would dress in leopard skins and waylay travellers with sharp claw-like weapons in the form of leopards' claws and teeth. The victims' flesh would be cut from their bodies and distributed to members of the society.
Central Africa
Cannibalism was practised widely in some parts of the Congo Basin, though it was by no means universal. Some peoples, such as the Bakongo, rejected the practice altogether. In some other regions human flesh was eaten "only occasionally to mark a particularly significant ritual occasion, but in other societies in the Congo, perhaps even a majority by the late nineteenth century, people ate human flesh whenever they could, saying that it was far tastier than other meat", notes the anthropologist Robert B. Edgerton.
Many people not only freely admitted eating human flesh, but were surprised when they heard that Europeans did not eat it. Emil Torday observed: "They are not ashamed of cannibalism, and openly admit that they practise it because of their liking for human flesh", with the primary reason for cannibalism being a "gastronomic" preference for such dishes. Torday once received "a portion of a human thigh" sent as a well-intended gift, and other Europeans were offered pieces of human flesh in gestures of hospitality. People expected to be rewarded with fresh human flesh for services well performed and were disappointed when they received something else instead.
In addition to enemies killed or captured in war, slaves were frequent victims. Many "healthy children" had to die "to provide a feast for their owners". Young slave children were at particular risk since they were in low demand for other purposes and since their flesh was widely praised as especially delicious, "just as many modern meat eaters prefer lamb over mutton and veal over beef". Such acts were not considered controversial – people did not understand why Europeans objected to the killing of slaves, while themselves killing and eating goats; they argued that both were the "property" of their owners, to be used as it pleased them.
A third group of victims were persons from other ethnic groups, who in some areas were "hunt[ed] for food" just like animals. Many of the victims, who were usually killed with poisoned arrows or with clubs, were "women and children ... who had ventured too far from home while gathering firewood or fetching drinking water" and who were targeted "because they were easier to overpower" and also considered tastier than adult men.
In some regions there was a regular trade in slaves destined to be eaten, and the flesh of recently butchered slaves was available for purchase as well. Some people fattened slave children to sell them for consumption; if such a child became ill and lost too much weight, their owner drowned them in the nearest river instead of wasting further food on them, as a French missionary once witnessed. Human flesh not sold the same day was smoked, so it could be "sold at leisure" during subsequent weeks. Europeans were often hesitant to buy smoked meat since they knew that the "smoking of human flesh to preserve it was ... widespread", but once meat was smoked, its origin was hard to determine.
Instead of being killed quickly, "persons to be eaten often had both of their arms and legs broken and were made to sit up to their necks in a stream for [up to] three days, a practice said to make their flesh more tender, before they were killed and cooked." Both adults and children, and also animals such as birds and monkeys, were routinely submitted to this treatment prior to being slaughtered.
Various reports indicate that living slaves were exposed on marketplaces, so that purchasers could choose which body parts to buy before the victim was butchered and the flesh distributed.
This custom, reported around both the central Congo River and the Ubangi in the north, seems to have been motivated by a desire to obtain fresh rather than smoked flesh, since without refrigeration there was no other way to keep flesh from spoiling quickly.
Killed or captured enemies constituted another group of victims, even during wars fought by the colonial state. During the 1892–1894 war between the Congo Free State and the Swahili–Arab city-states of Nyangwe and Kasongo in Eastern Congo, there were reports of widespread cannibalization of the bodies of defeated combatants by the Batetela allies of the Belgian commander Francis Dhanis. In April 1892, 10,000 Batetela, under the command of Gongo Lutete, joined forces with Dhanis in a campaign against the Swahili–Arab leaders Sefu and Mohara. After one early skirmish in the campaign, Dhanis's medical officer, Captain Sidney Langford Hinde, "noticed that the bodies of both the killed and wounded had vanished." When fighting broke out again, Hinde saw his Batetela allies drop human arms, legs and heads on the road; now he had to accept that they had really "carried them off for food", which he had initially doubted.
According to Hinde, the conquest of Nyangwe was followed by "days of cannibal feasting" during which hundreds were eaten, with only their heads being kept as mementos. During this time, Lutete "hid himself in his quarters, appalled by the sight of thousands of men smoking human hands and human chops on their camp fires, enough to feed his army for many days." Hinde also noted that the Batetela town Ngandu had "at least 2,000 polished human skulls" as a "solid white pavement in front" of its gates, with human skulls crowning every post of the stockade.
Soon after, Nyangwe's surviving population rose in a rebellion, during whose brutal suppression a thousand rioters were killed by the new government. One young Belgian officer wrote home: "Happily Gongo's men ... ate them up [in a few hours]. It's horrible but exceedingly useful and hygienic.... I should have been horrified at the idea in Europe! but it seems quite natural to me here. Don't show this letter to anyone indiscreet". Hinde too commented approvingly on the thoroughness with which the cannibals "disposed of all the dead, leaving nothing even for the jackals, and thus sav[ing] us, no doubt, from many an epidemic." Generally the Free State administration seems to have done little to suppress cannibal customs, sometimes even tolerating or facilitating them among its own auxiliary troops and allies.
In August 1903, the UK diplomat Roger Casement wrote from Lake Tumba to a consular colleague: "The people round here are all cannibals.... There are also dwarfs (called Batwas) in the forest who are even worse cannibals than the taller human environment. They eat man flesh raw! It's a fact." He added that assailants would "bring down a dwarf on the way home, for the marital cooking pot.... The Dwarfs, as I say, dispense with cooking pots and eat and drink their human prey fresh cut on the battlefield while the blood is still warm and running. These are not fairy tales ..., but actual gruesome reality in the heart of this poor, benighted savage land."
The origins of Congolese cannibalism are lost in time. The oldest known references to it can be found in Filippo Pigafetta's Report of the Kingdom of Congo, published in the late 16th century based on the memories of Duarte Lopez, a Portuguese trader who had lived for several years in the Kingdom of Kongo. Lopez reported that farther up the Congo River, there lived a people who ate both killed enemies and those of their slaves which they could not sell for a "good price".
Oral records indicate that, already at a time when slavery was not widespread in the Congo Basin, people assumed that anyone sold as a slave would likely be eaten, "because cannibalism was common, and slaves were purchased especially for such purposes". In the 19th century, warfare and slave raids increased in the Congo Basin as a result of the international demand for slaves, who could no longer be so easily captured nearer to the coasts. As a result, the consumption of slaves increased as well, since most of those sold in the Atlantic slave trade were young and healthy individuals aged from 14 to 30, and similar preferences existed in the Arab–Swahili slave trade. However, many of the captives were younger, older, or otherwise considered less saleable, and such victims were often eaten by the slave raiders or sold to cannibals who purchased them as "meat".
Most of the accounts of cannibalism in the Congo are from the late 19th century, when the Atlantic slave trade had come to a halt, but slavery still existed in Africa and the Arab world. Various reports indicate that around the Ubangi River, slaves were frequently exchanged against ivory, which was then exported to Europe or the Americas, while the slaves were eaten. Some European traders seem to have directly and knowingly taken part in these deadly transactions, while others turned a blind eye. The local elephant hunters preferred the flesh especially of young human beings – four to sixteen was the preferred age range, according to one trader – "because it was not only more tender, but also much quicker to cook" than the meat of elephants or other large animals.
While sceptics such as William Arens sometimes claim that there are no credible eyewitness accounts of cannibal acts, there are numerous such accounts from the Congo. David Livingstone "saw human parts being cooked with bananas, and many other Europeans" – among them Hinde – "reported seeing cooked human remains lying around abandoned fires." Soldiers of the German explorer Hermann Wissmann saw how people captured and wounded in a slave raid were shot by a Swahili–Arab leader and then handed over "to his auxiliary troops, who ... cut them in pieces and dragged them to the fire to serve as their supper". Visiting a village near the Aruwimi River, the British artist Herbert Ward saw a man "carrying four large lumps of human flesh, with the skin still clinging to it, on a stick", and soon afterwards "a party of men squatting round a fire, before which this ghastly flesh, exposed on spits, was cooking"; he was told that the flesh came from a man who had been killed a few hours before. Another time, when "camping for the night with a party of Arab raiders and their followers", he and his companions felt "compelled to change the position of our tent owing to the offensive smell of human flesh, which was being cooked on all sides of us."
The Belgian colonial officer Camille Coquilhat saw "the remaining half of [a] steamed man" – a slave who had been purchased for consumption and slaughtered a few hours earlier – "in an enormous pot" and discussed the matter with the slave's owner, who at first thought that Coquilhat was joking when he objected to his cannibalistic customs. Near the Ubangi River, which formed the border between the Belgian and the French colonial enterprises, a French traveller saw local auxiliaries of the French troops kill "some women and some children" after a punitive expedition, then cooking their flesh in pots and "enjoy[ing]" it.
Among the Mangbetu people in the north-east, Georg A. Schweinfurth saw a human arm being smoked over a fire. On another occasion, he watched a group of young women using boiling water for "scalding the hair off the lower half of a human body" in preparation for cooking it. A few years later, Gaetano Casati saw how the roasted leg of a slave woman was served at the court of the Mangbetu king. More eyewitness accounts could be added.
Europe
From the 16th century on, an unusual form of medical cannibalism became widespread in several European countries, for which thousands of Egyptian mummies were ground up and sold as medicine. Powdered human mummy – called mummia – was thought to stop internal bleeding and to have other healing properties. The practice developed into a widespread business that flourished until the early 18th century. The demand was much higher than the supply of ancient mummies, leading to much of the offered "mummia" being counterfeit, made from recent Egyptian or European corpses – often from the gallows – instead. In a few cases, mummia was still offered in medical catalogues in the early 20th century.
Australia
Hundreds of accounts exist of cannibalism among Aboriginal Australians in all parts of Australia, with the possible exception of Tasmania, dating from the first European settlement to the 1930s and later. While it is generally accepted that some forms of cannibalism were practised in Australia in certain circumstances, the prevalence and meaning of such acts in pre-colonial Aboriginal societies are disputed.
Before colonization, Aboriginal Australians were predominantly nomadic hunter-gatherers at times lacking in protein sources. Reported cases of cannibalism include killing and eating small children (infanticide was widely practised as a means of population control and because mothers had trouble carrying two young children not yet able to walk) and enemy warriors slain in battle.
In the late 1920s, the anthropologist Géza Róheim heard from Aboriginals that infanticidal cannibalism had been practised especially during droughts. "Years ago it had been custom for every second child to be eaten" – the baby was roasted and consumed not only by the mother, but also by the older siblings, who benefited from this meat during times of food scarcity. One woman told him that her little sister had been roasted, but denied having eaten of her. Another "admitted having killed and eaten her small daughter", and several other people he talked to remembered having "eaten one of their brothers". The consumption of infants took two different forms, depending on where it was practised.
Usually only babies who had not yet received a name (which happened around the first birthday) were consumed, but in times of severe hunger, older children (up to four years or so) could be killed and eaten too, though people tended to have bad feelings about this. Babies were killed by their mother, while a bigger child "would be killed by the father by being beaten on the head". But cases of women killing older children are on record too. In 1904 a parish priest in Broome, Western Australia, stated that infanticide was very common, including one case where a four-year-old was "killed and eaten by its mother", who later became a Christian.
The journalist and anthropologist Daisy Bates, who spent a long time among Aboriginals and was well acquainted with their customs, knew an Aboriginal woman who one day left her village to give birth a mile away, taking only her daughter with her. She then "killed and ate the baby, sharing the food with the little daughter." After her return, Bates found the place and saw "the ashes of a fire" with the baby's "broken skull, and one or two charred bones" in them. She states that "baby cannibalism was rife among these central-western peoples, as it is west of the border in Central Australia."
The Norwegian ethnographer Carl Sofus Lumholtz confirms that infants were commonly killed and eaten especially in times of food scarcity. He notes that people spoke of such acts "as an everyday occurrence, and not at all as anything remarkable."
Some have interpreted the consumption of infants as a religious practice: "In parts of New South Wales ..., it was customary long ago for the first-born of every lubra [Aboriginal woman] to be eaten by the tribe, as part of a religious ceremony." However, there seems to be no direct evidence that such acts actually had a religious meaning, and the Australian anthropologist Alfred William Howitt rejects the idea that the eaten were human sacrifices as "absolutely without foundation", arguing that religious sacrifices of any kind were unknown in Australia.
Another frequently reported practice was funerary endocannibalism, the cooking and consumption of the deceased as a funerary rite.
According to Bates, exocannibalism was also practised in many regions. Foreigners and members of different ethnic groups were hunted and eaten much like animals. She met "fine sturdy fellows" who "frankly admitted the hunting and sharing of kangaroo and human meat as frequently as that of kangaroo and emu." The bodies of the killed were roasted whole in "a deep hole in the sand". There were also "killing vendettas", in which a hostile settlement was attacked and as many persons as possible killed, whose flesh was then shared according to well-defined rules: "The older men ate the soft and virile parts, and the brain; swift runners were given the thighs; hands, arms or shoulders went to the best spear-throwers, and so on." Referring to the coast of the Great Australian Bight, Bates writes: "Cannibalism had been rife for centuries in these regions and for a thousand miles north and east of them." Human flesh was not eaten for spiritual reasons and not only due to hunger; rather it was considered a "favourite food".
Lumholtz similarly notes that "the greatest delicacy known to the Australian native is human flesh", even adding that the "appetite for human flesh" was the primary motive for killing. Unrelated individuals and isolated families were attacked just to be eaten and any stranger was at risk of being "pursued like a wild beast and slain and eaten". Acquiring human flesh in this manner was something to be proud of, not a reason for shame. He stresses that such flesh was nevertheless by no means a "daily food", since opportunities to capture victims were relatively rare. One specific instance of kidnapping for cannibal purposes was recorded in the 1840s by the English immigrant George French Angas, who stated that several children were kidnapped, butchered, and eaten near Lake Alexandrina in South Australia shortly before he arrived there.
Polynesia and Melanesia
The first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another instance, on July 11, 1821, warriors from the Ngapuhi tribe killed 2,000 enemies and remained on the battlefield "eating the vanquished until they were driven off by the smell of decaying bodies". Māori warriors fighting the New Zealand government in Titokowaru's War in New Zealand's North Island in 1868–69 revived ancient rites of cannibalism as part of the radical Hauhau movement of the Pai Marire religion.
In parts of Melanesia, cannibalism was still practised in the early 20th century, for a variety of reasons – including retaliation, to insult an enemy people, or to absorb the dead person's qualities. One tribal chief, Ratu Udre Udre in Rakiraki, Fiji, is said to have consumed 872 people and to have made a pile of stones to record his achievement. Fiji was nicknamed the "Cannibal Isles" by European sailors, who avoided disembarking there. The dense population of the Marquesas Islands, in what is now French Polynesia, was concentrated in narrow valleys, and consisted of warring tribes, who sometimes practised cannibalism on their enemies. Human flesh was called "long pig".
Early 20th century to present
After World War I, cannibalism continued to occur as a ritual practice and in times of drought or famine. Occasional cannibal acts committed by individual criminals are documented as well throughout the 20th and 21st centuries.
World War II
Many instances of cannibalism by necessity were recorded during World War II. For example, during the 872-day siege of Leningrad, reports of cannibalism began to appear in the winter of 1941–1942, after all birds, rats, and pets were eaten by survivors. Leningrad police even formed a special division to combat cannibalism.
Some 2.8 million Soviet POWs died in Nazi custody in less than eight months during 1941–42. According to the USHMM, by the winter of 1941, "starvation and disease resulted in mass death of unimaginable proportions". This deliberate starvation led to many incidents of cannibalism.
Following the Soviet victory at Stalingrad it was found that some German soldiers in the besieged city, cut off from supplies, resorted to cannibalism. Later, following the German surrender in January 1943, roughly 100,000 German soldiers were taken prisoner of war (POW). Almost all of them were sent to POW camps in Siberia or Central Asia where, due to being chronically underfed by their Soviet captors, many resorted to cannibalism. Fewer than 5,000 of the prisoners taken at Stalingrad survived captivity.
Cannibalism took place in the concentration and death camps in the Independent State of Croatia (NDH), a Nazi German puppet state governed by the fascist Ustasha organization, which committed the Genocide of Serbs and the Holocaust in the NDH. Some survivors testified that some of the Ustashas drank the blood from the slashed throats of the victims.
The Australian War Crimes Section of the Tokyo tribunal, led by prosecutor William Webb (the future Judge-in-Chief), collected numerous written reports and testimonies that documented Japanese soldiers' acts of cannibalism among their own troops, on enemy dead, as well as on Allied prisoners of war in many parts of the Greater East Asia Co-Prosperity Sphere. In September 1942, Japanese daily rations on New Guinea consisted of 800 grams of rice and tinned meat. However, by December, this had fallen to 50 grams. According to historian Yuki Tanaka, "cannibalism was often a systematic activity conducted by whole squads and under the command of officers".
In some cases, flesh was cut from living people. A prisoner of war from the British Indian Army, Lance Naik Hatam Ali, testified that in New Guinea: "the Japanese started selecting prisoners and every day one prisoner was taken out and killed and eaten by the soldiers. I personally saw this happen and about 100 prisoners were eaten at this place by the Japanese. The remainder of us were taken to another spot away where 10 prisoners died of sickness. At this place, the Japanese again started selecting prisoners to eat. Those selected were taken to a hut where their flesh was cut from their bodies while they were alive and they were thrown into a ditch where they later died."
Another well-documented case occurred in Chichi-jima in February 1945, when Japanese soldiers killed and consumed five American airmen. This case was investigated in 1947 in a war crimes trial, and of 30 Japanese soldiers prosecuted, five (Maj. Matoba, Gen. Tachibana, Adm. Mori, Capt. Yoshii, and Dr. Teraki) were found guilty and hanged. In his book Flyboys: A True Story of Courage, James Bradley details several instances of cannibalism of World War II Allied prisoners by their Japanese captors. The author claims that this included not only ritual cannibalization of the livers of freshly killed prisoners, but also the cannibalization-for-sustenance of living prisoners over the course of several days, amputating limbs only as needed to keep the meat fresh.
There are more than 100 documented cases in Australia's government archives of Japanese soldiers practising cannibalism on enemy soldiers and civilians in New Guinea during the war. For instance, in one archived case, an Australian lieutenant describes how he discovered a scene with cannibalized bodies, including one "consisting only of a head which had been scalped and a spinal column", and states that "in all cases, the condition of the remains were such that there can be no doubt that the bodies had been dismembered and portions of the flesh cooked". In another archived case, a Pakistani corporal (who was captured in Singapore and transported to New Guinea by the Japanese) testified that Japanese soldiers cannibalized one prisoner per day – some of them still alive – for about 100 days. There was also an archived memo in which a Japanese general stated that eating anyone except enemy soldiers was punishable by death. Toshiyuki Tanaka, a Japanese scholar in Australia, mentions that in many of the cases it was done "to consolidate the group feeling of the troops" rather than due to food shortage. Tanaka also states that the Japanese committed the cannibalism under supervision of their senior officers and to serve as a power projection tool.
Jemadar Abdul Latif (a VCO of the 4/9 Jat Regiment of the British Indian Army and a POW rescued by the Australians at Sepik Bay in 1945) stated that the Japanese soldiers ate both Indian POWs and local New Guinean people. At the camp for Indian POWs in Wewak, where many died and 19 POWs were eaten, the Japanese doctor and lieutenant Tumisa would send an Indian out of the camp, after which a Japanese party would kill him and eat flesh from the body, as well as cut off and cook certain body parts (liver, buttock muscles, thighs, legs, and arms), according to Captain R. U. Pirzai in a report in The Courier-Mail of August 25, 1945.
South America
When Uruguayan Air Force Flight 571 crashed on a glacier in the Andes on October 13, 1972, many survivors resorted to eating the deceased during their 72 days in the mountains. The experiences and memories of the survivors became the source of several books and films. In an account of the accident and its aftermath, survivor Roberto Canessa described the decision to eat the pilots and their dead friends and family members.
North America
In 1991, Jeffrey Dahmer of Milwaukee, Wisconsin, was arrested after one of his intended victims managed to escape. Found in Dahmer's apartment were two human hearts, an entire torso, a bag full of human organs from his victims, and a portion of arm muscle. He stated that he planned to consume all of the body parts over the next few weeks.
West Africa
In the 1980s, Médecins Sans Frontières, the international medical charity, supplied representatives of Amnesty International with photographic and other documentary evidence of ritualized cannibal feasts among the participants in Liberia's internecine strife preceding the First Liberian Civil War. Amnesty International declined to publicize this material; the Secretary-General of the organization, Pierre Sane, said at the time in an internal communication that "what they do with the bodies after human rights violations are committed is not part of our mandate or concern". The existence of cannibalism on a wide scale in Liberia was subsequently verified.
A few years later, reports of cannibal acts committed during the Second Liberian Civil War and the Sierra Leone Civil War emerged.
Central Africa
Reports from the Belgian Congo indicate that cannibalism was still widely practised in some regions in the 1920s. Hermann Norden, an American who visited the Kasai region in 1923, found that "cannibalism was commonplace". People were afraid of walking outside of populated places because there was a risk of being attacked, killed, and eaten. Norden talked with a Belgian who "admitted that it was quite likely he had occasionally been served human flesh without knowing what he was eating" – it was simply a dish that appeared on the tables from time to time.
Other travellers heard persistent rumours that there was still a certain underground trade in slaves, some of whom (adults and children alike) were regularly killed and then "cut up and cooked as ordinary meat", around both the Kasai and the Ubangi River. The colonial state seems to have done little to discourage or punish such acts. There are also reports that human flesh was sometimes sold at markets in both Kinshasa and Brazzaville, "right in the middle of European life."
Norden observed that cannibalism was so common that people talked about it quite "casual[ly]": "No stress was put upon it, nor horror shown. This person had died of fever; that one had been eaten. It was all a matter of the way one's luck held."
The culinary use of human flesh continued in some cases even after World War II. In 1950, a Belgian administrator ate a "remarkably delicious" dish, learning after he had finished "that the meat came from a young girl." A few years later, a Danish traveller was served a piece of the "soft and tender" flesh of a butchered woman.
During the Congo Crisis, which followed the country's independence in 1960, body parts of killed enemies were eaten and the flesh of war victims was sometimes sold for consumption. In Luluabourg (today Kananga), an American journalist saw a truck smeared with blood. A police commissioner investigating the scene told her that "sixteen women and children" had been lured into the truck in a nearby village, kidnapped, and "butchered ... for meat." She also talked with a Presbyterian missionary, who excused this act as due to "protein need.... The bodies of their enemies are the only source of protein available."
In conflict situations, cannibalism persisted into the 21st century. During the first decade of the new century, cannibal acts were reported from the Second Congo War and the Ituri conflict in the northeast of the Democratic Republic of the Congo. According to UN investigators, fighters belonging to several factions "grilled" human bodies "on a barbecue"; young girls were boiled "alive in ... big pots filled with boiling water and oil" or "cut into small pieces ... and then eaten."
A UN human rights expert reported in July 2007 that sexual atrocities committed by rebel groups as well as by armed forces and national police against Congolese women go "far beyond rape" and include sexual slavery, forced incest, and cannibalism. In the Ituri region, much of the violence, which included "widespread cannibalism", was consciously directed against pygmies, who were believed to be relatively helpless and even considered subhuman by some other Congolese.
UN investigators also collected eyewitness accounts of cannibalism during a violent conflict that shook the Kasai region in 2016/2017. Various parts of killed enemies and beheaded captives were cooked and eaten, including their heads, thighs, and penises.
Cannibalism has also been reported from the Central African Republic, north of the Congo Basin. Jean-Bédel Bokassa ruled the country from 1966 to 1979 as dictator and finally as self-declared emperor. Tenacious rumours that he liked to dine on the flesh of opponents and political prisoners were substantiated by several testimonies during his eventual trial in 1986/1987. Bokassa's successor David Dacko stated that he had seen photographs of butchered bodies hanging in the cold-storage rooms of Bokassa's palace immediately after taking power in 1979. These or similar photos, said to show a walk-in freezer containing the bodies of schoolchildren arrested during protests in April 1979 and beaten to death in the 1979 Ngaragba Prison massacre, were also published in Paris Match magazine. During the trial, Bokassa's former chef testified that he had repeatedly cooked human flesh from the palace's freezers for his boss's table. While Bokassa was found guilty of murder in at least twenty cases, the charge of cannibalism was nevertheless not taken into account for the final verdict, since the consumption of human remains is considered a misdemeanor under CAR law and all previously committed misdemeanors had been forgiven by a general amnesty declared in 1981.
Further acts of cannibalism were reported to have targeted the Muslim minority during the Central African Republic Civil War which started in 2012.
East Africa
In the 1970s, the Ugandan dictator Idi Amin was reputed to practise cannibalism. More recently, the Lord's Resistance Army has been accused of routinely engaging in ritual or magical cannibalism. It has also been reported that witch doctors in the country sometimes use the body parts of children in their medicine.
During the South Sudanese Civil War, cannibalism and forced cannibalism have been reported from South Sudan.
Central and Western Europe
Before 1931, The New York Times reporter William Seabrook, apparently disappointed that he had been unable to taste human flesh in West Africa, obtained from a hospital intern at the Sorbonne a chunk of this meat from the body of a healthy man killed in an accident, then cooked and ate it. He reported that it tasted like good, fully developed veal.
Karl Denke, possibly Carl Großmann, Fritz Haarmann, and Joachim Kroll were German murderers and cannibals active between the early 20th century and the 1970s. Armin Meiwes is a former computer repair technician who achieved international notoriety for killing and eating a voluntary victim in 2001, whom he had found via the Internet. After Meiwes and the victim jointly attempted to eat the victim's severed penis, Meiwes killed his victim and proceeded to eat a large amount of his flesh. He was arrested in December 2002. In January 2004, Meiwes was convicted of manslaughter and sentenced to eight years and six months in prison. Despite the victim's undisputed consent, the prosecutors successfully appealed this decision, and in a retrial that ended in May 2006, Meiwes was convicted of murder and sentenced to life imprisonment.
On July 23, 1988, Rick Gibson ate the flesh of another person in public. Because England does not have a specific law against cannibalism, he legally ate a canapé of donated human tonsils in Walthamstow High Street, London. A year later, on April 15, 1989, he publicly ate a slice of human testicle. When he tried to eat another slice of human testicle as "hors d'oeuvre" at the Pitt International Galleries in Vancouver on July 14, 1989, the police confiscated the testicle. However, the charge of publicly exhibiting a disgusting object was dropped, and two months later he finally ate the piece of human testicle on the steps of the Vancouver court house.
In 2008, a British model called Anthony Morley was imprisoned for the killing, dismemberment and partial cannibalisation of his lover, magazine executive Damian Oldfield.
Eastern Europe and the Soviet Union
In his book The Gulag Archipelago, the Soviet writer Aleksandr Solzhenitsyn described cases of cannibalism in the 20th-century Soviet Union. Of the famine in Povolzhie (1921–1922) he wrote: "That horrible famine was up to cannibalism, up to consuming children by their own parents – the famine, which Russia had never known even in the Time of Troubles [in 1601–1603]".
The historian Orlando Figes observes that "thousands of cases" of cannibalism were reported, while the number of cases that were never reported was doubtless even higher. In Pugachyov, "it was dangerous for children to go out after dark since there were known to be bands of cannibals and traders who killed them to eat or sell their tender flesh." An inhabitant of a nearby village stated: "There are several cafeterias in the village – and all of them serve up young children." This was no exception – Figes estimates "that a considerable proportion of the meat in Soviet factories in the Volga area ... was human flesh." Various gangs specialized in "capturing children, murdering them and selling the human flesh as horse meat or beef", with the buyers happy to have found a source of meat in a situation of extreme shortage and often willing not to "ask too many questions".
Cannibalism was also widespread during the Holodomor, a man-made famine in Soviet Ukraine between 1932 and 1933.
Survival was a moral as well as a physical struggle. A woman doctor wrote to a friend in June 1933 that she had not yet become a cannibal, but was "not sure that I shall not be one by the time my letter reaches you". The good people died first. Those who refused to steal or to prostitute themselves died. Those who gave food to others died. Those who refused to eat corpses died. Those who refused to kill their fellow man died. ... At least 2,505 people were sentenced for cannibalism in the years 1932 and 1933 in Ukraine, though the actual number of cases was certainly much higher.
Most cases of cannibalism were "necrophagy, the consumption of corpses of people who had died of starvation". But the murder of children for food was common as well. Many survivors told of neighbours who had killed and eaten their own children. One woman, asked why she had done this, "answered that her children would not survive anyway, but this way she would". She was arrested by the police. The police also documented cases of children being kidnapped, killed, and eaten, and "stories of children being hunted down as food" circulated in many areas. A man who lived through the famine in his youth later remembered that "the availability of human flesh at market[s] was an open and acknowledged secret. People were glad" if they could buy it since "there was no other means to survive."
In March 1933 the secret police in Kiev Oblast collected "ten or more reports of cannibalism every day" but concluded that "in reality there are many more such incidents", most of which went unreported. Those found guilty of cannibalism were often "imprisoned, executed, or lynched". But while the authorities were well informed about the extent of cannibalism, they also tried to suppress this information from becoming widely known, the chief of the secret police warning "that written notes on the subject do not circulate among the officials where they might cause rumours".
The Holodomor was part of the Soviet famine of 1930–1933, which also devastated other parts of the Soviet Union in the early 1930s. Multiple cases of cannibalism were also reported from Kazakhstan.
A few years later, starving people again resorted to cannibalism during the siege of Leningrad (1941–1944). About this time, Solzhenitsyn writes: "Those who consumed human flesh, or dealt with the human liver trading from dissecting rooms ... were accounted as the political criminals".
Of the building of the Northern Railway Labor Camp ("Sevzheldorlag"), Solzhenitsyn reports: "An ordinary hard working political prisoner almost could not survive at that penal camp. In the camp Sevzheldorlag (chief: colonel Klyuchkin) in 1946–47 there were many cases of cannibalism: they cut human bodies, cooked and ate."
The Soviet journalist Yevgenia Ginzburg was a long-term political prisoner who spent time in Soviet prisons, Gulag camps, and settlements from 1938 to 1955. In her memoir, Harsh Route (or Steep Route), she described a case in which she was directly involved during the late 1940s, after she had been moved to the prisoners' hospital.
The chief warder shows me the black smoked pot, filled with some food: "I need your medical expertise regarding this meat." I look into the pot and can hardly keep from vomiting. The fibres of that meat are very small, and do not remind me of anything I have seen before. The skin on some pieces bristles with black hair ... Kulesh, a former smith from Poltava, worked together with Centurashvili. At this time, Centurashvili was only one month away from being discharged from the camp ... And suddenly he surprisingly disappeared ... The wardens searched for two more days, and then assumed that it was an escape case, though they wondered why, since his imprisonment period was almost over ... The crime was there. Approaching the fireplace, Kulesh killed Centurashvili with an axe, burned his clothes, then dismembered him and hid the pieces in snow, in different places, putting specific marks on each burial place. ... Just yesterday, one body part was found under two crossed logs.
India
The Aghori are Indian ascetics who believe that eating human flesh confers spiritual and physical benefits, such as prevention of ageing. They claim to only eat those who have voluntarily granted their body to the sect upon their death, but an Indian TV crew witnessed one Aghori feasting on a corpse discovered floating in the Ganges and a member of the Dom caste reports that Aghori often take bodies from cremation ghats (or funeral pyres).
China
Cannibalism is documented to have occurred in rural China during the severe famine that resulted from the Great Leap Forward (1958–1962).
During Mao Zedong's Cultural Revolution (1966–1976), local governments' documents revealed hundreds of incidents of cannibalism for ideological reasons, including large-scale cannibalism during the Guangxi Massacre. Cannibal acts occurred at public events organized by local Communist Party officials, with people taking part in them in order to prove their revolutionary passion. The writer Zheng Yi documented many of these incidents, especially those in Guangxi, in his 1993 book, Scarlet Memorial.
Pills made of human flesh were said to be used by some Tibetan Buddhists, motivated by a belief that mystical powers were bestowed upon those who consumed Brahmin flesh.
Southeast Asia
In Joshua Oppenheimer's film The Look of Silence, several of the anti-Communist militias active in the Indonesian mass killings of 1965–66 claim that drinking blood from their victims prevented them from going mad.
East Asia
Reports of widespread cannibalism began to emerge from North Korea during the famine of the 1990s and subsequent ongoing starvation. Kim Jong-il was reported to have ordered a crackdown on cannibalism in 1996, but Chinese travellers reported in 1998 that cannibalism had occurred. Three people in North Korea were reported to have been executed for selling or eating human flesh in 2006. Further reports of cannibalism emerged in early 2013, including reports of a man executed for killing his two children for food.
There are conflicting claims about how widespread cannibalism was in North Korea. While refugees reported that it was widespread, Barbara Demick wrote in her book, Nothing to Envy: Ordinary Lives in North Korea (2010), that it did not seem to be.
Melanesia
The Korowai tribe of south-eastern Papua could be one of the last surviving tribes in the world engaging in cannibalism. A local cannibal cult killed and ate victims as late as 2012.
As in some other Papuan societies, the Urapmin people engaged in cannibalism in war. Notably, the Urapmin also had a system of food taboos wherein dogs could not be eaten and they had to be kept from breathing on food, unlike humans who could be eaten and with whom food could be shared.
See also
Alexander Pearce, alleged Irish cannibal
Alferd Packer, an American prospector, accused but not convicted of cannibalism
Androphagi, an ancient nation of cannibals
Asmat people, a Papua group with a reputation of cannibalism
Cannibal film
Cannibalism in literature
Cannibalism in popular culture
Cannibalism in poultry
Chijon family, a Korean gang that killed and ate rich people
Child cannibalism, children as victims of cannibalism (in myth and reality)
Custom of the sea, the practice of shipwrecked survivors drawing lots to see who would be killed and eaten so that the others might survive
Homo antecessor, an extinct human species providing some of the earliest known evidence for human cannibalism
Human fat, applied in European pharmacopeia between the 16th and the 19th centuries
Human placentophagy, the consumption of the placenta (afterbirth)
Idi Amin, Ugandan dictator who is alleged to have consumed humans
Issei Sagawa, a Japanese man who became a minor celebrity after killing and eating another student
List of incidents of cannibalism
Manifesto Antropófago (Cannibal Manifesto in English), a Brazilian poem
Medical cannibalism, the consumption of human body parts to treat or prevent diseases
Mummia, medicine made from human mummies
Noida serial murders, a widely publicized instance of alleged cannibalism in India
Placentophagy, the act of mammals eating the placenta of their young after childbirth
Pleistocene human diet, the eating habits of human ancestors in the Pleistocene
R v Dudley and Stephens, an important trial of two men accused of shipwreck cannibalism
Self-cannibalism, the practice of eating oneself (also called autocannibalism)
Traditional Chinese medicines derived from the human body
Transmissible spongiform encephalopathy, a progressive condition that affects the brain and nervous system of many animals, including humans
Vorarephilia, a sexual fetish and paraphilia where arousal results from the idea of devouring others or being devoured
Wari’ people, an Amerindian tribe that practised cannibalism
References
Further reading
Berdan, Frances F. The Aztecs of Central Mexico: An Imperial Society. New York 1982.
Earle, Rebecca. The Body of the Conquistador: Food, Race, and the Colonial Experience in Spanish America, 1492–1700. New York: Cambridge University Press 2012.
Jáuregui, Carlos. Canibalia: Canibalismo, calibanismo, antropofagía cultural y consumo en América Latina. Madrid: Vervuert 2008.
Lestringant, Frank. Cannibals: The Discovery and Representation of the Cannibal from Columbus to Jules Verne. Berkeley and Los Angeles: University of California Press 1997.
Ortiz de Montellano, Bernard R. Aztec Medicine, Health, and Nutrition. New Brunswick 1990.
Read, Kay A. Time and Sacrifice in the Aztec Cosmos. Bloomington 1998.
Sahlins, Marshall. "Cannibalism: An Exchange." New York Review of Books 26, no. 4 (March 22, 1979).
Schutt, Bill. Cannibalism: A Perfectly Natural History. Chapel Hill: Algonquin Books 2017.
External links
Is there a relation between cannibalism and amyloidosis?
All about Cannibalism: The Ancient Taboo in Modern Times (Cannibalism Psychology) at CrimeLibrary.com
Cannibalism, Víctor Montoya
The Straight Dope: notes arguing that routine cannibalism is a myth
Did a mob of angry Dutch kill and eat their prime minister? (from The Straight Dope)
Harry J. Brown, 'Hans Staden among the Tupinambas.'
Chemical element
A chemical element is a chemical substance that cannot be broken down into other substances. The basic particle that constitutes a chemical element is the atom, and each chemical element is distinguished by the number of protons in the nuclei of its atoms, known as its atomic number. For example, oxygen has an atomic number of 8, meaning that each oxygen atom has 8 protons in its nucleus. This is in contrast to chemical compounds and mixtures, which contain atoms with more than one atomic number.
Almost all of the baryonic matter of the universe is composed of chemical elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water.
The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold (though the concept of a chemical element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones.
By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study.
Description
The lightest chemical elements are hydrogen and helium, both created by Big Bang nucleosynthesis during the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay.
Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.
There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium.
The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed, and if present in novae have been in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements.
Lists of the elements are available by name, atomic number, density, melting point, boiling point, and symbol, as well as by the ionization energies of the elements. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures).
Atomic number
The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element.
The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element.
The symbol for atomic number is Z.
Isotopes
Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to 12C, 13C, and 14C. Carbon in everyday life and in chemistry is a mixture of 12C (about 98.9%), 13C (about 1.1%) and about 1 atom per trillion of 14C.
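As a rough illustration of this bookkeeping, the minimal Python sketch below (with function and variable names chosen only for illustration, not drawn from any particular source) derives the isotope labels from the rule that the element is fixed by the proton count alone, while the mass number is the proton count plus the neutron count:
def mass_number(protons, neutrons):
    # The mass number A is the total nucleon count: protons plus neutrons.
    return protons + neutrons
# Carbon is defined by having 6 protons; its three main isotopes differ only in neutron count.
for neutrons in (6, 7, 8):
    label = f"carbon-{mass_number(6, neutrons)}"
    print(label, f"(6 protons, {neutrons} neutrons)")
# Prints carbon-12, carbon-13 and carbon-14, matching the notation 12C, 13C and 14C used above.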
Most (54 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass, enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable.
All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements by emitting an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed "stable" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically, the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82.
Of the 80 elements with at least one stable isotope, 26 have only a single stable isotope. The mean number of stable isotopes for these 80 elements is 3.1 stable isotopes per element. The largest number of stable isotopes observed for a single element is 10 (for tin, element 50).
Isotopic mass and atomic mass
The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. 238U). The mass number is always a whole number and has units of "nucleons". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons).
Whereas the mass number simply counts the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a particular isotope (or "nuclide") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), also called unified atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs slightly in value from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and the electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is 12C, which has a mass of exactly 12 Da because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state.
The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element.
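As a rough check on this averaging, the rounded figures quoted above for chlorine can be combined into a weighted mean; the Python sketch below uses those approximate abundances, not the precise IUPAC evaluation.

```python
# Weighted average of isotopic masses, using the rounded values quoted above
# (about 76% chlorine-35 and 24% chlorine-37).
chlorine_isotopes = [
    (34.969, 0.76),  # chlorine-35: atomic mass in Da, approximate abundance
    (36.966, 0.24),  # chlorine-37
]

atomic_weight = sum(mass * abundance for mass, abundance in chlorine_isotopes)
print(f"Approximate standard atomic weight of chlorine: {atomic_weight:.2f}")
# ~35.45, close to the tabulated 35.453; each isotopic mass is within 1% of its mass number.
```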
Chemically pure and isotopically pure
Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope.
For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However, it is not isotopically pure, since ordinary copper consists of two stable isotopes, 69% 63Cu and 31% 65Cu, with different numbers of neutrons. In contrast, a pure gold ingot would be both chemically and isotopically pure, since ordinary gold consists of only one isotope, 197Au.
Allotropes
Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'.
The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically 298.15 K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes.
Properties
Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins.
General properties
Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity; nonmetals, which do not; and a small group, the metalloids, which have intermediate properties and often behave as semiconductors.
A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals.
States of matter
Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively.
Melting and boiling points
Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations.
Densities
The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm3). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.
When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm3, respectively.
Crystal structures
The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures.
Occurrence and origin on Earth
Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of human-made nuclear reactions.
Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements.
No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10^35 to 10^189 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.
Periodic table
The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021.
Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.
Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering.
Nomenclature and symbols
The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols.
Atomic numbers
The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.
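The caveat about atomic weight not tracking atomic number can be seen in a few well-known neighbouring pairs; the sketch below uses commonly tabulated standard atomic weights, rounded, purely for illustration.

```python
# Neighbouring element pairs in which the element with the higher atomic number
# has the LOWER standard atomic weight (rounded, commonly tabulated values).
pairs = [
    (("Ar", 18, 39.95), ("K", 19, 39.10)),
    (("Co", 27, 58.93), ("Ni", 28, 58.69)),
    (("Te", 52, 127.60), ("I", 53, 126.90)),
]

for (sym1, z1, w1), (sym2, z2, w2) in pairs:
    assert z1 < z2 and w1 > w2  # lighter by atomic number, yet heavier per atom
    print(f"{sym1} (Z={z1}, weight {w1}) outweighs {sym2} (Z={z2}, weight {w2})")
```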
Element names
The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements either for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen", while English and some romance languages use "sodium" for "natrium" and "potassium" for "kalium", and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen".
For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over the British "sulphur". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names.
According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below).
In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy).
Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950).
Chemical symbols
Specific chemical elements
Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules.
The current system of chemical notation was invented by Berzelius. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets.
The first of these symbols were intended to be fully universal. Since Latin was the common language of science at that time, they were abbreviations based on the Latin names of metals. Cu comes from cuprum, Fe comes from ferrum, Ag from argentum. The symbols were not followed by a period (full stop) as with abbreviations. Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English. For example, sodium has the chemical symbol 'Na' after the Latin natrium. The same applies to "Fe" (ferrum) for iron, "Hg" (hydrargyrum) for mercury, "Sn" (stannum) for tin, "Au" (aurum) for gold, "Ag" (argentum) for silver, "Pb" (plumbum) for lead, "Cu" (cuprum) for copper, and "Sb" (stibium) for antimony. "W" (wolfram) for tungsten ultimately derives from German, "K" (kalium) for potassium ultimately from Arabic.
Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. For example, Germans in the past have used "J" (for the alternate name Jod) for iodine, but now use "I" and "Iod".
The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es.
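This capitalization rule lends itself to a simple mechanical check. The function below is a hypothetical helper that tests only the typographical pattern (one capital letter followed by lowercase letters), not whether a symbol is actually assigned to an element.

```python
import re

# Hypothetical helper: one capital letter followed by up to two lowercase letters
# (the three-letter form allows temporary systematic symbols). It does not check
# whether the symbol is actually assigned to an element.
SYMBOL_PATTERN = re.compile(r"[A-Z][a-z]{0,2}")

def looks_like_element_symbol(text: str) -> bool:
    return SYMBOL_PATTERN.fullmatch(text) is not None

print(looks_like_element_symbol("Cf"))  # True  (californium)
print(looks_like_element_symbol("Es"))  # True  (einsteinium)
print(looks_like_element_symbol("CF"))  # False (second letter must be lower case)
```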
General chemical symbols
There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, although it is also the symbol of yttrium. "Z" is also frequently used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal.
At least two additional, two-letter generic chemical symbols are also in informal usage, "Ln" for any lanthanide element and "An" for any actinide element. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol "Rg" has now been assigned to the element roentgenium.
Isotope symbols
Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example 12C and 235U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used.
As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for 1H (protium), D for 2H (deuterium), and T for 3H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D2O instead of 2H2O.
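These interchangeable notations can be converted mechanically; the small sketch below parses the hyphenated form used in running text into its label and mass number. The helper name is illustrative only.

```python
# Hypothetical helper: split a hyphenated isotope name such as "uranium-235"
# or "U-235" into its label and mass number.
def parse_isotope(notation: str):
    label, mass = notation.rsplit("-", 1)
    return label, int(mass)

print(parse_isotope("carbon-12"))  # ('carbon', 12)
print(parse_isotope("U-235"))      # ('U', 235)
```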
Origin of the elements
Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy).
The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen.
During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, 1H) and helium-4 (4He), as well as a smaller amount of deuterium (2H) and very minuscule amounts (on the order of 10^−10) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% 1H, 25% 4He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means.
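Because a helium-4 nucleus is about four times as massive as a hydrogen-1 nucleus, the quoted mass fractions imply a much larger ratio by number of atoms; the arithmetic is sketched below using the rounded 75%/25% figures.

```python
# Convert approximate primordial mass fractions into a number ratio of atoms,
# using the rounded mass fractions above and integer mass numbers (1 for 1H, 4 for 4He).
mass_fraction_h = 0.75
mass_fraction_he = 0.25

relative_number_h = mass_fraction_h / 1   # atoms per unit mass, up to a common factor
relative_number_he = mass_fraction_he / 4

ratio = relative_number_h / relative_number_he
print(f"Roughly {ratio:.0f} hydrogen atoms for every helium atom")  # ~12
```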
On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (14C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (40Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (40K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes.
In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. To date, these experiments have produced all elements up to atomic number 118.
Abundance
The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance.
The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas. Iron-56 is particularly common, since it is the most stable element that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number.
The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass) more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core.
The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells.
History
Evolving definitions
The concept of an "element" as an undivisible substance has developed through three major historical phases: Classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions.
Classical definitions
Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science.
The term 'elements' (stoicheia) was first used by the Greek philosopher Plato in about 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).
Aristotle, writing in the 4th century BCE, also used the term stoicheia and added a fifth element called aether, which formed the heavens. Aristotle defined an element as one of the simple bodies into which other bodies can be decomposed and which is not itself divisible into bodies of a different kind.
Chemical definitions
In 1661, Robert Boyle proposed his theory of corpuscularism which favoured the analysis of matter as constituted by irreducible units of matter (atoms) and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. The first modern list of chemical elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained thirty-three elements, including light and caloric. By 1818, Jöns Jakob Berzelius had determined atomic weights for forty-five of the forty-nine then-accepted elements. Dmitri Mendeleev had sixty-six elements in his periodic table of 1869.
From Boyle until the early 20th century, an element was defined as a pure substance that could not be decomposed into any simpler substance. Put another way, a chemical element cannot be transformed into other chemical elements by chemical processes. Elements during this time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques.
Atomic definitions
The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for an atom's atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons per atomic nucleus). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers), and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10^−14 seconds it takes the nucleus to form an electronic cloud.
By 1914, seventy-two elements were known, all naturally occurring. The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D.I. Mendeleev, the first to arrange the elements in a periodic manner.
Discovery and recognition of various elements
Ten materials familiar to various prehistoric cultures are now known to be chemical elements: carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances prior to 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750.
Most of the remaining naturally occurring chemical elements were identified and characterized by 1900, including:
Such now-familiar industrial materials as aluminium, silicon, nickel, chromium, magnesium, and tungsten
Reactive metals such as lithium, sodium, potassium, and calcium
The halogens fluorine, chlorine, bromine, and iodine
Gases such as hydrogen, oxygen, nitrogen, helium, argon, and neon
Most of the rare-earth elements, including cerium, lanthanum, gadolinium, and neodymium
The more common radioactive elements, including uranium, thorium, radium, and radon
Elements isolated or produced since 1900 include:
The three remaining undiscovered regularly occurring stable natural elements: hafnium, lutetium, and rhenium
Plutonium, which was first produced synthetically in 1940 by Glenn T. Seaborg, but is now also known from a few long-persisting natural occurrences
The three incidentally occurring natural elements (neptunium, promethium, and technetium), which were all first produced synthetically but later discovered in trace amounts in certain geological samples
Four scarce decay products of uranium or thorium (astatine, francium, actinium, and protactinium), and
Various synthetic transuranic elements, beginning with americium and curium
Recently discovered elements
The first transuranium element (element with atomic number greater than 92) discovered was neptunium in 1940. Since 1999, claims for the discovery of new elements have been considered by the IUPAC/IUPAP Joint Working Party. As of January 2016, all 118 elements have been confirmed as discovered by IUPAC. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the atomic symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element believed to have been synthesized to date is element 118, oganesson, on 9 October 2006, by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, IUPAC officially recognized the names of the four newest chemical elements, with atomic numbers 113, 115, 117, and 118.
List of the 118 known chemical elements
The following sortable table shows the 118 known chemical elements.
Atomic number, Element, and Symbol all serve independently as unique identifiers.
Element names are those accepted by IUPAC.
Block indicates the periodic table block for each element: red = s-block, yellow = p-block, blue = d-block, green = f-block.
Group and period refer to an element's position in the periodic table. Group numbers here show the currently accepted numbering; for older numberings, see Group (periodic table).
See also
Biological roles of the elements
Chemical database
Discovery of the chemical elements
Element collecting
Fictional element
Goldschmidt classification
Island of stability
List of nuclides
List of the elements' densities
Mineral (nutrient)
Periodic Systems of Small Molecules
Prices of chemical elements
Systematic element name
Table of nuclides
Timeline of chemical element discoveries
Roles of chemical elements
References
Further reading
IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). XML on-line corrected version created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins
External links
Videos for each element by the University of Nottingham
"Chemical Elements", In Our Time, BBC Radio 4 discussion with Paul Strathern, Mary Archer and John Murrell (25 May 2000)